
Artificial Intelligence (AI) has become a ubiquitous topic of discussion, transitioning from its portrayal in classic sci-fi films like "The Terminator" and "2001: A Space Odyssey" to the practical tools we encounter in our daily lives, such as ChatGPT. However, amid the excitement surrounding AI, widespread misconceptions persist about its capabilities and limitations.

This is particularly noticeable in the domain of application security, where an increasing number of AI tools are emerging to write and evaluate code for developers. It is imperative to establish criteria for trusting the outputs of AI in this context and to identify areas where human intervention remains necessary.

These topics are discussed at length in "Discerning reality from the hype around AI," a recent SDTimes article by David Rubinstein. Rubinstein interviews three application security experts from HCLSoftware to better understand what AI can realistically do to help with secure coding, and to learn about HCLSoftware’s innovative approach to the use of AI.

Kristofer Duer, Lead Cognitive Researcher at HCLSoftware, contends that “[AI] doesn't have discernment yet… What it can do well is pattern matching; it can pluck out the commonalities in collections of data." Organizations are using generative AI and large language models to match patterns and make large amounts of data more easily consumable by humans. ChatGPT, for instance, can now be used to both write code and identify security issues in code it is given to review.
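To make that idea concrete, here is a minimal sketch of the kind of LLM-assisted code review Duer describes, using the OpenAI Python client as an example. The model name, prompt, and vulnerable snippet are illustrative assumptions and have nothing to do with HCL AppScan itself.

```python
# Minimal sketch (illustrative assumptions): ask an LLM to review a code
# snippet for security issues using the OpenAI Python client.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

snippet = """
def find_user(conn, username):
    # String concatenation into SQL -- a classic injection risk
    return conn.execute(
        "SELECT * FROM users WHERE name = '" + username + "'"
    ).fetchone()
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are an application security reviewer."},
        {"role": "user", "content": "List any security issues in this code:\n" + snippet},
    ],
)

print(response.choices[0].message.content)
```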

Navigating Trust in AI: The Confidence Dilemma

In the realm of Artificial Intelligence (AI), trust is paramount. Yet even as AI excels at identifying problems, a significant concern arises: when AI errs, it does so with unwavering confidence. As Duer puts it, “... when ChatGPT is wrong, it's confident about being wrong.” That is why over-reliance on generative AI by developers is a real concern for Colin Bell, CTO of HCL AppScan at HCLSoftware. As more developers use tools like Meta's Code Llama and GitHub's Copilot to develop applications, exponentially more code is being written without anyone asking whether that code can be trusted to be secure. Bell cautions that “...AI is probably creating more work for application security, because there's more code getting generated."

Everyone interviewed agreed that there continues to be a real need for humans to audit the code throughout the software development lifecycle. While AI can assist in the writing of code, one of its more impactful security uses is in pointing out the places in the code where human security experts should focus their time and attention. This assistance can add up to significant time and resource savings.

AI and HCL AppScan

The two AI processes used by HCLSoftware in the HCL AppScan portfolio were developed over many years with this goal in mind. Intelligent Finding Analytics (IFA) limits the number of findings presented to the user. Intelligent Code Analytics (ICA) helps determine the likely security characteristics of methods and APIs.

IFA, for example, has been trained to look at AppSec scan results with the same criteria as a security expert and ignore the results that a human tester would find boring (results that don’t represent real risk to the application). In this way, the AI automatically and dramatically reduces the number of findings so that humans can focus on triaging only the most critical vulnerabilities. Duer said, “[IFA] automatically saves real humans countless hours of work. In one of our more famous examples, we took an assessment with over 400,000 findings down to roughly 400 a human would need to review.”
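Conceptually, this kind of AI-assisted triage acts as a filter over raw scan output, keeping only the findings a human expert should spend time on. The sketch below is a hypothetical illustration of that idea; the Finding structure and the looks_exploitable classifier are invented for this example and do not reflect the actual IFA implementation.

```python
# Hypothetical sketch of AI-assisted triage over static-analysis findings.
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str         # e.g. "SQL Injection"
    location: str     # file and line the scanner flagged
    trace: list[str]  # data-flow path from source to sink

def looks_exploitable(finding: Finding) -> bool:
    """Stand-in for a trained triage model.

    A real system would score each finding's data-flow trace the way a
    human reviewer would; here we simply keep traces that reach a sink
    without passing through a sanitizer.
    """
    return not any("sanitize" in step for step in finding.trace)

def triage(findings: list[Finding]) -> list[Finding]:
    # Keep only findings worth a human expert's attention.
    return [f for f in findings if looks_exploitable(f)]

raw = [
    Finding("SQL Injection", "orders.py:42",
            ["request.args", "build_query", "db.execute"]),
    Finding("SQL Injection", "reports.py:17",
            ["request.args", "sanitize_input", "db.execute"]),
]

print(f"{len(raw)} raw findings -> {len(triage(raw))} for human review")
```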

Auto-Remediation: The Holy Grail

Significant inroads are also being made into using AI to fix vulnerabilities in code, known as auto-remediation. Rob Cuddy, Customer Experience Executive at HCLSoftware, is excited about the possibilities this technology presents but expressed concerns over liability that mirrored his colleagues’ broader trust issues. "Let's say you're an auto-remediation vendor, and you're supplying fixes and recommendations, and now someone adopts those into their code, and it's breached. Whose fault is it?”

Much of this conversation about AI and trust reflects a broader shift organizations are making toward application security posture management (ASPM). The key question in ASPM is how to most effectively manage risk across the entire software landscape. How AI is implemented going forward will be shaped by the balance between efficiency and trust, and by how well it fits into this risk-management model.

Read the complete article here.

Visit HCL AppScan to learn more about the AI and Machine Learning capabilities available today in the HCL AppScan solutions.
