Artificial Intelligence (AI) has become a ubiquitous topic of discussion, transitioning from its portrayal in classic sci-fi films like "The Terminator" (1984) and "2001: A Space Odyssey" (1968) to the practical tools we encounter in our daily lives, such as ChatGPT. Amid the excitement surrounding AI, however, misconceptions about its capabilities and limitations remain widespread.

This is particularly noticeable in the domain of application security, where an increasing number of AI tools are emerging to write and evaluate code for developers. It is imperative to establish criteria for trusting the outputs of AI in this context and to identify areas where human intervention remains necessary.

These topics are discussed at length in "Discerning reality from the hype around AI," a recent David Rubinstein article in SD Times. Rubinstein interviews three application security experts from HCLSoftware to better understand what AI can realistically do to help with secure coding, and to learn about HCLSoftware's innovative approach to the use of AI.

Kristofer Duer, Lead Cognitive Researcher at HCLSoftware, contends that "[AI] doesn't have discernment yet… What it can do well is pattern matching; it can pluck out the commonalities in collections of data." Organizations are using generative AI and large language models to match patterns and make large amounts of data more easily consumable by humans. ChatGPT, for instance, can now be used both to write code and to identify security issues in code it is given to review.
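For a concrete (if simplified) illustration of that pattern matching, consider the kind of flaw an LLM code reviewer reliably flags: a classic SQL injection. The snippet below is a hypothetical Python example, not tied to any particular tool.

    import sqlite3

    def find_user(conn: sqlite3.Connection, username: str):
        # Vulnerable: string interpolation invites SQL injection,
        # e.g. username = "x' OR '1'='1"
        query = f"SELECT id, email FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safely(conn: sqlite3.Connection, username: str):
        # The fix an AI reviewer typically suggests: a parameterized query
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchall()

Spotting the first pattern and suggesting the second is squarely within today's AI capabilities; judging whether the surrounding application actually exposes that path is where discernment, and the human expert, still comes in.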

Navigating Trust in AI: The Confidence Dilemma

In AI, trust is paramount. Yet even as AI excels at identifying problems, a significant concern arises: when it errs, it does so with unwavering confidence. As Duer puts it, "... when ChatGPT is wrong, it's confident about being wrong." That is why over-reliance on generative AI by developers is a real concern for Colin Bell, CTO of HCL AppScan at HCLSoftware. As more developers use tools like Meta's Code Llama and GitHub's Copilot to develop applications, exponentially more code is being written without anyone asking whether that code can be trusted to be secure. Bell cautions that "...AI is probably creating more work for application security, because there's more code getting generated."

Everyone interviewed agreed that there continues to be a real need for humans to audit the code throughout the software development lifecycle. While AI can assist in the writing of code, one of its more impactful security uses is in pointing out the places in the code where human security experts should focus their time and attention. This assistance can add up to significant time and resource savings.

AI and HCL AppScan

The two AI technologies used by HCLSoftware in the HCL AppScan portfolio were developed over many years with this goal in mind. Intelligent Finding Analytics (IFA) limits the number of findings presented to the user. Intelligent Code Analytics (ICA) helps the user determine the security characteristics of methods and APIs.

IFA, for example, has been trained to evaluate AppSec scan results against the same criteria as a security expert and to ignore the results a human tester would find boring (results that don't represent real risk to the application). In this way, the AI dramatically reduces the number of findings so that humans can focus on triaging only the most critical vulnerabilities. Duer said, "[IFA] automatically saves real humans countless hours of work. In one of our more famous examples, we took an assessment with over 400,000 findings down to roughly 400 a human would need to review."
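To make the idea concrete, here is a minimal sketch of that kind of confidence-based triage. It is purely illustrative: the Finding fields, threshold, and function names are invented for this example and do not reflect HCL AppScan's actual implementation.

    from dataclasses import dataclass

    @dataclass
    class Finding:
        rule: str          # e.g. "SQL Injection"
        location: str      # file and line where the issue was flagged
        risk_score: float  # hypothetical model confidence that the risk is real

    def triage(findings: list[Finding], threshold: float = 0.98) -> list[Finding]:
        # Keep only the findings the model scores as likely real risk;
        # everything below the threshold is set aside as noise.
        return [f for f in findings if f.risk_score >= threshold]

    # An assessment with 400,000 raw findings might shrink to a few
    # hundred a human actually needs to review:
    # to_review = triage(raw_findings)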

Auto-Remediation: The Holy Grail

Significant inroads are also being made into using AI to fix vulnerabilities in code, known as auto-remediation. Rob Cuddy, Customer Experience Executive at HCLSoftware, is excited about the possibilities this technology presents but expresses concerns over liability that mirror his colleagues' broader trust issues: "Let's say you're an auto-remediation vendor, and you're supplying fixes and recommendations, and now someone adopts those into their code, and it's breached. Whose fault is it?"
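One common way teams address that liability question is to keep a human approval gate between AI-suggested fixes and the codebase. Below is a minimal sketch of that pattern; the function and parameter names are hypothetical and not any vendor's API.

    def apply_remediation(source: str, ai_patch: str, approver: str | None) -> str:
        # Refuse to merge an AI-suggested fix without a named human reviewer.
        if approver is None:
            raise PermissionError("AI-suggested fix requires human sign-off")
        # In a real pipeline the patch would also pass code review and tests
        # before replacing the original source.
        return ai_patch

The design choice is deliberate: the AI proposes, but accountability stays with a person, which keeps the liability question answerable.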

Much of this conversation about AI and trust reflects a broader shift organizations are making toward application security posture management (ASPM). The key question in ASPM is how to most effectively manage risk across the entire software landscape. The future implementation of AI will be shaped by balancing efficiency with trust, and by how well AI fits into this risk-management model.

Read the complete article here.

Visit HCL AppScan to learn more about the AI and Machine Learning capabilities available today in the HCL AppScan solutions.
