This is part two of our two-part series on remediation and the use of artificial intelligence (AI). In part one we introduced the first approach to assisting developers with fixing vulnerabilities: traditional remediation and education.

Curated Automatic Fixes

Curated automatic fixes are created by human security experts to address known vulnerabilities found in source code. For simple issues, these fixes can be applied almost instantaneously, even at the level of IDE autocompletion. This approach has been a “tried and true” method, utilized for years to address various security and non-security issues.
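
As a concrete illustration, here is the kind of transformation such a fix typically performs for a SQL injection finding. The snippets below are a hypothetical Python example, not output from any particular tool:

    import sqlite3

    def find_user_vulnerable(conn: sqlite3.Connection, username: str):
        # Flagged pattern: user input concatenated into the SQL string,
        # allowing injection (e.g., username = "' OR '1'='1").
        query = "SELECT id, email FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchone()

    def find_user_fixed(conn: sqlite3.Connection, username: str):
        # Curated fix: a parameterized query. The driver escapes the input,
        # so it can never alter the structure of the statement.
        query = "SELECT id, email FROM users WHERE name = ?"
        return conn.execute(query, (username,)).fetchone()

Because both the flagged pattern and its replacement are defined in advance by an expert, every occurrence of the finding is remediated the same way.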

The advantage of these fixes is that they are tested and verified, providing a reliable way to remediate issues without introducing new problems. The consistency of these fixes ensures that the same issue is always resolved in the same manner, enhancing code quality and maintainability.

Such tools provide not only immediate solutions but also educational value, as they are often accompanied by explanations that describe how the fixes resolve the issues. This dual benefit of remediation and learning helps developers understand and prevent similar issues in the future.

If curated automatic fixes have a downside, it is the challenge of their scalability. Since these fixes are hand-crafted, they require significant effort to implement and test for each language across multiple examples and use cases. This labor-intensive process limits the ability to scale the approach to a broader range of issues and languages. Despite this challenge, the effectiveness and reliability of curated automatic fixes make them a valuable tool in a developer's arsenal, particularly for well-known and recurring issues.

GenAI-Driven Automatic Fixes

The third approach to remediation, and one of the most promising, leverages Generative AI (GenAI) to create on-demand automatic fixes. This technology has the potential to offer exceptional coverage and versatility, providing fixes for issues in virtually any programming language. 

With GenAI, developers can access automated, on-the-fly solutions that adapt to various coding environments and requirements, significantly enhancing the efficiency and effectiveness of the remediation process. The goal is that, with clever prompt engineering, these AI-driven solutions can generate context-aware responses that address specific problems effectively. This approach promises relatively quick implementation and scaling, making it feasible to handle a wide array of coding issues.
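
As a rough sketch of what that prompt engineering might look like, the template below packs the finding type, language, and code into a single request. The FIX_PROMPT template and the send_to_llm helper are illustrative assumptions, not a description of any particular product:

    # Hypothetical sketch of a context-aware remediation prompt.
    FIX_PROMPT = """You are a secure-coding assistant.
    Vulnerability: {vuln_type}
    Language: {language}
    Rewrite only the snippet below to remediate the vulnerability.
    Preserve its behavior and style, and do not add new dependencies.

    {snippet}
    """

    def send_to_llm(prompt: str) -> str:
        # Placeholder for a real model API call; returns the model's reply.
        raise NotImplementedError("wire this to your LLM provider")

    def request_fix(vuln_type: str, language: str, snippet: str) -> str:
        prompt = FIX_PROMPT.format(vuln_type=vuln_type,
                                   language=language,
                                   snippet=snippet)
        return send_to_llm(prompt)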

Security Challenges Resulting from GenAI

GenAI relies on an underlying AI model, such as a large language model (LLM), that will generate an answer even when it doesn't have sufficient information, which can result in "hallucinations": responses that are inaccurate or nonsensical. Because of this, there is no guarantee of the quality of a fix without oversight (a simple validation loop is sketched after the list below):

  • No guarantee that the solution actually fixes the issue
  • No guarantee that the ‘fixed’ code still functions as intended
  • No guarantee that the solution doesn’t introduce new vulnerabilities
  • No guarantee that the fix will be the same every time, even for the exact same issue in the exact same code
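
A minimal sketch of that oversight, assuming a scanner and a test suite that can be driven programmatically (scan_for_findings and run_tests are hypothetical helpers, not a real API):

    # Hypothetical validation loop for a GenAI-generated fix.
    def scan_for_findings(code: str) -> set:
        # Placeholder for a real static-analysis scan; returns finding IDs.
        raise NotImplementedError

    def run_tests(code: str) -> bool:
        # Placeholder for running the project's test suite against the code.
        raise NotImplementedError

    def validate_fix(original: str, fixed: str, issue_id: str) -> bool:
        before, after = scan_for_findings(original), scan_for_findings(fixed)
        if issue_id in after:    # the reported issue must actually be gone
            return False
        if after - before:       # no new findings may be introduced
            return False
        return run_tests(fixed)  # the code must still work as intended

In practice, a fix that fails any of these checks would be routed to a human reviewer rather than applied automatically.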

GenAI results depend on the prompts used, as well as on the quality of the data the GenAI model was trained on. While pre-trained models like ChatGPT or Claude are easy to use, these models aren’t always trained exclusively on code, let alone on verifiably secure code.

Fine-tuning an existing model is difficult and still doesn’t ensure the model ‘understands’ the vulnerability it’s trying to fix. Training a model from scratch is even more difficult, can be extremely costly, and requires enormous amounts of data. Additionally, any trained model would need to be retrained for every newly discovered vulnerability, making scalability nearly impossible.

Choosing the Right Approach to Auto-Remediation

Reviewing the three approaches to remediation and the use of AI, it is clear that while GenAI autofix is a promising direction, there are pitfalls. Organizations still need to maintain manual oversight as a system of checks and balances to ensure they are releasing secure code.

Education, as discussed in part one of this series, is always the best approach to closing the skills gap and making sure that developers are trained to write secure code from the start. And AI has a role to play here as well: HCL AppScan has been exploring the use of GenAI and LLMs to make this education more actionable by providing users with easy-to-understand summaries of more complex remediation advisories.

Despite the time and resources required, compiling a set of curated autofixes is often worth the effort since the resulting fixes can be trusted to address the vulnerabilities without creating additional issues. It is also worth noting that there is a strong use case for GenAI to reduce the workload of generating and testing curated autofixes.

AI, whether embedded in security testing software or as part of developers’ user experience, is here to stay. The potential time and resource savings are enormous. Decision-makers today need to consider the potential risks inherent in some of the newer, cost-saving technologies, and balance those against the adoption of curated tools and approaches that they can trust.

Visit HCL AppScan to learn more about the AI and Machine Learning capabilities available today in our application security testing solutions.
