Artificial intelligence (AI) is all anyone can talk about. With the advent of generative AI (GenAI) and large language models (LLMs), the capabilities of AI extend far beyond automation and data analysis, permeating all aspects of technology, business, and daily life.
GenAI is seen as a game-changer for software developers trying to write ever-increasing amounts of code in shorter and shorter timeframes. GitHub Copilot, Amazon CodeWhisperer, Tabnine, and similar tools can now create AI-generated code from simple prompts, saving developers immense amounts of time. These tools excel at expediting common coding tasks but can introduce security vulnerabilities that may have been present in the training data used for the LLMs.
Defensive and Offensive AI for Application Security
GenAI is unquestionably a powerful tool for increased productivity, no matter how it’s used. In the realm of application security, use cases can often be seen as either “Defensive AI” or “Offensive AI”.
In the first case, Defensive AI is maturing in its ability to sort through AppSec test findings and determine which results are the most “interesting” and require human attention. By filtering out results that are deemed less critical from a security perspective, these tools can save development teams a great deal of time.
Offensive AI is the use of GenAI to orchestrate attacks on running software applications in order to find security flaws that could enable a data breach. There’s a lot of concern over the use of offensive AI by bad actors, including state-sponsored hackers, across a wide variety of attack vectors, such as phishing attacks using AI-generated content. As the saying goes, the best defense is a good offense, so the same AI tools are being used by security researchers to learn more about GenAI capabilities in order to combat attacks more effectively.
Remediation in Application Security
Finding vulnerabilities is a crucial step in maintaining robust application security, but identifying issues is only the beginning—fixing them is equally important. This is where remediation comes in, relevant not only for security vulnerabilities but also for conventional bugs and coding conventions (linting). There are three primary approaches to remediation:
- Traditional Remediation: This education-based approach involves well-written articles, training videos, insightful blog posts, and guidance for developers about different types of issues, with examples of how to fix them across various programming languages and application frameworks.
- Auto-remediation: In this approach, the AppSec software provides developers or security teams with hand-crafted/curated/pre-written automatic fixes for specific types of issues. These fixes are written for specific languages, and their complexity can vary greatly.
- Auto-remediation with GenAI: This approach involves on-demand generation of a code fix by GenAI using an LLM. It’s commonly wrapped in an API that employs clever prompt engineering and frequently requires sending your code snippets to the GenAI platform.
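To make the idea of remediation concrete, here is a minimal sketch of the kind of fix any of these approaches might ultimately deliver: a classic SQL injection flaw and its parameterized-query remediation. The function and table names are hypothetical, used purely for illustration.

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is concatenated directly into the SQL string,
    # so input like "x' OR '1'='1" changes the query's logic.
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_fixed(conn: sqlite3.Connection, username: str):
    # Remediated: a parameterized query keeps data separate from SQL code,
    # which is the standard fix a curated or GenAI-generated patch would apply.
    query = "SELECT id, name FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

Whether the fix arrives via an educational article, a pre-written patch, or an LLM-generated suggestion, the developer still benefits from understanding why the parameterized version is safe.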
Here in part one of our two-part blog post, we explore the education-based approach found in traditional remediation. In part two, you can learn more about auto-remediation using both hand-crafted/curated/pre-written automatic fixes and GenAI-driven automatic fixes.
Traditional Remediation and Education
When developers are educated through comprehensive resources, they not only learn to identify and understand the problems but also grasp the nuances of the solutions. This depth of understanding is crucial, as it equips developers to avoid repeating the same mistakes in the future. Such educational content helps embed security into the developer's skill set, fostering a proactive rather than reactive approach to security issues.
Well-written articles that detail various types of issues and provide examples of how to fix them across different programming languages are arguably one of the best methods for long-term success in application security. Full articles or summaries are sometimes provided as part of the security software's user interface (UI) so that developers can get context for an issue when it is brought to their attention.
Additionally, scalable and comprehensive training modules can help developers to continuously improve their skills and adapt to new security challenges. By making security an inseparable aspect of good coding practice, articles and educational resources contribute to a more robust and resilient development culture.
Just as seasoned developers are expected to produce high-quality, efficient, and scalable code, they should also be expected to ensure their code is secure. Education is the most effective way to achieve this across the varied skill levels within a development organization.
The pros and cons of hand-crafted/curated/pre-written automatic fixes and GenAI-driven automatic fixes, along with a comparison of all three approaches, are discussed in depth in part two of this series.
Visit HCL AppScan to learn more about the AI and Machine Learning capabilities available today in our application security testing solutions.
Start a Conversation with Us
We’re here to help you find the right solutions and support you in achieving your business goals.