Introduction
This chapter explores the challenges and best practices concerning the role of Artificial Intelligence (AI) in Application Security (AppSec), product security, and the broader scope of software security. It delves into the increasing prevalence of AI in today's cybersecurity landscape.
Although AI is a well-established concept within technology, its relevance has risen dramatically over the past three years due to advancements in Large Language Models (LLMs) and precision AI, driven by a significant reduction in compute costs. These developments are fundamentally transforming the landscape of AppSec and product security, offering new perspectives and strategies.
AI Is Transforming Cybersecurity
Before exploring the role AI will play in AppSec and software development, it's crucial to understand how AI is currently changing the cybersecurity landscape. There are three primary ways in which AI is transforming cybersecurity:
- AI for Security
This first category covers AI technologies leveraged to enhance the workflow of existing Security Operations Centers (SOCs), augment security analytics capabilities, and automate many of the tasks behind the ever-growing alert fatigue burdening security teams. In this area, we are also seeing AI being used to improve threat detection and incident response times.
- Security for AI
The technologies in this category are used to protect and enforce guardrails on AI systems, particularly those built around Large Language Models (LLMs). AI-driven applications that cut across business functions such as marketing and customer service need to be secured against both insider attacks and external breaches. Organizations need to develop robust cybersecurity measures focused on securing the data used to train these models, preventing data poisoning, model bias, and prompt injection attacks, and ensuring the ethical deployment of AI technologies.
- AI Security for Application Security
AI technologies in this category are employed throughout the software development cycle, from code to deployment. Solutions here ensure that AI-generated code is secure from the start and is continually tested for errors, whether developer- or machine-generated, to protect against vulnerabilities at runtime. The rest of this chapter focuses on this area.
The Role of AI in Application Security (AppSec)
AI is not only changing software development practices but also the pace at which AppSec needs to evolve to keep up with these changes. To understand how AI is going to change AppSec, we first need to understand the role that AI will play in software development.
AI-Driven Software Development
The fundamental transformation AI brings to application development centers on the introduction of AI copilots and coding assistants equipped with sophisticated coding capabilities. An AI copilot is a specialized tool that aids software developers by offering code suggestions, assisting with debugging, and even generating code snippets. Using machine learning models, these tools grasp the context of the ongoing coding project and provide intelligent code completion suggestions that enhance both productivity and code quality. Developers are increasingly embracing these AI-driven tools as collaborative partners in programming, which has led to a significant increase in the use of auto-generated code in contemporary applications. Adoption is projected to grow as LLMs improve: an estimated 5% of code written today is generated by AI, a share forecast to reach 25% in 2024 and as much as 50% by 2025 as coding assistants become mainstream, driving significant productivity gains and broader corporate adoption.
Challenges Within Application Security Due to AI
As the rapid proliferation of auto-generated code introduces more AI-driven applications and products, application security is going to face a number of challenges. Some of them include:
- Malicious Code
There is a risk of malicious code being ingested into applications during the development lifecycle if coding assistants are compromised. This could result from code poisoned by insiders or from compromised third-party software packages and libraries.
- Data Integrity Issues
Another major challenge concerns the quality and quantity of the training data used to build these models. Data integrity, data leakage, and data poisoning are significant concerns: training data can be corrupted, leading to flawed outputs or sensitive data exposure, especially through third-party interactions. Organizations will have to navigate these data concerns while avoiding two failure modes: reduced oversight that lets issues slip through, and validation processes so cumbersome that they negate the productivity gains. Keeping humans in the loop to manage model outputs is essential (a minimal sketch of such a gate follows this list).
- Preventing “Runaway” AI
The metaphor of a runaway truck illustrates the danger of AI systems operating beyond their intended scope without adequate safeguards, potentially leading to catastrophic outcomes. Companies need to apply the principles of safe, secure, and trustworthy development to prevent runaway AI. Ensuring that AI systems operate within their intended parameters without causing unintended harm is crucial. This involves mechanisms to prevent AI systems from going rogue, safeguarding the data they use, and keeping their operations transparent and under human oversight.
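To make the human-in-the-loop principle above concrete, here is a minimal sketch of a merge gate that holds AI-generated changes for explicit human approval. The `Change` fields and the gating rules are illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class Change:
    """A proposed code change (illustrative schema)."""
    diff: str
    ai_generated: bool    # True when the change came from a coding assistant
    passed_scans: bool    # result of automated security scanning
    human_approved: bool  # set by a human reviewer, never by the model

def may_merge(change: Change) -> bool:
    """Gate: AI-generated changes need clean scans AND explicit human sign-off."""
    if not change.passed_scans:
        return False
    if change.ai_generated and not change.human_approved:
        return False  # hold for review: the model cannot approve its own output
    return True

# An AI-generated change with clean scans still waits for a reviewer.
print(may_merge(Change(diff="...", ai_generated=True,
                       passed_scans=True, human_approved=False)))  # False
```

The key design choice is that approval flows only one way: automated checks can block a change, but only a human can clear an AI-generated one.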
Categories of Application Security Solutions
Application security teams must leverage AI technologies to secure and mitigate many of the risks mentioned above. We believe there are three ways that AI is significantly reshaping the AppSec tooling ecosystem:
- Code Scanning, Testing & Detection Tools
As AI becomes adept at generating code, software security testing and scanning technologies such as Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) must evolve. AI is expected to enhance existing code scanning capabilities to identify weaknesses and exposed secrets in AI-generated code (a minimal scanning example appears after this list).
- Response Tools
AI will aid in correlating and prioritizing security issues and findings across tools through advanced AI-powered Application Security Posture Management (ASPM) solutions. It is already helping organizations focus on high-ROI activities when investigating vulnerabilities and enabling efficient remediation strategies. For example, correlating security findings across DAST and SAST tools can reduce mean time to remediation (MTTR); see the correlation sketch after this list.
- Remediation Tools
AI-based remediation solutions have two major components. First, based on the context of findings from scanning or detection tools, AI will support the creation of targeted educational content for developers looking to fix a security vulnerability. Second, auto-remediation of findings will create code fixes and remediation steps within the security tools themselves, for example, rolling out a patch in a cloud workload as soon as a misconfiguration has been identified (see the remediation sketch after this list). Remediation tools will enable the industry to be more proactive in closing weaknesses in applications.
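For the Code Scanning, Testing & Detection Tools item above, the sketch below shows the simplest form of such a check: flagging hard-coded secrets in source text with regular expressions. Production scanners add data-flow analysis and entropy checks; the two patterns here are illustrative assumptions.

```python
import re

# Illustrative rules only; real scanners combine many patterns with entropy checks.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Hard-coded password": re.compile(r"""password\s*=\s*['"][^'"]+['"]""", re.IGNORECASE),
}

def scan_source(text: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) for each line matching a secret pattern."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

print(scan_source('db_password = "hunter2"'))  # [(1, 'Hard-coded password')]
```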
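For the Response Tools item, this sketch illustrates the correlation idea: findings from different scanners are grouped by CWE and affected component, so a vulnerability confirmed by both SAST and DAST surfaces as one higher-confidence issue. The field names and grouping key are assumptions for illustration.

```python
from collections import defaultdict

# Hypothetical normalized findings from two tools (field names are assumptions).
findings = [
    {"tool": "sast", "cwe": "CWE-89", "component": "api/users.py", "severity": 9},
    {"tool": "dast", "cwe": "CWE-89", "component": "api/users.py", "severity": 8},
    {"tool": "sast", "cwe": "CWE-798", "component": "config/settings.py", "severity": 6},
]

def correlate(findings: list[dict]) -> list[dict]:
    """Merge findings sharing a CWE and component; keep the highest severity."""
    groups = defaultdict(list)
    for f in findings:
        groups[(f["cwe"], f["component"])].append(f)
    return [
        {"cwe": cwe, "component": comp,
         "tools": sorted({f["tool"] for f in group}),
         "severity": max(f["severity"] for f in group)}
        for (cwe, comp), group in groups.items()
    ]

# The SQL injection seen by both SAST and DAST becomes one higher-confidence issue.
for issue in correlate(findings):
    print(issue)
```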
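And for the Remediation Tools item, a rule-based skeleton that maps a finding type to remediation guidance and a candidate fix. The playbook entries here are hand-written assumptions; real products increasingly generate such fixes dynamically with LLMs.

```python
# Hand-written playbook (illustrative; real tools generate fixes dynamically).
PLAYBOOK = {
    "CWE-89": {
        "guidance": "Use parameterized queries instead of string concatenation.",
        "fix_template": 'cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))',
    },
    "CWE-798": {
        "guidance": "Move the secret to a vault or environment variable and rotate it.",
        "fix_template": 'password = os.environ["DB_PASSWORD"]',
    },
}

def remediation_plan(cwe: str) -> dict:
    """Return guidance and a candidate fix, or a manual-review fallback."""
    return PLAYBOOK.get(cwe, {"guidance": "No playbook entry; route to manual review.",
                              "fix_template": None})

print(remediation_plan("CWE-89")["guidance"])
```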
Practitioner Perspectives: The Future of Application and Product Security
The integration of AI into application security is reshaping the landscape of cybersecurity, demanding new strategies and innovations from security leaders and practitioners. As we look to the future, several key areas will define the evolution of application and product security. The following are practical guidelines for AppSec leaders:
- Human-AI Collaboration in AppSec
AppSec leaders need to ensure their developers can work side by side with AI in securing applications. It is essential to recognize that these AI tools make mistakes; LLMs are not perfect. The rapid evolution of AI does not replace the need for human expertise in AppSec; instead, it shifts the focus from mundane tasks to higher-value activities. While AI can provide significant assistance, a "human in the loop" is essential to oversee AI's output and correct its errors, ensuring a balanced approach to security.
- Security Governance for AI
With AI's expanding role in applications, there is a critical need for comprehensive security governance frameworks that can keep pace with AI risks. These frameworks should provide clear guidelines for the ethical use of AI, ensure compliance with evolving regulatory requirements, and establish standards for data protection and privacy. Security governance must also address the unique challenges posed by AI, such as the potential for data poisoning, model bias, and the emergence of new attack vectors that exploit weaknesses within these AI systems.
- Measure Internal Processes & Collaborate with Vendors
The hype around AI is expected to give way to more practical applications in 2024 as organizations seek to address real bottlenecks in the software development life cycle. Before implementing any AI solution internally, organizations should assess whether it provides a substantial improvement over existing methods. They should also be open to collaborating with vendors already exploring these technologies, which can be more fruitful than developing solutions in-house without sufficient expertise.
- Redesigning AI AppSec Teams
Organizations need to rethink their current team structures. Future AI and AppSec teams must evolve to integrate AI specialists, including data scientists, AI security experts, and ethical AI advisors. These teams need to be more modular and agile, capable of rapidly adapting to new threats and technologies. This redesign will likely include new roles and responsibilities focused on overseeing AI-driven security processes, ensuring that AI systems are not only effective but also operate within ethical and regulatory boundaries. Modular AI-driven security teams will enable organizations to deploy rapid updates and adjust in response to emerging threats, keeping security measures as dynamic and adaptable as the AI technologies they aim to secure.
- Develop an AI-Integrated Red Team Framework
As organizations embed AI into their AppSec programs, they must also integrate sophisticated red teaming practices. This strategy aligns with the mandates outlined by the National Institute of Standards and Technology (NIST) and adheres to the Executive Orders (EOs) issued by the White House. Leaders should start by establishing a framework that guides attack simulations and defines how findings from those simulations are documented and fed back into the continuous improvement of security practices. Companies should conduct regular red teaming exercises to stay abreast of emerging attack vectors, and implement tools that can dynamically adapt their attack strategies in real time.
- Collaborative Security Efforts
The complexity of AI-driven applications necessitates collaborative efforts that bring together developers, security teams, regulatory bodies, and industry groups. These collaborations help standardize security practices across organizations and the broader industry, and are particularly important in countering sophisticated AI cyber threats such as deepfakes, automated phishing attacks, and AI-driven malware.
- Anticipating Future Challenges
Future application security solutions will need to address traditional security concerns while also anticipating the unique challenges posed by AI. This includes developing AI-specific threat detection systems, creating AI-driven security protocols, and employing AI to simulate potential security scenarios to identify vulnerabilities before attackers can exploit them. By enhancing AI's security capabilities, organizations can turn AI from a potential liability into a powerful asset in their cybersecurity arsenal.
- Lead AI Transformation from a Security-First Mindset
Remember how digital-first companies like Netflix and Amazon transformed the way business is done? They redefined how an enterprise is built by using a "digital first" mindset to redesign every workflow that was previously non-digital. Similarly, AI-first companies will transform the way business is done, whether in hiring, training, writing emails, brainstorming, writing code, testing code, reviewing code, securing software, writing documentation, sales, marketing, or customer success. That is, AI will be infused at the core of every possible workflow. These companies will transform businesses the way Netflix did to Blockbuster and Amazon did to Barnes & Noble and every other type of retailer and data center provider. Thanks to the accelerating power of AI, they will bring the same kind of existential disruption, but far faster than it took Netflix to put Blockbuster out of business. The companies that successfully transform into AI-first companies will have a competitive advantage, and the security leaders who adapt to this change will play a key role in making sure the transformation is not only secure but also one that stands the test of time.
Summary
AI is here to stay within software development and application security. The world is not going back to legacy solutions, and leaders must embrace this change. The future of application and product security is intrinsically tied to the integration and management of AI technologies. As software development practices such as AI-assisted code generation continue to evolve rapidly, application security technologies must keep pace. Enterprises should consider an Application Security Posture Management (ASPM) and AppSec platform that supports both traditional and AI-centric development environments. These solutions should provide a neutral governance layer that can operate across multiple tool sets.
This evolving landscape requires a concerted effort from the cybersecurity community to develop thoughtful, effective security solutions that address the unique challenges of AI. Through collaboration, innovation, and adherence to ethical standards, the integration of AI into application security will define the next frontier of cybersecurity.