Generative AI: A double-edged sword for application security 

Sheldon Mills

October 25, 2024

3 min read

Generative AI (GenAI) burst into the public consciousness in late 2022. By April 2024, 17% of organizations had already introduced GenAI applications into production, with another 38% making significant investments.  

On the one hand, GenAI brings unprecedented opportunities to strengthen and innovate cybersecurity; on the other, it introduces new risks that require cutting-edge solutions. 

IDC predicts that by 2026, 40% of net-new applications will be intelligent, incorporating AI to enhance user experiences and create novel use cases.

Want a quick overview of the challenges and solutions? Check out our infographic: The Peril and Promise of Generative AI in Application Security.

Desire for efficiency vs. increased risk 

There is a growing desire across industries to harness AI to improve organizational efficiency and productivity. GenAI can improve cybersecurity processes, such as automated threat detection, code review, and security testing. However, the same technology presents unique security challenges that traditional methods struggle to address.  

GenAI models operate as black boxes and exhibit highly dynamic behavior. Traditional security tools often rely on understanding the application’s logic to detect anomalies or vulnerabilities, which is challenging with opaque AI models. 

GenAI applications have both a supply chain to be secured and distinct vulnerabilities. Because they rely on large data sources, pretrained models, libraries, and components that are often untraceable, organizations need to adopt a new paradigm to mitigate the risks introduced by AI-powered systems. Compromised data sets, model manipulation, and backdoor attacks through open-source components are just a few examples of vulnerabilities common to Generative AI. 
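To make one of those supply-chain controls concrete, here is a minimal sketch of verifying a downloaded pretrained model against a pinned checksum before it is ever loaded. The model path, digest value, and helper names are illustrative assumptions, not part of any particular product or framework:

```python
import hashlib
from pathlib import Path

# Illustrative placeholders -- in practice the path and pinned digest would
# come from an artifact registry or signed manifest, not hard-coded constants.
MODEL_PATH = Path("models/sentiment-classifier.bin")
EXPECTED_SHA256 = "put-the-pinned-digest-here"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large model artifacts don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to use a pretrained artifact whose digest doesn't match the pinned value."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(
            f"Integrity check failed for {path}: expected {expected_sha256}, got {actual}"
        )

if __name__ == "__main__":
    # Only after verification would the file be handed to the ML framework
    # (for example torch.load or tf.keras.models.load_model), omitted here.
    verify_model_artifact(MODEL_PATH, EXPECTED_SHA256)
```

Pinning and verifying digests for models, data sets, and third-party components, the same way code dependencies are pinned, is only one small piece of the broader AI supply-chain discipline, but it illustrates the shift in mindset these applications require.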

As businesses integrate AI deeper into their operations, they inadvertently expose themselves to new and evolving cyberthreats. With their heavy reliance on large, often sensitive data sets for training, GenAI applications will become prime targets for data breaches. Cyber attackers are exploiting vulnerabilities specific to AI models, such as data poisoning and adversarial attacks, making it clear that AI is both a tool for defense and a target for exploitation. 

For a more detailed analysis of the security challenges unique to Generative AI, check out the full IDC position paper here.

Securing AI while leveraging its power 

Organizations should implement security tools specifically designed to tackle the unique vulnerabilities of GenAI applications. These tools need to identify code patterns that allow malicious inputs to exploit an AI model's behavior, and they must be capable of recognizing and understanding AI and ML libraries and frameworks such as TensorFlow and PyTorch. Furthermore, compliance with AI-related industry standards and regulations, backed by audit trails and documentation, is crucial.  
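As a concrete illustration, the hedged sketch below shows the kind of code pattern such a tool would need to flag: untrusted user input spliced directly into an LLM prompt that also carries privileged instructions. The call_llm and call_llm_chat helpers and the prompt structure are hypothetical stand-ins, not any vendor's API:

```python
# Hypothetical illustration of a prompt-injection-prone pattern that a
# GenAI-aware scanner would need to recognize; call_llm and call_llm_chat
# are stand-ins for whatever LLM client a team actually uses.

SYSTEM_RULES = "You are a support assistant. Never reveal internal account data."

def answer_unsafe(user_question: str) -> str:
    # Risky shape: untrusted input is concatenated directly into the
    # instruction text, so input like "ignore previous instructions and ..."
    # can override SYSTEM_RULES -- the GenAI analogue of string-built SQL.
    prompt = SYSTEM_RULES + "\nUser says: " + user_question + "\nAnswer:"
    return call_llm(prompt)

def answer_safer(user_question: str) -> str:
    # Safer shape: constrain the input and keep it in a clearly separated,
    # data-only role instead of mixing it into the instructions.
    cleaned = user_question.strip()[:500]
    messages = [
        {"role": "system", "content": SYSTEM_RULES},
        {"role": "user", "content": cleaned},
    ]
    return call_llm_chat(messages)

def call_llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for a real LLM client call")

def call_llm_chat(messages: list) -> str:
    raise NotImplementedError("placeholder for a real LLM client call")
```

The specific mitigation matters less than the broader point: an AppSec tool has to understand LLM call sites and how untrusted data flows into prompts, much as classic SAST understands how user input flows into SQL queries.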

Want to dive deeper into how GenAI is reshaping application security? 

Download the full IDC position paper or watch our on-demand webinar with Research Manager Katie Norton, where you'll learn exactly how to protect your organization from the unique vulnerabilities posed by GenAI! 


Sheldon Mills

Sheldon Mills is a Senior Product Marketing Manager with Fortify for OpenText Cybersecurity. Whether it's application security by day or co-hosting his podcast on habit building by night, he has a passion for helping people solve problems and get from where they are now to where they want to go.
