Auto-remediation: the future of AppSec?

Understand the limits of auto-remediation in securing applications.

Frans van Buul

September 26, 2024 · 9 minute read


Organizations need to develop applications in a fast and agile way. Security is essential, but lengthy manual security reviews are an unacceptable bottleneck. Application security testing solutions like Fortify address this by automating the security review process. 

Once the security testing is automated, a second bottleneck emerges: Humans must still review and act on the test results. Source code must be changed to remediate the security problems. Ideally, this task should also be automated. This idea is called auto-remediation and is currently a hot topic in the AppSec industry.  

Fortify has an auto-remediation solution. The Fortify Security Assistant plugin for IntelliJ performs highly reliable auto-remediation for 13 impactful vulnerability categories and is available for all Fortify SAST customers. However, auto-remediation is not a simple panacea and has many potential pitfalls even with current state-of-the-art technology. In this blog, we explain why. 

The specific case of SQL injection 

SQL injection is arguably the single most famous AppSec vulnerability category and has been at the center of attention since the beginning of the century. SQL injection flaws are typically caused by a SQL statement being created by concatenating fixed parts and user input. The best way to prevent SQL injection is to replace the concatenation with a “prepared statement.”  The user input will then be handled as parameters, making SQL injection impossible. 
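The difference can be sketched in a few lines of Python, using an in-memory SQLite database and a hypothetical `users` table purely for illustration:

```python
import sqlite3

# Throwaway in-memory database with an illustrative schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "x' OR '1'='1"  # classic injection payload

# Vulnerable: concatenating user input into the SQL statement.
# The WHERE clause becomes: name = 'x' OR '1'='1' -- true for every row.
vulnerable = "SELECT name FROM users WHERE name = '" + user_input + "'"
rows_concat = conn.execute(vulnerable).fetchall()

# Remediated: a prepared statement with a bound parameter.
# The payload is treated as plain data, so it matches nothing.
rows_param = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
```

The concatenated version leaks every row; the parameterized version returns an empty result, because the payload is compared literally instead of being parsed as SQL.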

Fortify’s Security Assistant can do this rewriting, thus auto-remediating a SQL injection vulnerability. Other vendors in the auto-remediation space can also do this. If you search for their demo videos, it seems that everybody loves to demo this particular case!  

Why would that be? Is that just because of the category’s fame? No. There are a few other reasons why we all love to demo this one for auto-remediation:  

  • There’s consensus on the best practice remediation strategy (prepared statements). 
  • This remediation strategy almost always works. 
  • The remediation requires a non-trivial amount of editing, showing the value of auto-remediation. 
  • The remediation process can easily be expressed in a series of structural operations on the source code. It does not need any true AI. (Although it may be marketed like that, of course.) 

There’s nothing wrong with picking a nice demo case to illustrate what a tool can do. However, it raises the question of how representative this case is of vulnerabilities in general. 

There certainly are some categories that resemble SQL injection in the abovementioned aspects. For example, XML Entity Expansion caused by an insecurely configured XML parser needs to be fixed by setting the parser features. So again, this is a great case for auto-remediation. But let’s have a look at some other ones. 

When auto-remediation breaks down 

Let’s look at some vulnerability categories that are problematic for auto-remediation. 

Cross-Site Scripting 

Most modern web frameworks prevent Cross-Site Scripting (XSS) by default. Content is escaped during rendering; for example, “<script>” becomes “&lt;script&gt;”, which prevents XSS in an HTML context. XSS can still occur if the developer explicitly requests rendering without this escaping. Some frameworks are very clear about the associated danger: in React, the attribute needed to do this is called “dangerouslySetInnerHTML.”
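The Python standard library’s `html.escape` illustrates what this default output encoding does; frameworks like React apply the same idea automatically on every render:

```python
from html import escape

# A typical XSS payload, as it might arrive in user-supplied content.
payload = "<script>alert(1)</script>"

# Default framework behavior: escape on render, so the browser displays
# the markup as text instead of executing it.
safe = escape(payload)
print(safe)  # &lt;script&gt;alert(1)&lt;/script&gt;
```

Opting out of this escaping (as `dangerouslySetInnerHTML` does) is what reintroduces the risk.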

An AppSec tool can easily detect that this is being used, and an auto-remediation tool can easily change it back to a safe version. This will make for a nice demo, but is it valuable? 

Probably not. The developer typing “dangerouslySetInnerHTML” almost certainly had a reason for doing so. Somehow, the functionality of the web page requires that content containing HTML be rendered at that point. Is that a good idea? Is it safe? It all depends. A case like this requires careful review and, if there is a security problem, a specific solution. Blindly changing this to something an AppSec tool considers secure will probably just break the application. 

Weak Cryptography 

Many old cryptographic algorithms are no longer considered secure. For example, MD5 hashes are insecure and should generally be replaced by SHA-2 hashes, and DES symmetric encryption should be replaced by AES.

Since the algorithm to be used is usually a parameter of a generic crypto API, it is easy to detect weak cryptographic algorithms and auto-remediate the finding by changing them to a secure alternative. But again: Is this valuable? 

It would work in rare cases (e.g., an application encrypting ephemeral data for its own use, such as a session cookie). Even then, the time saved relative to manual remediation is minimal.

In most cases, it’s pointless. Cryptography is normally used on persistent and/or shared data. Changing a crypto algorithm in one place without considering data migration or the effects on third parties doesn’t work. An auto-remediation tool will never do the heavy lifting here.

Hardcoded Secrets 

Secrets (keys, passwords) stored directly in source code are a common security problem. AppSec tools can detect these. Is auto-remediation useful for this case? 

Of course, it’s not difficult for an auto-remediation algorithm to remove the secret from the source code. But doing so alone will just break the application. It needs to be replaced by a secure way to obtain the secrets.  

Unfortunately, no single approach always works. Getting a secret from an environment variable (as suggested by the Twelve-Factor App) is great for a microservice running on Kubernetes, but it’s horrible advice if the secret is a password and the application is running on a PC. Many organizations have specific policies or systems for storing secrets; remediation should follow that. But the auto-remediation tool won’t know. 
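A minimal sketch of the environment-variable approach, assuming a hypothetical `DB_PASSWORD` variable:

```python
import os

# Hardcoded (bad): the secret lives in source code and repository history.
# DB_PASSWORD = "s3cret"

# Twelve-Factor style: read the secret from the environment at runtime,
# and fail loudly if it was never provided.
def get_db_password() -> str:
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD environment variable is not set")
    return password
```

Even this simple pattern only helps in deployments where the environment itself can be managed securely, which is exactly the context-dependence the tool can’t resolve.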

There’s another problem: Even when we remove the secret from the source code, the original secret is compromised (and still present in the repository history). True remediation requires an additional effort: changing the secret on relevant systems. 

The value of auto-remediation for hardcoded secrets is extremely limited. 

And many other ones…

Above, we covered three cases in some detail, but there are many more categories where auto-remediation has little to offer. Anything that needs to be fixed by input validation is problematic, both because there are many different architectures to implement validation and because the correct allow-list will not be known by the tool. This becomes a problem when remediating issues like open redirect and path traversal.  

In reality, even SQL injection is problematic. Most developers know about prepared statements, but prepared statements have limits. For example, the name of a SQL table can’t be a prepared statement parameter. If developers create a SQL statement by concatenation, it’s often because they are dealing with a case like this.
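SQLite makes this limitation easy to demonstrate: a bound parameter can carry a value, but not an identifier such as a table name (the allow-list below is purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit_log (msg TEXT)")

table = "audit_log"

# A bound parameter can't stand in for a table name: the statement
# fails to prepare.
try:
    conn.execute("SELECT count(*) FROM ?", (table,))
    failed = False
except sqlite3.OperationalError:
    failed = True

# This is why developers fall back to concatenation for identifiers.
# It is safe only if the name comes from a fixed allow-list, never
# directly from user input.
assert table in {"audit_log"}  # hypothetical allow-list
count = conn.execute("SELECT count(*) FROM " + table).fetchone()[0]
```

An auto-remediation tool that blindly rewrites such concatenation into a parameterized query would produce SQL that no longer runs.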

How to shred the security backlog 

Auto-remediation is an important technology that helps practitioners quickly act on AppSec testing results. However, as demonstrated above, its scope, or the percentage of cases where it will truly work, is inherently limited. Therefore, it is not a silver bullet to eliminate the human bottleneck and shred the security backlog. 

What would an ideal tool to shred the security backlog look like? Let’s outline some key principles. 

Combine auditing and remediation 

AppSec tools always produce a certain amount of noise, a.k.a. “false positives.” These findings don’t require an associated remediation, so we should never automatically remediate all findings from an AppSec tool. First, we need to do auditing to determine whether the findings are correct. If we accelerate remediation but keep auditing as-is, we still have a major bottleneck in our process, so we’ll need to accelerate that as well. 

Note that the auditing and remediation tasks overlap greatly. For both, we need to obtain an in-depth understanding of what’s going on in the application beyond the data and standard descriptions produced by the AppSec tool. If we have done that work for auditing, it should be fed into the remediation process to avoid double work. 

So, for multiple reasons, it makes sense to consider auditing and remediation simultaneously.  

Support the developer 

We have seen that complete automation only works in a few cases. In most cases, we still need the developer. This means that our focus should be on making the developer’s work as easy as possible. 

Strong AppSec tools provide a lot of useful information, including vulnerability descriptions and flow diagrams, all linked to the source code, combined with (generic) remediation advice. Nevertheless, understanding this and drawing the right conclusions is much work, even for an expert. 

Generative AI makes this task much easier. It can consider the evidence gathered by an AppSec tool, together with the actual source code, and then annotate the tool’s findings with its analysis. Acting as a developer companion, it makes those findings much easier to digest.

If there is an obvious fix to remediate the issue, the AI can add this to the finding. But if we’re dealing with one of those cases where it’s not obvious, the AI can still provide suggestions on performing remediation – something that’s hard to imagine when doing auto-remediation directly on the source code. 

Even in cases where auto-remediation is an option, keeping the developer engaged is a good thing. The active participation and ownership of the change will improve their skills and make it less likely that this issue will surface again. 

Enter Fortify Aviator 

We recently launched Fortify Aviator, a new tool for auditing and remediation that follows the principles just described. 

Powered by an advanced large language model (LLM), Fortify Aviator can truly reason about the findings produced by Fortify SAST. It first performs auditing, determining whether the issue must be fixed or not (suppressing the latter). It also adds a comment to the issue explaining its decision. If the issue must be fixed, remediation advice is also included in this comment. It is concrete and ready to copy-paste when possible; it is more directional when needed. 

The information added by Fortify Aviator spectacularly boosts developer productivity. And since the information is added to the issue itself, it is available everywhere: the Fortify web interface, IDEs, Audit Workbench, generated issue tracker tickets, generated reports, etc. Any developer workflow is supported. 

Auto-remediation is useful for certain cases, and we’ll continue to support it. However, we believe it’s the Fortify Aviator approach that will allow our customers to shred their entire backlog. 


Frans van Buul

Based out of the Netherlands, Frans van Buul is senior product manager for Fortify SAST at OpenText. Before transitioning into product management two years ago, Frans held several other positions that provided him with the relevant background. He’s been a security consultant and auditor at PwC, a Java developer and architect at several companies, and a Fortify SAST sales engineer and sales engineering leader.
