
Nicolas Bontoux, SonarSource: Why False Positives Are Worse Than a Few False Negatives

Code security tools are typically tuned to raise false positives rather than miss any real vulnerabilities. The belief is that it’s better to take a broader approach that incorrectly identifies safe code as insecure (false positives) than a narrower approach that wrongly identifies unsafe code as secure (false negatives). As a result, a typical tool will often flag hundreds of supposed security flaws in a codebase – with only a dozen or so of these being actual errors that require code changes.


We sat down with Nicolas Bontoux, VP of Product Marketing at SonarSource, to dive deeper into false positives and false negatives, and how they affect application security in particular.

Can you define and explain the difference between ‘False positives’ and ‘False negatives’ in the context of code security?

In the context of Code Security, just as in the general context of data classification, a false positive is the incorrect diagnosis that there is a problem when actually everything is fine (in this case, an incorrect claim that a piece of code is making the software vulnerable). A false negative, on the other hand, is the (still incorrect) diagnosis that everything is fine when there actually is a problem.

With false positives, the analyzer is ‘crying wolf’: raising an alarm (e.g. the sheep are being attacked), while everything’s fine. On the other hand, with false negatives, a wolf is attacking the sheep, but the alarm remains silent.
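
To make both cases concrete, here is a minimal sketch in Python (using the standard sqlite3 module; the function names, table, and allow-list are hypothetical, not drawn from any particular analyzer’s test suite). The first function is safe but can look suspicious to a naive taint analysis; the second is genuinely vulnerable yet easy for a shallow analysis to miss:

    import sqlite3

    ALLOWED_COLUMNS = {"name", "email"}

    def fetch_column(conn: sqlite3.Connection, column: str, user_id: int):
        # The column name is validated against a fixed allow-list, so the
        # f-string below is safe. A tool that only sees external data flowing
        # into a query string may still flag it: a false positive.
        if column not in ALLOWED_COLUMNS:
            raise ValueError("unexpected column")
        query = f"SELECT {column} FROM users WHERE id = ?"
        return conn.execute(query, (user_id,)).fetchone()

    def find_users(conn: sqlite3.Connection, name_filter: str):
        # The user-controlled filter is concatenated straight into the query.
        # An analysis that loses track of the tainted value may stay silent
        # on this real SQL injection flaw: a false negative.
        query = "SELECT * FROM users WHERE name LIKE '%" + name_filter + "%'"
        return conn.execute(query).fetchall()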

What impact does a False Positive have on a developer’s daily workflow?

Just as you would go and check that things are fine whenever an alarm sounds, developers will first investigate the issue (they do not yet know that it is a false positive!). It is only after investigating, and possibly reviewing with peers, that they can silence the warning as a false positive. Their workflow (and possibly that of their colleagues) has been interrupted, they have lost time, and their trust in the tool (which cried wolf) is eroded.

With conventional code security tools, how many False Positives are usually reported?

The approach of conventional code security tools is to raise an issue for anything even remotely suspicious (“better to be safe than sorry”). This generates verbose reports of up to hundreds of issues, most of which are false positives. Dedicated security teams take it as part of their responsibility to review these security reports, distinguish actual concerns from clear-cut false positives, and only engage development teams in investigating and fixing legitimate issues.

Why does ‘Shift Left’ in Code Security imply a radical change to this conventional approach?

“Shift Left” is about addressing security concerns as early in the software development process as possible. Instead of security teams asking developers to fix code written days (or months!) ago, Shift Left advocates that developers receive immediate feedback and guidance on the security of the code they are writing. In that context, developers expect precise, actionable feedback. Overblown reports from conventional tools, which are known to include false positives in their results, are therefore of no value to them.

Why do False Positives hurt Application Security?

False positives are a threat to development teams shifting Code Security left, and can limit their willingness to contribute to better software security. Indeed, after being confronted with a handful of false positives, developers lose trust in the tooling altogether (too much time lost, too much context switching, too little gain) and find ways around it. The end result is that nothing gets fixed, not even the few real issues hidden among the false positives.

Why does this make False Positives worse than a few False Negatives?

Look at it practically: imagine 12 real vulnerabilities reported in a sea of 100 false positives. Developers will give up and ignore the feedback altogether because there is too much noise. Now imagine that there are no false positives at all, but only 10 of those 12 real vulnerabilities are reported. Developers will engage with high-quality, accurate feedback, will trust the tool, and will adhere to this new security-oriented practice and tooling.

In the former case, false positives have killed your efforts to shift security left. In the latter, developers fully engage despite a few false negatives (which can still be detected by a future version of the product, as the vendor keeps refining the balance between false positives and false negatives).
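
To put rough numbers on this trade-off (a sketch only: the counts come straight from the example above, and precision and recall are the standard classification metrics):

    def precision(tp, fp):
        # Fraction of reported issues that are real problems.
        return tp / (tp + fp)

    def recall(tp, fn):
        # Fraction of real vulnerabilities that get reported.
        return tp / (tp + fn)

    # Noisy tool: 12 real issues drowned in 100 false positives.
    print(precision(tp=12, fp=100))  # ~0.11: 9 out of 10 reports waste time
    print(recall(tp=12, fn=0))       # 1.00: perfect recall that nobody acts on

    # Precise tool: no false positives, but 2 of the 12 real issues are missed.
    print(precision(tp=10, fp=0))    # 1.00: every report is worth investigating
    print(recall(tp=10, fn=2))       # ~0.83: slightly lower recall, but fixes happen

The noisy tool technically “finds more”, but developers experience it through its precision; the precise tool earns the trust that gets its reports actually fixed.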

What can developers do to minimize False Positives?

The responsibility for minimizing false positives really lies with code security tooling vendors more than with the developers using these tools. There are, however, general practices that can help support and promote this approach:

  • Do not judge code security tools only by the number of issues they raise. This is a common pitfall, and can lead to great disappointment with tools that seemingly give plenty of results but actually have low precision (most results being noise or false positives, as in the scenario sketched above).

  • Share your corner cases with the open community: false positives are easier to fix when they're reported! Reporting code that systematically misleads security tooling is a good way to get the tools improved.

  • Openly discuss the topic with your dev teams and tooling vendors: appreciation and understanding of the impact of false positives is already a great first step in building an efficient workflow for code security.

