Code security tools are typically tuned to raise false positives rather than miss real vulnerabilities. The rationale is that a broad approach that incorrectly flags safe code as insecure (false positives) is preferable to a narrow approach that wrongly passes unsafe code as secure (false negatives). As a result, a typical tool will often flag hundreds of supposed security flaws in a codebase, of which only a dozen or so are actual errors requiring code changes.
We sat down with Nicolas Bontoux, VP of Product Marketing at SonarSource, to dive deeper into false positives and false negatives, and how they affect application security in particular.
Can you define and explain the difference between ‘false positives’ and ‘false negatives’ in the context of code security?
In the context of code security, just as in the general context of data classification, a false positive is the incorrect diagnosis that there is a problem when everything is actually fine (in this case, an incorrect claim that a piece of code makes the software vulnerable). A false negative, on the other hand, is the (equally incorrect) diagnosis that everything is fine when there actually is a problem.
With false positives, the analyzer is ‘crying wolf’: raising an alarm (e.g. the sheep are being attacked), while everything’s fine. On the other hand, with false negatives, a wolf is attacking the sheep, but the alarm remains silent.
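To make the distinction concrete, here is a minimal Python sketch (the functions and queries are illustrative, not taken from any particular analyzer): a pattern-based checker looking for SQL injection might flag the first query even though its dynamic part is a hard-coded constant (a false positive), while staying silent on the second, genuinely injectable query (a false negative).

```python
import sqlite3

def fetch_first_user(conn):
    # The table name is a hard-coded constant, so this f-string is safe.
    # A pattern-based analyzer may still flag the string interpolation:
    # that alarm would be a false positive.
    table = "users"  # constant, never user-controlled
    return conn.execute(f"SELECT name FROM {table} WHERE id = 1").fetchall()

def fetch_user_by_id(conn, user_id):
    # Direct concatenation of untrusted input: genuinely injectable.
    # If the analyzer stays silent here, that is a false negative.
    return conn.execute("SELECT name FROM users WHERE id = " + user_id).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

print(fetch_first_user(conn))                # only the row with id = 1
print(fetch_user_by_id(conn, "1 OR 1=1"))    # injected predicate returns every row
```

Running the sketch shows why the second function is the real danger: passing `"1 OR 1=1"` turns the query into `WHERE id = 1 OR 1=1`, which matches every row, while the "suspicious-looking" first function behaves exactly as intended.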
What impact does a false positive have on a developer’s daily workflow?