Jon David of NR Labs: Why Passing the Cybersecurity Pentest Can Still Get You Breached
- Cyber Jack
We sat down with Jon David of NR Labs to challenge one of security’s most entrenched assumptions: that passing a pentest means you’re safer. Drawing on years of frontline breach response, David explains why traditional, vulnerability-centric testing no longer reflects how modern attackers actually operate, and why resilience today depends on understanding attack paths, identity abuse, and architectural weak points. He also explores how organizations can move beyond compliance theater toward remediation that measurably reduces real-world risk.

Many organizations still treat pentesting as a compliance checkbox. From your experience responding to real-world breaches, where does traditional pentesting most often fail to reduce actual risk?
Current pentesting fails because it was built for 2010, not 2026. AI-driven attackers are operating at machine speed. They leverage automation and identity abuse to move faster than defenders can react. At the same time, modern environments are fragmented across cloud platforms, SaaS, third parties and identity providers, creating invisible attack paths that don’t show up when you test systems in isolation.
After responding to some of the most complex breaches of the last decade, I have seen the same pattern repeat: organizations pass their pentests and still get breached – often within months – because the test never reflected how attackers actually move through modern, identity-centric architectures.
Meanwhile, boards no longer want compliance artifacts; they want proof of resilience: Can our organization withstand a real attack? Traditional pentesting can’t answer that question, because it produces reports, not reduced risk.
Most pentesting fails CISOs in three main ways:
They find vulnerabilities, not attack paths
They test components, not architecture
They generate activity instead of assurance
This is the core insight behind what we internally call Architectural Pen Testing: if you don’t test the architecture the way attackers exploit it, you’re measuring the wrong thing.
The result is a familiar pattern – hundreds of issues ranked by severity, none of which explain how an attacker would reach something that matters to the business. The security team spends all its time chasing individual bugs while the real breach paths remain wide open; attackers need only exploit one or two structural weaknesses to gain leverage. Relying on traditional pentesting leaves you with a plethora of findings but no assurance that your work is making it harder for attackers to get in.
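The difference between a vulnerability list and an attack path can be made concrete with a toy graph model. The sketch below is purely illustrative – the assets, weaknesses, and severities are invented, not drawn from any NR Labs engagement – but it shows how a chain of “medium” findings can reach crown jewels while an isolated critical finding leads nowhere:

```python
from collections import deque

# Hypothetical environment model: nodes are assets/identities, edges are
# individual weaknesses an attacker could use to move between them.
# All names and findings here are illustrative placeholders.
edges = {
    "phished-user": ["saas-app"],       # AI-assisted phishing (initial access)
    "saas-app":     ["idp-token"],      # token theft via misconfig (medium)
    "idp-token":    ["cloud-admin"],    # over-broad role assignment (medium)
    "cloud-admin":  ["crown-jewels"],   # direct access to business data
    "dmz-server":   [],                 # CVSS 10.0 RCE, but fully isolated
}

def attack_paths(start, target):
    """Breadth-first search for chains from initial access to impact."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            paths.append(path)
            continue
        for nxt in edges.get(path[-1], []):
            if nxt not in path:          # avoid revisiting a node
                queue.append(path + [nxt])
    return paths

# A chain of mediums reaches the crown jewels...
print(attack_paths("phished-user", "crown-jewels"))
# ...while the headline-grabbing critical appears in no path at all:
print(attack_paths("dmz-server", "crown-jewels"))   # -> []
```

A severity-sorted report would put the dmz-server RCE at the top; the path view puts the identity chain there instead, which is the shift the interview is describing.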
You’ve argued that finding vulnerabilities is only half the problem. What does “actionable remediation” look like in practice, and why do so many security programs struggle to get there?
Finding vulnerabilities is easy. Turning that insight into change is the hard part. Actionable remediation means (1) starting with how a real adversary would move through an environment, (2) identifying the few chains that lead to real impact, and (3) breaking those chains.
This mindset comes directly from breach response experience. When you’ve watched attackers chain together identity abuse, misconfigurations, and trust boundary failures, you understand why patching in severity order does not reduce risk.
In practice, that looks like:
Mapping real attacker journeys rather than abstract weaknesses
Identifying the paths that give attackers leverage
Sequencing fixes based on impact, not severity score
Taking defensive insight and turning it into actions engineers can implement
Verifying that the fix actually removes the attack path
Most teams struggle to switch their mindset because they’re inundated with noise and lack real context: how a flaw fits into a kill chain, what it enables, why it matters now. We patch symptoms without confidence that fixing any given one measurably reduces the chance of a breach. Resilience is structural. If the architecture holds, attacks fail. If it doesn’t, controls don’t matter.
After years dealing with high-impact incidents, how do you distinguish between vulnerabilities that look severe on paper and those that truly matter to attackers in the wild?
Look, CVSS scores are useful, but they’re not the whole story – we’ve learned that the hard way over decades of responding to the ugliest breaches out there. A 10.0 critical vuln sitting in some isolated corner of the network might never get touched, while a “medium” misconfiguration that lets someone jump from a compromised SaaS app straight into domain admin can end the game in minutes.
One lesson I learned early in incident response is that attackers don’t hunt for vulnerabilities – they hunt for leverage. Can this flaw get me initial access? Can it help me escalate privileges? Can it let me move laterally or maintain persistence? We’ve seen it time and again – especially in stuff like the Scattered Spider retail attacks last summer. A single identity or boundary weakness often matters way more than a flashy remote code execution bug that’s air-gapped from anything valuable.
So the quick gut-check I recommend is pretty straightforward:
Does this thing sit on a path attackers are already walking (identity abuse, cloud pivots, automation hooks)?
Does it bridge trust zones or connect low-value entry points to crown-jewel systems?
Is it the kind of thing we’ve actually seen chained together in live incidents – AI-assisted phishing, prompt-injection tricks in Bedrock environments, that sort of thing?
Severity scores grab headlines, but real risk lives in context. Shift your lens from “how bad is this bug?” to “what does this bug unlock?” and suddenly the noise starts making sense.
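The three-question gut check above can be sketched as a simple triage filter. This is a loose, hypothetical encoding – the field names, thresholds, and example findings are all invented for illustration, not a scoring scheme David describes – but it captures the idea that context flags, not the CVSS number, drive the decision:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    cvss: float
    on_known_path: bool        # sits on a path attackers are already walking?
    bridges_trust_zones: bool  # connects a low-value entry point to crown jewels?
    seen_chained: bool         # the kind of thing observed chained in live incidents?

def contextual_priority(f: Finding) -> str:
    """Triage by what the bug unlocks, not how scary its score looks."""
    unlocks = sum([f.on_known_path, f.bridges_trust_zones, f.seen_chained])
    if unlocks >= 2:
        return "fix-now"
    if unlocks == 1:
        return "review"
    return "backlog"  # even a 10.0 can wait if it unlocks nothing

# Invented examples mirroring the interview's contrast:
isolated_rce = Finding("air-gapped DMZ RCE", 10.0, False, False, False)
saas_pivot = Finding("SaaS-to-IdP misconfig", 5.4, True, True, True)
print(contextual_priority(isolated_rce))  # backlog
print(contextual_priority(saas_pivot))    # fix-now
```

The point of the sketch is the inversion: the 10.0 lands in the backlog and the 5.4 jumps the queue, because priority follows leverage.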
Security teams are overwhelmed by alerts, findings, and limited resources. How should organizations realistically prioritize remediation efforts to meaningfully lower their risk profile?
The honest answer: stop trying to fix everything. You’ll burn out your team and still leave the door open.
What actually moves the needle is looking at the environment through an attacker’s eyes – mapping the realistic ways someone could get in, spread, take control, and do damage – then ruthlessly focusing on the handful of fixes that break those paths.
This is where breach-informed testing changes the conversation from vulnerability management to architectural risk reduction. In practice, I tell clients to do a few things right away:
Forget the endless vuln list for a minute. Instead, sketch out the most likely attack journeys you’re seeing in the wild right now (phishing → identity compromise → cloud lateral movement is still king for a reason).
Zero in on the big leverage points: identity hygiene, privilege chains, boundary crossings between cloud/on-prem/SaaS, redundant paths attackers love for persistence.
Rank fixes by business impact first, CVSS second. If closing one gap stops ransomware from reaching file shares or payment systems, that’s worth ten low-risk patches.
Pull in fresh threat patterns – AI-automated malware making a comeback, retail-specific blind spots we saw with Scattered Spider, the shortcuts startups take that blow up 78% of the time when they rush to market.
We’ve watched teams go from drowning in thousands of findings to sleeping better after they started asking one question: “If I fix this today, does it actually make a breach meaningfully harder?” When the answer is yes, you’re spending resources where they count.
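That one question – “does fixing this actually make a breach meaningfully harder?” – can be turned into a crude before/after metric: count the viable paths to a crown-jewel asset with and without a candidate fix. The environment and fix names below are invented for illustration; this is a sketch of the idea, not a real risk model:

```python
from collections import deque

# Each edge is (next_node, weakness) -- a fix removes one weakness everywhere.
# All nodes and weaknesses here are hypothetical.
edges = {
    "internet": [("user", "phishing"), ("vpn", "stale creds")],
    "user":     [("idp", "token theft")],
    "vpn":      [("idp", "shared admin")],
    "idp":      [("crown-jewels", "broad role")],
}

def count_paths(target, removed=frozenset()):
    """Count attacker paths from the internet to `target`, skipping fixed edges."""
    total, queue = 0, deque([("internet", frozenset())])
    while queue:
        node, seen = queue.popleft()
        if node == target:
            total += 1
            continue
        for nxt, weakness in edges.get(node, []):
            if weakness not in removed and nxt not in seen:
                queue.append((nxt, seen | {node}))
    return total

baseline = count_paths("crown-jewels")
for fix in ["phishing", "token theft", "broad role"]:
    after = count_paths("crown-jewels", removed={fix})
    print(f"fix {fix!r}: {baseline} -> {after} viable paths")
```

In this toy graph, fixing the phishing entry point still leaves a path via the VPN, while tightening the over-broad IdP role zeroes out every path – a concrete version of “ten low-risk patches vs. one structural fix.”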
As attack techniques evolve and boards demand clearer ROI from security spending, how do you see remediation-focused pentesting changing the role of offensive security over the next few years?
It’s already shifting fast. The old model (run scans, dump a giant report, call it a year) is dying because boards aren’t buying “we found 400 things” anymore. They want to know: “Are we harder to breach this quarter than last?”
Remediation-focused pentesting flips the script. Instead of proving how broken everything is, the job becomes proving the architecture holds – showing which attack paths are closed, which ones still exist, and exactly what needs to happen next to shrink the attack surface.
This is the future I've been pushing toward: offensive security as decision support for resilience, not a compliance artifact.
Over the next couple years I expect offensive security to look more like this:
Less “find every bug,” more “validate resilience against the threats that matter right now.”
Teams acting as strategic decision support, helping CISOs answer “Should we spend here or there?” with hard evidence from simulated attacks (AI-driven identity abuse, automated malware waves, etc.).
Metrics boards actually understand: fewer viable paths to crown jewels, faster time-to-disrupt attacker progression, avoided incident costs.
Continuous, outcome-oriented testing instead of once-a-year theater, especially as AI keeps speeding up how attackers chain things together.
For places like retailers or startups we work with, this means offensive security stops being a cost center and starts being the proof point: “Here’s how we stopped the next Scattered Spider-style compromise before it started.”