
The end of cybersecurity as we know it? How AI is rewriting the rules of the game

At this year’s AuditBoard user conference in San Diego, former Cybersecurity and Infrastructure Security Agency (CISA) director Jen Easterly painted a provocative picture of the future of digital defense: one in which the traditional cybersecurity industry may cease to exist—not because cyber attacks disappear, but because they can be prevented at a fundamental, structural level. The shift? A full-scale reckoning with the fact that bad software has been the root cause of cyber insecurity, and that artificial intelligence (AI) may be the lever that flips the entire paradigm.


A massive attack surface and decades of insecure code


Easterly reminded the audience that the cyber threat landscape has never stopped evolving. The proliferation of data, platforms and devices means “we’ve expanded the attack surface for cyber threat actors like China and Russia and Iran and North Korea and gangs of cybercriminals.” She noted that if cybercrime were a country, it would rank just behind the US and China. She argued that the focus on breach response and network defense misses the bigger structural issue: the software itself. “We don’t have a cybersecurity problem. We have a software quality problem,” she said, attributing fault to vendors who prioritize speed to market and cost reduction instead of safety.


This framing is consistent with recent research showing that many exploited vulnerabilities today trace back to well-known categories: SQL injection, directory traversal, memory-unsafe coding. Nearly two decades after MITRE Corporation codified these classes of flaws, they remain in shipped software. Easterly dismissed the idea of exotic cyber weapons, pointing instead to routers and network devices riddled with flaws as the basis for even major state-sponsored campaigns.
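The flaw classes above are not abstract: SQL injection, for instance, has had a well-known fix—parameterized queries—for decades. A minimal sketch in Python, using the standard-library sqlite3 module and a hypothetical `users` table, shows both the flaw and the secure-by-design alternative:

```python
import sqlite3

# Hypothetical in-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

# VULNERABLE: attacker-controlled input spliced directly into the SQL string.
# The classic payload turns a single-user lookup into "return every row".
payload = "x' OR '1'='1"
leaked = conn.execute(
    f"SELECT name FROM users WHERE name = '{payload}'"
).fetchall()
print(leaked)  # every row leaks: [('alice',), ('bob',)]

# SAFE: a parameterized query treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)
).fetchall()
print(safe)  # [] — the payload matches no real user
```

The safe version costs nothing extra to write, which is exactly Easterly’s point: these are incentive failures, not unsolved engineering problems.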


AI’s dual role: attacker’s amplifier, defender’s hope


Easterly emphasized that AI is changing the offense–defense equation. On the attack side, tools powered by AI are enabling “stealthier malware” and “hyper-personalized phishing,” as adversaries increasingly automate reconnaissance, vulnerability discovery and attack-chain generation. Recent reporting from Axios echoes this: firms are already seeing AI-assisted attacks that can move faster and with greater stealth than before.


On the defense side, Easterly argued there is a real opportunity: “If we’re able to build and deploy and govern these incredibly powerful technologies in a secure way, I believe it will lead to the end of cybersecurity.” By that she meant a world where security breaches become outliers—and not an inevitable cost of doing business. She pointed to the agency’s AI Action Plan to identify vulnerabilities, ensure software is secure by design, detect and respond to intrusions, and learn from attacks.


The industry pivot: move fast, secure always


The crux of Easterly’s call is a transformation in vendor incentives and engineering culture. Security must be baked in from day-one—secure by design—not retrofitted. She warned that when vendors shift risk to the customer and convince regulators that the status quo is acceptable, the result is what we see today: a “rickety mess of overly patched, flawed infrastructure.”


One of her most telling statements: “That’s where the risk gets introduced, and that’s where we have the power … to be able to drive down that risk in a very material way.” Her message to enterprises: demand more of your software suppliers. Hold them accountable through contracts, governance and frameworks that enforce continuous compliance.


Human vs Machine: the missing piece


While Easterly’s vision is ambitious, it is not without skeptics. As Mick Leach, Field CISO at Abnormal AI, put it:


“Of course, it’s hard to disagree with Jen Easterly that AI can help vendors identify and remediate software vulnerabilities more efficiently and improve overall product quality. But, what I think these comments disregard is the fact that better code alone won’t solve the biggest cybersecurity problem: people.
Today’s attackers now go after human behavior and procedural gaps through sophisticated social engineering rather than directly exploiting technology.
AI excels in pattern recognition … However … where it can struggle is in detecting vulnerabilities that have never been exploited before. AI can’t predict new attack methods, and this limitation is already being seen … An overreliance on AI will most likely create a false sense of security … organisations must hold vendors accountable … Yes, AI is a powerful tool, but cybersecurity will only improve when it’s paired with human vigilance, robust governance, and accountability at every level.”

Leach’s caveat underscores that while AI may lift the technical burden, humans and process-governance remain central to resilient security.


What this means for CISOs and boards


  1. Shift budgets from detection/response toward software quality and design: Instead of allocating most of your budget to SOC tools and reactive response, invest in secure-by-design engineering practices, SBOMs, and software-vendor accountability.


  2. Reframe vendor contracts: Embed continuous security obligations, vulnerability-disclosure requirements, and remediation timelines in vendor SLAs.


  3. Govern AI usage and deployment: If you use AI to defend, ensure you understand the model; evaluate bias, adversarial risk, and how it integrates with human teams.


  4. Educate the human layer: Social engineering remains the path of least resistance for attackers. Training, simulation, awareness and process enforcement are still essential.


  5. Accelerate identity and access modernization: Easterly previously flagged identity as the “security problem” in an AI-fused world—systems must evolve from static gatekeepers to dynamic, behavioral, context-driven agents.
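The SBOM-driven vendor accountability in item 1 can be operationalized with very little tooling. The sketch below parses a CycloneDX-style SBOM fragment and checks its components against an organization’s deny-list; the SBOM content and flagged versions here are illustrative, not real advisories:

```python
import json

# Hypothetical CycloneDX-style SBOM fragment a vendor might supply.
sbom = json.loads("""
{
  "components": [
    {"name": "openssl", "version": "1.0.2"},
    {"name": "libxml2", "version": "2.12.5"}
  ]
}
""")

# Illustrative deny-list of (component, version) pairs the org has flagged.
flagged = {("openssl", "1.0.2")}

# Any match becomes a contractual remediation item for the vendor.
findings = [
    c for c in sbom["components"]
    if (c["name"], c["version"]) in flagged
]
print(findings)  # [{'name': 'openssl', 'version': '1.0.2'}]
```

In practice the deny-list would be fed from vulnerability advisories, and findings would flow into the remediation timelines embedded in vendor SLAs (item 2).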


A bold prediction: cybersecurity becomes anomalous


If Easterly’s thesis plays out, the mid-term future of enterprise security might look very different: rather than assuming breaches are inevitable, organizations will aim for “normal operation without breach”—so that when a breach occurs, it is headline-worthy. The industry will no longer be defined by endless alerts, breach notifications and incident-drill fatigue. Instead, it will evolve into resilience engineering: secure platforms and AI-augmented, human-in-the-loop, flaw-free software stacks.


This doesn’t mean cyber-threats vanish—they will still exist—but that engineering and governance shift to a point where the breach becomes the exception, not the rule.


Bottom line


Jen Easterly’s message is audacious, disruptive and absolutely one to watch. AI is not just a weapon for attackers—it may ultimately serve as the scalpel that cuts out decades of insecure design and sloppy software. But—as experts like Mick Leach emphasize—the tool alone will not effect the change. Human behavior, processes, vendor incentives and governance must evolve in tandem.


In the tug-of-war between attackers and defenders, the equation is shifting: if we get the software right and the machines right, maybe one day we won’t talk about “cybersecurity” at all—but simply about “software quality”. When that day arrives, breaches will be surprising, not expected.

 
 