AI Can Now Find Critical Software Vulnerabilities Faster Than Humans. That’s a Problem for Everyone
- Mar 22
A new wave of AI-driven cybersecurity tools is reshaping how software vulnerabilities are discovered, validated, and exploited, compressing what once took months of expert human effort into hours.
According to new research from Theori, large language models can now scan millions of lines of code and identify high-impact security flaws in less than a day. The findings highlight a turning point for both defenders and attackers as AI accelerates the speed and scale of software vulnerability discovery.
The system, called Xint Code, combines multiple AI models with an orchestration layer designed to mimic how human security researchers think. Instead of simply flagging known insecure patterns, the system evaluates vulnerabilities based on real-world exploitability and potential business impact.
This shift matters because traditional code scanning tools often overwhelm security teams with false positives or low-risk issues. By contrast, the new approach focuses on identifying vulnerabilities that attackers can realistically exploit.
In testing, the system analyzed nearly one million lines of code in under 12 hours and surfaced dozens of high-severity vulnerabilities, with a relatively low false positive rate compared to conventional tools.
More striking is what the system uncovered.
During a scan of PostgreSQL, one of the most widely used open-source databases in the world, the AI identified a previously undiscovered vulnerability that had gone unnoticed for over two decades. The flaw involved improper handling of encryption-related data, which could allow an attacker to trigger a buffer overflow and potentially execute arbitrary code within the database environment.
If exploited, the vulnerability could have enabled attackers to extract sensitive data or move laterally within enterprise systems.
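The report does not publish the PostgreSQL flaw's details, but the bug class it describes is well understood. A common way such overflows arise is when a parser trusts a length field embedded in attacker-controlled input. The sketch below is a generic, hypothetical illustration of that pattern and its fix, not the actual PostgreSQL code; the record format and function names are invented for the example.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BUF_SIZE 64  /* illustrative fixed-size destination buffer */

/* Vulnerable pattern: trusts a length field taken from the input itself.
 * A crafted record with an inflated declared length makes memcpy write
 * past the end of 'out', corrupting adjacent memory. */
int parse_record_unsafe(const uint8_t *input, size_t input_len, uint8_t *out)
{
    if (input_len < 2)
        return -1;
    uint16_t declared_len = (uint16_t)((input[0] << 8) | input[1]);
    /* BUG: declared_len is never checked against BUF_SIZE or input_len. */
    memcpy(out, input + 2, declared_len);
    return (int)declared_len;
}

/* Fixed pattern: validate the declared length against both the actual
 * input size and the destination capacity before copying anything. */
int parse_record_safe(const uint8_t *input, size_t input_len,
                      uint8_t *out, size_t out_cap)
{
    if (input_len < 2)
        return -1;
    uint16_t declared_len = (uint16_t)((input[0] << 8) | input[1]);
    if (declared_len > input_len - 2 || declared_len > out_cap)
        return -1;  /* reject rather than overflow */
    memcpy(out, input + 2, declared_len);
    return (int)declared_len;
}
```

Flaws of this shape are easy to miss in manual review precisely because the copy itself looks routine; the missing bounds check is only visible when the length field's provenance is traced back to untrusted input, which is the kind of cross-context reasoning the research credits the AI system with.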
The discovery underscores a broader shift in cybersecurity. Historically, identifying complex vulnerabilities required highly skilled penetration testers with deep knowledge of business logic and application behavior. That process was expensive and time-consuming, leaving many critical flaws buried in large codebases for years.
AI is now removing those constraints.
By analyzing context across entire applications, these systems can identify not just technical flaws but also the conditions required to exploit them and the likely impact of an attack. This includes scenarios such as ransomware deployment, data exfiltration, or unauthorized system access. But the same capabilities are increasingly accessible to attackers.
The report warns that malicious actors are already experimenting with similar techniques using open-weight models. This raises the risk that threat actors could scan target environments at scale, identifying exploitable weaknesses faster than organizations can fix them.
The implications are significant. Security strategies that rely on obscurity or assume vulnerabilities will remain hidden are becoming obsolete. AI is making it economically viable to continuously probe large, complex systems for weaknesses.
At the same time, the research highlights limitations in existing AI-powered security tools. Many current solutions simply layer AI on top of traditional static analysis engines, inheriting the same blind spots and limitations. More advanced approaches that rely on AI as the core analysis engine are still emerging and lack standardized benchmarks.
For security teams, the message is clear. The balance between offense and defense is shifting rapidly, and the cost of falling behind is increasing.
As AI continues to evolve, the ability to understand not just where vulnerabilities exist, but how they can be exploited and what they mean for the business, will define the next generation of cybersecurity.
And increasingly, that understanding will come from machines.


