top of page

AI Is Supercharging Cybercrime Faster Than Defenses Can Keep Up

Artificial intelligence is no longer just changing how organizations defend themselves online. It is fundamentally reshaping how cybercrime works, compressing the time, cost, and skill required to launch attacks that once demanded large criminal operations or nation-state backing.

That is one of the clearest signals emerging from the World Economic Forum’s Global Cybersecurity Outlook 2026, which finds that cyber-enabled fraud and phishing have overtaken ransomware as the top concern among chief executive officers, while AI-related vulnerabilities are now rising faster than any other category of cyber risk.

The shift reflects a deeper transformation underway in the threat landscape. Generative AI and autonomous agents are not just making attacks more convincing. They are industrializing them.


“One area of the report that deserves more attention is the acceleration of AI-enabled cybercrime. Surveyed CEOs rank cyber-enabled fraud and phishing as their top concern, with AI vulnerabilities now their second-most pressing issue,” said Piyush Sharma, CEO and co-founder of Tuskira.


According to the report, 87 percent of surveyed leaders say AI-related vulnerabilities grew over the past year, the fastest increase among all tracked cyber risks. At the same time, 73 percent of respondents report that they or someone in their professional or personal network was directly affected by cyber-enabled fraud in 2025.


The numbers help explain why boards are paying closer attention. AI is making scams more scalable and harder to detect, from deepfake voice impersonations of executives to hyper-personalized phishing messages that adapt in real time. The report documents how attackers are increasingly using AI models trained on leaked or stolen data to replicate writing styles, voices, and cultural nuances, lowering the odds that victims will recognize a threat.


“AI is strengthening defense, but it is also collapsing the attacker cost curve, scaling highly convincing social engineering, deepfakes, and manipulation of AI systems, including prompt injection,” Sharma said.


This dual-use reality sits at the core of the cybersecurity dilemma heading into 2026. Organizations are racing to deploy AI internally for threat detection, fraud prevention, and automated response. The report finds that 77 percent of organizations now use AI in their security operations, primarily for phishing detection, anomaly response, and behavioral analytics. Yet governance has not kept pace. Nearly one-third of organizations still lack any formal process to assess the security of AI tools before deployment.


That gap becomes even more dangerous as autonomous AI agents move from experimentation into production environments. These systems can take action without constant human oversight, expanding both defensive capability and risk exposure.


“Autonomous AI agents can be a force multiplier for security operations, but they also expand the attack surface when deployed without clear scope, least-privilege access, and continuous validation of what they can do and what they are doing,” Sharma warned.

The report echoes that concern, noting that AI agents introduce a proliferation of machine identities, credentials, and permissions that traditional security models were never designed to manage. Without strong guardrails, agents can accumulate excessive access, be manipulated through adversarial prompts, or propagate errors at machine speed.


Despite growing awareness, the cybersecurity outlook remains uneven across regions and sectors. Confidence in national preparedness for major cyber incidents continues to decline, with nearly a third of respondents expressing low confidence in their country’s ability to respond to attacks on critical infrastructure. Public-sector organizations report especially high levels of insufficient cyber resilience, underscoring the systemic nature of the challenge .

For Sharma, the lesson is not to slow AI adoption, but to apply it with far more rigor than most organizations do today.


“The practical response is to apply AI with discipline: continuously test defenses the way attackers do, detect misconfigurations and broken controls early, and prioritize vulnerabilities based on real-world reachability, exploitability, and business impact, not static severity,” he said.


That approach aligns with a broader theme running through the World Economic Forum’s findings: resilience now depends less on isolated tools and more on integrated, intelligence-driven systems. Organizations that unify signals across identity, cloud, application, and AI layers are better positioned to move away from alert fatigue and toward automated prevention.


“When these signals are unified across the security stack, teams can shift from alert-driven reaction to an autonomous prevention-and-response loop that reduces risk and operational load,” Sharma added.


As AI continues to blur the line between offense and defense, the report makes one thing clear. Cybersecurity is no longer just a technical function or a compliance exercise. It is a strategic discipline that will determine whether organizations can safely harness AI’s benefits or become casualties of its misuse.


In 2026, the arms race is no longer theoretical. It is running at machine speed.

bottom of page