AI Is Reshaping Cybersecurity—But Most Teams Aren’t Ready for It

Deep Instinct’s 2025 Voice of SecOps report reveals a growing gap between AI ambition and preparedness.


AI is fundamentally transforming how organizations approach cybersecurity—but that transformation may be running ahead of actual readiness. According to Deep Instinct’s sixth annual Voice of SecOps report, released today, most security teams are leveraging artificial intelligence to protect against an evolving landscape of AI-powered threats. Yet despite rising adoption, confusion and burnout threaten to undermine the technology’s potential.


The report—titled Cybersecurity & AI: Promises, Pitfalls – and Prevention Paradise—surveyed 500 senior cybersecurity professionals across the U.S. and paints a complex picture. Nearly three-quarters (72%) of respondents have overhauled their security strategies due to AI in the last year, and 86% say they’ve actively increased their use of AI within security operations. However, more than two-thirds admitted to confusion over basic AI concepts. A full 38% couldn’t clearly differentiate between machine learning and deep learning.


That knowledge gap has serious implications, especially as attackers themselves begin weaponizing AI to increase the scale, precision, and deception of cyberattacks.


Threats Evolve, But So Do Mistakes


AI-generated phishing attacks and deepfake impersonations are no longer theoretical. Nearly half of organizations reported an increase in phishing campaigns, and 43% said they’ve faced deepfake-driven impersonation attempts—highlighting a sharp rise in “synthetic identity” threats.


Storage attacks also surged to the forefront of concerns, with 83% of security leaders citing risks to local and cloud storage environments—trailing just behind phishing at 84%.


“Cybersecurity teams are being asked to do more, with less—and to do it faster than ever,” said Lane Bess, CEO of Deep Instinct. “The traditional ‘detect and respond’ cybersecurity model is broken – it’s reactive, expensive, and no match for AI-powered threats.”


AI: Both a Lifesaver and a Stressor


The upside? When AI works, it works. The majority of respondents (76%) say AI has made their jobs easier, with automated tools saving teams an average of 12 hours per week on manual tasks. Those time savings have allowed SecOps teams to redirect focus toward strategic threat modeling and proactive defense.


But the benefits come with a cost. Nearly 70% of those surveyed say AI is contributing to professional burnout—largely due to implementation complexity, constant adaptation to new tools, and the pressures of staying compliant with emerging regulations.


Adding to the anxiety: over one-third of respondents fear that new AI-related rules could translate into future financial penalties for their organizations, and 32% say they’re already struggling to keep up with the evolving legal landscape.


A Pivot Toward Prevention


In response to the rapid evolution of threats, organizations appear to be leaning into proactive defense. More than 80% of companies are shifting toward a “prevention-first” cybersecurity posture, a departure from reactive monitoring that has dominated for years. That shift is increasingly driven by the C-suite, with 64% of security teams reporting executive pressure to deploy more forward-leaning measures.


This is where Deep Instinct sees its role. The company advocates for “preemptive data security”—a strategic approach that uses deep learning to neutralize threats before they can execute.


“To win this fight, cybersecurity teams must shift from chasing threats to preventing them,” said Bess. “Preemptive data security – powered by deep learning, the most advanced form of AI – is the only way for SecOps teams to regain control and stay ahead of adversaries.”


The Road Ahead


Deep Instinct’s findings underscore a paradox: while AI is a powerful enabler, it’s also a force multiplier for both defenders and attackers. And unless security teams can keep pace with the technology they deploy, the risk may outpace the reward.


The report acts as both a pulse check and a warning. Organizations must invest not only in AI tools, but in education, resilience, and well-supported security personnel to keep AI’s promise from becoming a pitfall.