AI Supercharges Social Engineering: LevelBlue Warns of a Human-Centric Security Crisis

The weakest link in enterprise security isn’t a misconfigured firewall or unpatched server—it’s people. A new report from managed security services firm LevelBlue warns that social engineering, already the most exploited vector in cybercrime, is entering a dangerous new phase as attackers lean on generative AI to craft more convincing and scalable schemes.


AI Turns Old Tricks Into New Threats


Social engineering is hardly new. Phishing emails, fraudulent texts, and business email compromise have long plagued enterprises. But LevelBlue’s Data Accelerator: Social Engineering and the Human Element report highlights a stark shift: 44% of security leaders expect to face an AI-powered attack within the next year, yet fewer than a third believe they’re ready for it.


The growing sophistication of AI-generated audio, video, and text means employees are now struggling to tell signal from noise. Nearly six in ten organizations say it has become significantly harder for staff to distinguish real communications from fraudulent ones. Deepfake-driven identity scams, voice cloning, and automated phishing campaigns are rapidly outpacing traditional awareness training.


The Readiness Gap


Despite heightened awareness, corporate defenses remain fractured. While a slim majority of respondents reported preparedness for familiar threats like phishing (51%) and business email compromise (56%), far fewer felt equipped to handle emerging techniques. Only 32% said they are prepared for deepfakes or synthetic identity attacks, and just 20% rated themselves “highly effective” in defending against adversaries leveraging AI.


That leaves a yawning gap between awareness and action. Many firms are investing in cyber resilience frameworks (33%) and experimenting with generative AI for defense (31%), but fewer are shoring up foundational safeguards. Zero Trust adoption, a key architectural control to limit the blast radius of successful social engineering, remains notably low at 13%.


Europe Leads, U.S. Lags


The research, based on a survey of 1,500 senior executives across 14 countries, reveals stark geographic disparities. Europe stands out as the most prepared region for AI-driven attacks, with two-thirds of respondents expressing confidence. European organizations also lead in prioritizing workforce education, recognizing that technology alone won’t solve a fundamentally human problem.


By contrast, U.S. state, local, and education entities (the so-called SLED sector) show some of the lowest preparedness levels, a troubling sign given the volume of sensitive citizen data and critical infrastructure they manage.


The Culture Problem


For Theresa Lanowitz, Chief Evangelist at LevelBlue, the findings underline a structural flaw in how enterprises approach cyber risk. “Establishing a culture of cyber resilience is imperative for organizations to effectively prepare for the emergence of more sophisticated social engineering attacks,” Lanowitz said. “These attacks exploit human behavior, so without the proper investment into education and training, including cyber resilience processes and engaging cybersecurity consultants, organizations and their employees remain vulnerable.”


The reality is that most companies still view employee awareness training as a compliance checkbox rather than a continuous, evolving necessity. Only 20% said they are confident in their workforce education strategy, and just under a third have engaged outside experts to run awareness programs in the past year.


Beyond the Training Manual


LevelBlue’s recommendations reflect a layered defense strategy. Leadership buy-in is critical to embed resilience into culture, but it must be paired with regular training, smart investment in AI defenses, and engagement with external providers who specialize in human-centric risk. Without that balance, organizations risk fighting 2025’s threats with 2015’s playbook.


The takeaway is clear: AI has turned social engineering into a force multiplier for attackers, making scams more persuasive and far more scalable. While companies are racing to deploy their own AI tools in defense, the real challenge may be cultural—getting employees, managers, and executives alike to recognize that cyber resilience is no longer optional.