AI Supercharges Social Engineering: Why Cybersecurity Awareness Month Is a Call to Reset the Basics
- Cyber Jill
- Oct 1
- 2 min read
As Cybersecurity Awareness Month begins, experts warn that flashy new defenses may distract from an old but worsening problem: human error. Social engineering—the art of tricking people rather than breaking systems—is being transformed by artificial intelligence, and enterprises are far less prepared than they realize.
Chris Mierzwa, Sr. Director of Global Resilience Programs at Commvault, argues that organizations need a back-to-basics strategy. “As we approach another Cybersecurity Awareness Month, it serves as a stark reminder that enterprises must get ‘back to basics’ and focus on creating stronger security foundations,” he said. “Among the many different threat vectors, I implore business leaders to pay close attention to social engineering—the increasingly dangerous Achilles’ heel of every organization.”
AI-Powered Deception
In the past, phishing and vishing scams were often betrayed by poor grammar, awkward phrasing, or strange accents. That's no longer true. With real-time voice synthesis, language translation, and adaptive large language models, attackers can mimic authority figures, colleagues, or even family members without raising suspicion.
“Enterprises are underestimating threat actors’ ability to understand the more formidable adult psyche,” Mierzwa explained. “With the help of AI, cybercriminals can now alter their voices, accents, and launch social engineering attacks in multiple languages with real-time translation, leaving employees with no cues to suspect malicious intent.”
That lack of cues is precisely the point. If every phone call sounds natural, every email looks professional, and every text message comes in the right tone and language, the human brain’s natural filters break down. The results: more wire fraud, more credential theft, and more sensitive data leaking out of organizations that thought they were secure.
Training Gaps Leave Employees Exposed
While technical defenses such as multifactor authentication, endpoint detection, and zero-trust architecture are improving, humans remain under-protected. Most companies allocate just a few hours of annual training, often relying on outdated phishing simulations.
“Threat actors recognize that employees only receive minimal cybersecurity training, meaning they don’t have the knowledge or skill set to recognize the newest and most sophisticated threats,” Mierzwa warned.
The imbalance is stark: cybercriminals are using bleeding-edge AI models to personalize scams in real time, while defenders are relying on stale PowerPoint slides and once-a-year refresher quizzes.
A Reset for Security Awareness
The lesson from this year’s Cybersecurity Awareness Month may be that “awareness” itself needs a rethink. Security training must move from checkbox compliance toward continuous, adaptive learning. Employees should see simulations that evolve as fast as real-world scams do, and organizations must pair this with stronger authentication controls that don’t rely on human judgment alone.
AI won’t stop reshaping the threat landscape. But by reinforcing fundamentals and respecting how vulnerable the human factor truly is, enterprises can blunt its worst effects.
As Mierzwa put it: “Enterprises must get back to basics.” In 2025, those basics start with treating every employee as both the first line of defense and the most tempting target.