top of page

AI Is Changing the Rules of Cybersecurity — and Awareness Month Needs to Catch Up

Every October, organizations roll out familiar reminders for Cybersecurity Awareness Month: update your passwords, enable multi-factor authentication, and don’t click suspicious links. But as companies rush to embed generative AI across their workflows, those traditional talking points are starting to sound outdated.


“Generative AI is everywhere—and most tools require access to your organization’s most confidential data,” said Khash Kiani, Head of Security, Trust, and IT at ASAPP. “This Cybersecurity Awareness Month, leaders need to go beyond the basics and understand the new wave of risks generative AI introduces.”


The Invisible Threats Lurking in AI Systems


The security conversation is shifting from firewalls and phishing emails to far more subtle attacks that exploit the probabilistic nature of AI itself. Kiani points to two of the most concerning techniques: prompt injection and data poisoning.


Prompt injection allows attackers to sneak malicious instructions into data an AI model processes—sometimes hidden within ordinary inputs like customer messages or product reviews. These invisible commands can trick a model into revealing sensitive data, generating misinformation, or even executing unintended actions through connected systems.


Data poisoning, meanwhile, corrupts the very knowledge the AI learns from. If attackers seed false or malicious data into a company’s training corpus or retrieval sources, the AI could absorb harmful patterns—leading it to leak confidential data or make catastrophic errors later.


“Everyone knows the general concept of cybersecurity,” Kiani said, “but few are prepared for emerging threats like prompt injection and data poisoning. These are subtle, dangerous, and often invisible ways in which AI systems can be manipulated.”


From Code Security to Cognitive Security


Traditional software security relies on deterministic testing: a fixed input should always produce a predictable output. But generative AI doesn’t work that way. Its outputs are based on probabilities, shaped by vast and dynamic data sources. That unpredictability makes traditional testing blind to certain categories of risk.


“With traditional deterministic software, security testing can identify most vulnerabilities,” Kiani noted. “But with generative AI, the same reviews may miss nuanced risks—like a malicious prompt hidden in customer feedback that bypasses controls, or two AI agents communicating in ways that leak sensitive data.”


This new paradigm has sparked the rise of AI red teaming, model governance frameworks, and secure-by-design pipelines—initiatives that merge security and data science expertise. Yet most organizations are still catching up. A 2025 Deloitte survey found that fewer than 15% of companies running generative AI tools have conducted dedicated AI threat modeling.


Building Trust in the Age of Autonomous Intelligence


Kiani’s warning underscores a broader industry reality: cybersecurity is no longer just about protecting systems, it’s about protecting the integrity of intelligence itself. As enterprises deploy AI agents that autonomously draft code, analyze documents, or handle customer data, the margin for error has never been thinner.


“AI security isn’t just about protecting systems anymore—it’s about safeguarding the integrity of the intelligence you build,” Kiani said.


This Cybersecurity Awareness Month, experts argue that the call to action must evolve. It’s no longer enough to teach users to spot phishing scams—organizations need to train their employees and developers to think adversarially about how AI models can be manipulated, subverted, or deceived.


Because in 2025, cybersecurity awareness isn’t just about knowing what not to click. It’s about knowing what your AI might unknowingly learn.

bottom of page