AI Adoption Is Reshaping Enterprise Security—And Exposing New Attack Surfaces, Says Aryaka VP
- Cyber Jack
- Jun 20
- 4 min read
In an era where AI is both a catalyst for innovation and a new frontier for cyber threats, enterprises are grappling with how to secure what they can’t always see. We spoke with Aditya K. Sood, VP of Security Engineering and AI Strategy at Aryaka, to unpack the complex risks AI adoption introduces. From adversarial attacks to poisoned data pipelines, Sood explains why legacy security approaches won’t cut it. He also shares how converged networking and AI-powered defense are critical to staying ahead.

How is AI adoption creating cybersecurity challenges for organizations?
AI is a must have for enterprises in 2025 and beyond. It delivers incredible business value, enabling powerful product capabilities, creating significant insights, and automating many manual workflows. But AI adoption is also introducing new vulnerabilities and attack surfaces within enterprise networks due to the unique ways AI systems operate and interact with data. One significant risk comes from the vast amounts of sensitive data that AI systems require for training and inference. If this data is intercepted, manipulated, or stolen during transfer or storage, it can lead to breaches, model corruption, or compliance violations.
Additionally, AI algorithms are susceptible to adversarial attacks, where malicious actors introduce carefully crafted inputs (e.g., altered images or data) designed to mislead AI systems into making incorrect decisions. These attacks can compromise critical applications like fraud detection or autonomous systems, leading to severe operational or reputational damage.
AI adoption also introduces risks related to automation and decision-making. Malicious actors can exploit automated decision-making systems by feeding them false data, leading to unintended outcomes or operational disruptions. For example, attackers could manipulate data streams used by AI-driven monitoring systems, masking a security breach or generating false alarms to divert attention.
If your security team is siloed from your networking team, and you rely on a mix of security point solutions, you won’t be able to guard against these sophisticated threats. This approach works fine in traditional networks, but it’s too slow and disconnected in the era of AI. For example, imagine a bad actor is inserting poisoned data into AI models used to optimize network performance. By the time the networking team realizes there’s a problem and works with the security team to resolve it, the damage will already have been done.
Where does observability come into play?
Observability is a big part of the picture here. To protect their AI ecosystems, enterprises need robust network visibility into these environments – comprised broadly of LLMs, generative AI applications, and the supporting infrastructure. Without insights into your AI workloads, you can’t enforce effective security controls.
As noted above, AI depends on data. Determining where this data is coming from – and protecting yourself from adversarial attacks – requires adequate observability. Consider the example of prompt injections on AI-powered network monitoring software. The goal of the attacker is to bypass guardrails in the AI, injecting unnecessary or falsified traffic streams into the network. The network monitoring model then gets tricked into missing anomalous behavior, fails to provide a proper prediction, and allows an attack. With full visibility, organizations can detect these sort of prompt injections and other attacks, and prevent them before they wreak havoc.
How can organizations protect themselves?
They can’t abandon AI adoption – the technology is too valuable, and they’ll need it to stay competitive with all their peers who are also leveraging artificial intelligence. Overall, enterprises need a paradigm shift in how they treat networking and security architecture. Simply put, they must move to converge networking and security together as one to take on AI. There are so many different and very sophisticated threats at play, you simply can’t reliably solve them by relying on a hodgepodge of point solutions across two different silos.
Unified SASE is the ultimate evolution of networking and security convergence today. Unified SASE allows organizations to operate, analyze, and optimize every aspect of their networks from a single location. Rather than using a bunch of disparate solutions to protect their network, they get all the functionalities they need – from firewalls to secure web gateways – all in one place. This brings needed simplicity to a very complex challenge (supporting GenAI).
Is there a role for AI in solving the AI security challenge?
Absolutely. Ironically, AI adoption is introducing security risks, but the technology can also help equip organizations with the capabilities and insights to thwart bad actors. As attackers increasingly leverage AI to craft sophisticated, evasive threats—like deepfakes, polymorphic malware, and automated phishing—defenders must also turn to AI to match speed, scale, and complexity. AI-driven tools can detect subtle anomalies, automate threat hunting, and adaptively respond to new attack patterns in real time with confidence, closing the gap between detection and mitigation. This is possible because AI models analyze vast network traffic, identifying anomalies, suspicious behavior, or indicators of compromise (IOCs) that might go undetected by traditional methods.
Additionally, AI's potential in behavioral analytics is significant, creating profiles of normal user behavior to detect insider threats or account compromises. But its most potent application is predictive analytics, where AI systems forecast potential vulnerabilities or attack vectors, enabling proactive defenses before threats materialize.
Moreover, AI is an essential partner in safeguarding digital ecosystems. It can help secure AI systems by monitoring model behavior for signs of misuse or adversarial manipulation, enforcing access controls, and validating data integrity. Solving the AI security challenge requires a 'fight fire with fire' approach—leveraging AI as an essential partner in safeguarding digital ecosystems.