AI Turns Insiders Into the Biggest Security Risk, Exabeam Warns
- Cyber Jack
A new multinational study from Exabeam shows that insider threats have leapfrogged external attackers as the top concern for enterprise security teams, a shift fueled by the rapid adoption of artificial intelligence. The report, From Human to Hybrid: How AI and the Analytics Gap Are Fueling Insider Risk, surveyed more than 1,000 cybersecurity professionals across industries and geographies. The verdict is clear: AI is making it easier for trusted accounts, both human and machine, to go rogue.
Insiders Eclipse Hackers at the Gate
Sixty-four percent of respondents now believe insiders pose a greater danger than outside attackers. That includes both malicious employees and legitimate accounts that have been compromised. The perception is not just theoretical—53 percent of organizations reported a rise in insider incidents over the past year, and more than half expect that trend to accelerate over the next 12 months. Government agencies, manufacturing, and healthcare anticipate the steepest increases, while Asia-Pacific shows the highest regional growth in insider risk awareness.
“Insiders aren’t just people anymore,” said Steve Wilson, Chief AI and Product Officer at Exabeam. “They’re AI agents logging in with valid credentials, spoofing trusted voices, and making moves at machine speed. The question isn’t just who has access — it’s whether you can spot when that access is being abused.”
AI Supercharges Attacks From Within
Artificial intelligence is proving to be a double-edged sword. While security teams deploy AI tools to accelerate detection, attackers are using the same technology to sharpen their playbooks. Two of the top three insider vectors today are AI-related, with AI-driven phishing and social engineering leading the pack. These campaigns are harder to detect because they can adapt in real time and mimic authentic communications.
Unauthorized use of generative AI inside enterprises adds another layer of risk. Seventy-six percent of organizations report some level of unapproved GenAI use by employees, with the highest rates in technology, finance, and government. In some regions, such as the Middle East, shadow AI adoption is now seen as the top insider concern.
Kevin Kirkwood, Exabeam’s CISO, underscored the stakes: “AI has added a layer of speed and subtlety to insider activity that traditional defenses weren’t built to detect. Security teams are deploying AI to detect these evolving threats, but without strong governance or clear oversight, it’s a race they’re struggling to win. This paradigm shift requires a fundamentally new approach to insider threat defense.”
Programs Exist, but Detection Lags
Nearly nine in ten organizations say they have insider threat programs, but fewer than half are using user and entity behavior analytics (UEBA), the baseline capability needed to spot abnormal activity. Instead, many rely on training programs, endpoint tools, and access controls that provide visibility but not the behavioral context necessary to identify subtle misuse.
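To see what behavioral context adds beyond access visibility, consider a minimal sketch of the idea behind UEBA. This is illustrative only, not Exabeam's implementation, and every name in it (the event shape, `build_baselines`, `is_anomalous`) is hypothetical: it learns each identity's normal login hours from history and flags logins that deviate sharply from that baseline.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical event shape: (identity, hour_of_day) per successful login.
history = [
    ("alice", 9), ("alice", 10), ("alice", 9), ("alice", 11),
    ("svc-report-bot", 2), ("svc-report-bot", 2), ("svc-report-bot", 3),
]

def build_baselines(events):
    """Summarize each identity's historical login hours as mean and spread."""
    hours = defaultdict(list)
    for identity, hour in events:
        hours[identity].append(hour)
    return {
        identity: (mean(h), stdev(h) if len(h) > 1 else 1.0)
        for identity, h in hours.items()
    }

def is_anomalous(baselines, identity, hour, threshold=3.0):
    """Flag a login far outside the identity's established pattern."""
    if identity not in baselines:
        return True  # no baseline at all is itself worth a look
    mu, sigma = baselines[identity]
    return abs(hour - mu) / max(sigma, 0.5) > threshold

baselines = build_baselines(history)
print(is_anomalous(baselines, "alice", 10))           # False: inside her normal window
print(is_anomalous(baselines, "svc-report-bot", 14))  # True: far outside its pattern
```

Production UEBA models many more dimensions (peer groups, data volumes, devices, geography), but the core point survives the simplification: an access log shows who got in, while a baseline shows whether that access looks like them.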
Exabeam’s own strategists argue that the missing piece is context. As security researcher Steve Povolny put it, “Insider threat detection is fundamentally limited by a lack of cohesive visibility and contextual awareness. Privacy concerns, fragmented tools, and the challenge of interpreting intent are creating critical blind spots that security leaders can’t afford to ignore.”
The Leadership Disconnect
Perhaps most troubling is the leadership gap. Seventy-four percent of security professionals believe their executive teams underestimate the severity of insider risk. That misalignment delays investment in analytics and slows the adoption of policies that could narrow detection gaps. While nearly all respondents (97 percent) report some form of AI deployment in their insider threat programs, frontline analysts say many of those deployments are still in pilot phases—far from the fully operational systems their executives believe exist.
The New Insider: Human and Machine
The rise of AI agents as “non-human insiders” introduces new complexities. These tools can act with real credentials, access enterprise systems, and operate autonomously. They are not inherently malicious, but if left unmonitored they create dangerous blind spots.
Wilson emphasized the shift: “We’ve observed behaviors like unauthorized access attempts and policy workarounds, often unnoticed by traditional controls. Treating these agents as trusted extensions of users, without additional monitoring, creates a dangerous blind spot.”
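One pragmatic response is to stop letting agent credentials inherit the standing of the humans who deploy them. The sketch below is a hedged illustration, not a product feature, and all names in it (`AgentScope`, `authorize`, the action strings) are hypothetical: each agent carries an explicit, reviewable scope tied to a human owner, and anything outside that scope is denied and logged rather than absorbed into normal user traffic.

```python
from dataclasses import dataclass, field

@dataclass
class AgentScope:
    """Explicit, reviewable permissions for a non-human identity."""
    agent_id: str
    allowed_actions: set = field(default_factory=set)  # e.g., {"read:reports"}
    on_behalf_of: str = ""                              # human owner, for the audit trail

def authorize(scope: AgentScope, action: str, audit_log: list) -> bool:
    """Permit only declared actions; record every decision for later review."""
    allowed = action in scope.allowed_actions
    audit_log.append({
        "agent": scope.agent_id,
        "owner": scope.on_behalf_of,
        "action": action,
        "allowed": allowed,
    })
    return allowed

audit_log = []
bot = AgentScope("report-bot-01", {"read:reports"}, on_behalf_of="alice")

authorize(bot, "read:reports", audit_log)      # True: within declared scope
authorize(bot, "export:customers", audit_log)  # False: denied and logged

for entry in audit_log:
    if not entry["allowed"]:
        print(f"ALERT: {entry['agent']} attempted {entry['action']} (owner: {entry['owner']})")
```

The mechanism matters less than the stance: agents get their own least-privilege identities and their own audit trails, so a policy workaround surfaces as a denied, logged event instead of trusted activity under a valid account.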
Closing the Gap
The Exabeam report concludes that solving the insider threat problem requires more than technology investment. Organizations must align leadership priorities with operational realities, build governance models for AI adoption, and expand visibility into both human and machine activity. Without those changes, the very AI tools meant to accelerate productivity risk becoming the most dangerous insiders of all.