It’s Time for AI in Cybersecurity to Earn its Keep
This guest article was contributed by Seth Goldhammer, Vice President of Product Management, Graylog

For some time, AI has been talked about as a miracle cure for cybersecurity problems. It is supposed to help security teams cut through alert noise, spot threats faster, and respond to incidents automatically. And while some of that is starting to happen, many security leaders are now asking, “Is any of this actually helping?”
That question is being asked more frequently in team meetings, in boardrooms, and during annual budget planning. The discussion around AI in security is shifting. People are no longer impressed by AI just because it sounds smart. They want results. If a tool claims to use machine learning or behavioral analysis, the follow-up question is, “What has it done for the security analyst in the last quarter? Has it saved time? Has it caught something they would’ve missed? Has it reduced alert noise?”
The same goes for Return on Investment (ROI). McKinsey found that more than 70% of organizations already using AI plan to spend more, hoping to see clearer financial results. Cybersecurity teams want the same thing – a clear answer on what they’re getting in return.
The Pressure to Prove Value Is Growing
Security teams are under pressure due to tight budgets, talent shortages, nonstop alerts, and increasingly complex environments. It’s no surprise that leaders are starting to hold technology to a higher standard, including AI tools.
What used to be a “nice-to-have” feature has become a budget item that has to be justified. If a security feature is being sold as an AI-driven solution, it must deliver real impact, whether it’s faster detection, quicker investigations, or reduced analyst burnout. The pressure isn’t coming just from within security teams, either. Boards and executive teams are also demanding results. They want to know that dollars spent are making systems safer and teams more effective.
This demand for accountability is where many AI tools start to fall short. They may be technically impressive, but if they don’t fit into the team’s actual workflow, or if they take too much effort to maintain, they end up creating more work than they save.
We Built Data Lakes. Now What?
One area that’s getting a second look is the massive investment many companies have made in data lakes. For years, the thinking was to collect everything (every log, every event, every alert) and figure out the insights later. These systems were often designed and managed by data engineering or analytics teams with the idea that, at scale, they would power smarter security through AI.
But in practice, these setups don’t always align with the day-to-day needs of a SOC. Security analysts don’t need all the data. They do need the right data, at the right time, with enough context to make decisions quickly. When data is locked away in a slow or complex system, it can actually slow response times. And if the data has not been conditioned appropriately for downstream analytics, it can produce inaccurate findings. A data lake is great for post-incident analysis, but not so much for real-time detection and triage. There’s often an internal disconnect too: data teams and security teams don’t always speak the same language or share the same priorities. As a result, tools and platforms that look powerful on paper don’t get used the way they were intended.
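To make the “right data, with context” point concrete, here is a minimal Python sketch of what conditioning might look like before events reach real-time analytics. The event types, field names, and the asset-criticality lookup are all illustrative assumptions of mine, not a reference to any specific product.

```python
from datetime import datetime, timezone

# Hypothetical asset inventory; in practice this would come from a CMDB or asset tagging.
ASSET_CRITICALITY = {"db-prod-01": "high", "dev-laptop-42": "low"}

# Event types an analyst actually acts on in real time (illustrative list).
RELEVANT_EVENTS = {"failed_login", "privilege_escalation", "new_admin_account"}

def condition_event(raw: dict) -> dict | None:
    """Keep only the events analysts need, and attach the context
    (normalized time, asset criticality) that triage decisions depend on."""
    if raw.get("event_type") not in RELEVANT_EVENTS:
        return None  # not needed for triage; it still lands in the lake for later analysis
    return {
        "event_type": raw["event_type"],
        "user": raw.get("user", "unknown"),
        "host": raw.get("host", "unknown"),
        # Normalize timestamps so downstream analytics compare like with like.
        "observed_at": datetime.fromtimestamp(raw["ts"], tz=timezone.utc).isoformat(),
        "asset_criticality": ASSET_CRITICALITY.get(raw.get("host"), "unknown"),
    }
```

The design choice being illustrated is simple: the lake keeps everything, but what flows to detection and triage is a smaller, normalized, context-enriched stream aligned to the questions analysts actually ask.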
It’s Not Just About the Tech, It’s About People Too
A lot of the problems with AI tools aren’t about the technology itself. They are about how people are expected to use them. Security teams are already stretched thin. They don’t have time to babysit a system that requires constant conversational prompting and manual validation. I recall, earlier in my career, a SOC manager asking for keyboard shortcuts to triage alerts because using a mouse consumed too much time. I wonder how he feels about analysts typing full sentences to perform data analysis. That’s why, if an AI tool flags something, analysts need to understand why: what changed and what triggered the alert. If the tool can’t explain that clearly, it becomes more of a problem than a help. In the end, analysts won’t trust it and might even start ignoring it.
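One way to picture that requirement is an alert that carries its own explanation. The schema below is purely a sketch of mine, not any vendor’s format, and the field values are made up for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ExplainedAlert:
    """A flagged finding that tells the analyst why it fired, not just that it fired."""
    title: str
    severity: str
    what_changed: str                      # the deviation from normal behavior
    trigger: str                           # the rule or model signal that fired
    evidence: list[str] = field(default_factory=list)  # raw events backing the call

# Hypothetical example of what an explainable alert could look like.
alert = ExplainedAlert(
    title="Unusual login for svc-backup",
    severity="medium",
    what_changed="First login from a new country in 90 days of history",
    trigger="behavioral baseline: geo-velocity deviation",
    evidence=["auth event at 03:12 UTC", "auth event at 03:14 UTC"],
)
```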
Another challenge is that many AI tools were built for ideal conditions: clean data, clear labeling, and fully staffed SOCs. Most real-world environments are messy. Teams are juggling competing priorities, reacting to potentially urgent threats, and trying to do more with less. Tools that can’t adapt to that reality are not going to be used. While Large Language Models handle inconsistent data better than traditional machine learning algorithms, the more you ask an LLM to draw conclusions, the greater the potential for hallucination. Conditioning data to align with the questions you want to ask yields the best results, and reviewing the ‘thought process’ an LLM uses to reach its determinations allows an analyst to recognize when a logic error was introduced.
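Here is a hedged sketch of that idea in Python. The ask_llm call is a placeholder for whatever model or client a team actually uses; the point is feeding the model only conditioned events and keeping its stated reasoning where an analyst can audit it.

```python
import json

def build_prompt(question: str, conditioned_events: list[dict]) -> str:
    # Send only the events that bear on the question, already normalized,
    # rather than dumping the whole data lake into the context window.
    return (
        "Answer the question using only the events below. "
        "Show your reasoning step by step, then give a one-line conclusion.\n\n"
        f"Question: {question}\n"
        f"Events: {json.dumps(conditioned_events, indent=2)}"
    )

def triage_with_reasoning(question: str, events: list[dict]) -> dict:
    prompt = build_prompt(question, events)
    answer = ask_llm(prompt)  # placeholder for the team's LLM client of choice
    # Keep the prompt and the full response so an analyst can audit the
    # 'thought process' and catch a logic error before acting on the conclusion.
    return {"question": question, "prompt": prompt, "llm_response": answer}
```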
Outcome Over Optics
What’s becoming clear is that AI in cybersecurity is no longer exciting just because it sounds smart. It has to be useful. Teams want tools that actually help them save time, catch things they’d miss, and make their work easier. This is changing how both buyers and vendors talk about value. Security teams don’t want a tool they can’t understand. They want something they can trust, that fits into their workflow, and that makes their job easier. If an AI tool can do that, even in small ways, it’s worth keeping.
The Real Test Is Happening Now
AI has practical applications in security operations that live below the hype. We’ve moved past what’s possible to what actually works. Security teams don’t have time for theory; what they need and want are tools that deliver under pressure and in real-world situations. There is a shift happening in how we use AI, judge it, and decide whether it’s worth trusting. The buzz hasn’t disappeared, but it’s more grounded and more focused. We’re done asking what AI might do someday. The real question now is: what is AI doing for our security efforts? That’s the benchmark that counts.