Cybercrime's AI Arms Race: Flashpoint Warns Defenders to Evolve or Fall Behind

In the unfolding battleground of cybersecurity, artificial intelligence is no longer just a tool for defenders—it’s also rapidly becoming a weapon of choice for attackers. From multilingual phishing campaigns to custom-built AI models for fraud, adversaries are exploiting generative technologies to scale, deceive, and disrupt at a level that would have been science fiction just five years ago.


Flashpoint, a leading threat intelligence firm monitoring over 100,000 illicit sources across the dark web, Telegram, and underground AI communities, has issued a stark warning: the AI revolution is well underway—and the criminals are ahead of the curve.


Between January and May 2025, Flashpoint analysts logged over 2.5 million AI-related posts across underground channels. These ranged from jailbreak prompts and deepfake-for-hire ads to sophisticated phishing toolkits and tailored language models trained to evade detection. The takeaway? The era of “script kiddies” armed with off-the-shelf malware is giving way to AI-native threat actors who can iterate, personalize, and deploy at scale.


“Threat actors aren’t waiting for the enterprise world to figure out how to use AI,” said a Flashpoint spokesperson. “They’re already experimenting, refining, and operationalizing it across every phase of the attack chain.”


The New Asymmetry


This surge in AI-driven cybercrime presents a daunting asymmetry. Attackers no longer need technical mastery—they need access. Pretrained models, open-source tools, and community-driven jailbreaks make it easier than ever to launch convincing scams, generate malicious code, or create synthetic media indistinguishable from reality.


And the tools are only getting better.


Custom LLMs like WormGPT and FraudGPT have emerged in underground forums, boasting capabilities tailored for criminal use cases—everything from crafting persuasive phishing emails to writing malware with evasion techniques baked in. Add in the rise of pay-per-prompt services and AI-as-a-Service models, and it’s clear: cybercrime is scaling with an efficiency that defenders can’t counter with traditional playbooks.


Enter the Defender’s Guide


Flashpoint’s newly released report, AI and Threat Intelligence: The Defenders’ Guide, seeks to reset the conversation. Rather than chasing hype, it offers a grounded look at how defenders can realistically adopt AI to meet the moment.


“The question isn’t just how AI is being used by threat actors—it’s how that activity transforms the risk environment for defenders,” the report states.


Among its insights, the guide urges cybersecurity leaders to:


  • Challenge assumptions about AI capabilities and limitations


  • Identify operational blind spots introduced by AI-generated threats


  • Combine analyst insight with AI-assisted triage for faster, more accurate decision-making


  • Stop chasing shiny objects and instead focus on workflows that tangibly boost defensive outcomes


In short, it’s not about adopting AI for AI’s sake—it’s about knowing when automation can augment human expertise, and when it can’t.


Myths, Missteps, and the Path Forward


One of the most valuable sections of the report debunks persistent myths, such as the belief that AI can autonomously detect and stop zero-day threats or that LLMs can replace trained analysts. While AI excels at pattern recognition and scaling mundane tasks, it still lacks the nuance, intuition, and ethical judgment that seasoned defenders bring to the table.


“Organizations must strike the right balance,” the guide emphasizes. “That means pairing the brute force of automation with the strategic lens of intelligence.”


Scaling with Precision


Flashpoint’s strategy reflects a broader truth emerging across the cybersecurity sector: speed alone isn’t enough. The winners in this AI arms race will be those who can not only keep pace but do so with clarity and context. That means understanding when to automate, when to escalate, and when to let the human brain take the wheel.


As AI continues to rewrite the cyber threat landscape, the message is clear: adapt deliberately, deploy strategically, and never mistake automation for omniscience.


Because in the age of AI-powered crime, defenders can’t afford to play catch-up.