AI Transforms the Battlefield: Attackers and Defenders Race in Cyber’s New Era
- Cyber Jill

- Oct 20
- 4 min read
When the researchers at Forcepoint published their latest intelligence briefing, what emerged looked less like typical malware trends and more like a full-blown arms race. On one side: cybercriminals supercharging their arsenals with malicious large language models (LLMs) and deepfakes. On the other: defenders scrambling to deploy “agentic” AI, multi-stream telemetry and behavioural models to keep pace. The message is clear: AI is now the front line in cybersecurity.
Here’s what the research uncovered — and why it matters.
The Offensive Surge: AI in the Hands of Attackers
According to Forcepoint’s analysis, adversaries are no longer simply exploiting vulnerabilities; they are weaponising AI to make attacks both larger in scale and more sophisticated.
- Threat actors are grabbing malicious LLM variants (such as FraudGPT or WormGPT) and using them to automate phishing-kit creation, generate full-fledged malware, and roll out social-engineering campaigns in bulk.
- Deepfake technology has graduated from gimmick to multi-million-pound fraud. Forcepoint cites a confirmed case of a “£20 million video-call scam” in which company officers were impersonated via synthetic video to authorise a fraudulent transfer.
- Attackers are increasingly drawing on reinforcement-learning (RL) techniques to train generative models that automatically evolve polymorphic payloads: malware that changes its internal structure to slip past endpoint protections.
What’s striking is not just the novelty, but the convergence: generative AI + social engineering + automation = a threat footprint far beyond anything signature-based defence was designed to manage.
Defence by Adaptation: AI for the Protectors
It’s not one-sided. The report emphasises how defenders are turning the same foundational technology against adversaries. Forcepoint highlights several layers of progress:
- Multi-layered ML/DNN classifiers now stream real-time telemetry across endpoints and cloud services to catch threats before they manifest as infections.
- Emerging platforms (in development or early deployment) are combining SSH-log analysis, domain scoring, phishing-URL heuristics and anomaly detection into unified systems capable of spotting previously unseen threats; a minimal sketch of one such heuristic follows this list.
- “Agentic AI” is creeping into Security Operations Centres: autonomous modules that triage alerts, trigger containment workflows and reduce dwell time without waiting for human approval.
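To make that concrete, here is a minimal sketch in Python of the kind of unified URL scoring such platforms perform: a few weak heuristics (IP-literal hosts, suspicious TLDs, credential-themed keywords, deep subdomain nesting) blended into one risk score. Every weight, threshold and watch-list below is a hypothetical stand-in for illustration, not Forcepoint's implementation.

```python
# Illustrative only: several weak phishing heuristics combined into one score.
import re
from urllib.parse import urlparse

SUSPICIOUS_TLDS = {"zip", "xyz", "top", "click"}   # hypothetical watch-list
PHISH_KEYWORDS = {"login", "verify", "account", "update", "secure"}

def url_risk_score(url: str) -> float:
    """Return a 0..1 heuristic risk score for a URL."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    score = 0.0
    # IP-literal hosts are a classic phishing tell.
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        score += 0.35
    # Unusual TLDs weigh in weakly.
    if host.rsplit(".", 1)[-1] in SUSPICIOUS_TLDS:
        score += 0.2
    # Credential-themed keywords in the path or query string.
    text = (parsed.path + "?" + parsed.query).lower()
    hits = sum(1 for kw in PHISH_KEYWORDS if kw in text)
    score += min(0.3, 0.1 * hits)
    # Deeply nested subdomains often mimic legitimate brands.
    if host.count(".") >= 3:
        score += 0.15
    return min(score, 1.0)

if __name__ == "__main__":
    for u in ("https://203.0.113.5/secure/login?verify=1",
              "https://docs.example.com/guide"):
        print(f"{u} -> {url_risk_score(u):.2f}")
```

A real platform would feed scores like this, alongside domain reputation and log anomalies, into a trained classifier rather than hand-tuned weights; the point is the layering of signals.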
As Forcepoint puts it, the defence paradigm is shifting: from static rules and signatures to adaptive, context-aware systems that “move at the speed of data.”
The Arms-Race Dynamic
What unites both sides of this story is speed and adaptation. For every defensive innovation, attackers respond. The report frames the landscape as:
“Every algorithm that learns to protect can also be turned to exploit.”
Practically this means:
- Attackers probe defenders’ models, extract patterns or blind spots, then feed the results into RL or generative systems to evolve around them; or they simply adopt plug-and-play malicious models from dark-web vendors.
- Defenders must train models continually, diversify telemetry streams, and decentralise detection logic so that even polymorphic malware cannot easily hide.
It’s a feedback loop: adversaries test, defenders update, adversaries adapt — and so on. The strategic edge belongs to whoever can learn faster and respond in real time.
Actionable Recommendations for Enterprises
Forcepoint offers a clear set of actionable steps. Here are the key takeaways:
Governance & Controls
- Enforce out-of-band verification for high-value or urgent transactions. The era of convincing video calls is here.
- Implement multi-person approval for critical workflows (especially transfers). Even synthetic impersonation can be foiled by process; a sketch of this control follows this list.
- Pursue zero-trust architecture: every user, every device, every request must be continuously validated. Attackers exploit synthetic identities and real-time social engineering.
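As an illustration of the multi-person approval control, the following sketch only lets a high-value transfer execute once a required number of distinct approvers, none of whom initiated the request, have signed off. The class, threshold and policy values are invented for the example.

```python
# Hypothetical multi-person approval control for high-value transfers.
from dataclasses import dataclass, field

APPROVALS_REQUIRED = 2          # illustrative policy
HIGH_VALUE_THRESHOLD = 50_000   # illustrative threshold, e.g. in GBP

@dataclass
class TransferRequest:
    initiator: str
    amount: int
    approvers: set[str] = field(default_factory=set)

    def approve(self, user: str) -> None:
        # The initiator can never approve their own request.
        if user == self.initiator:
            raise PermissionError("initiator cannot approve their own transfer")
        self.approvers.add(user)

    def can_execute(self) -> bool:
        if self.amount < HIGH_VALUE_THRESHOLD:
            return True  # low-value transfers follow the normal path
        return len(self.approvers) >= APPROVALS_REQUIRED

req = TransferRequest(initiator="deepfaked_cfo", amount=20_000_000)
req.approve("alice")            # one approval is not enough
assert not req.can_execute()
req.approve("bob")              # a second, distinct approver
assert req.can_execute()
```

The design point is that the check lives in the workflow, not in a human's judgment of a video call: a convincing deepfake of one officer still cannot satisfy a rule requiring two others.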
Detection & Response
- Deploy behavioural/anomaly-based AI models, not just signatures. These models detect deviations in identity, posture, flow and context; a toy example follows this list.
- Automate triage and containment workflows so dwell time shrinks: threats are routed straight to active countermeasures rather than waiting in a queue.
- Red-team your ML pipelines: feed them adversarial inputs, probe blind spots, and test your defences from the inside.
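For the anomaly-detection point, here is a toy sketch using scikit-learn's IsolationForest over a few invented login features. The baseline distributions, feature choices and contamination rate are assumptions made for the example; production systems would draw on far richer telemetry.

```python
# Toy behavioural anomaly detection over invented login telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical baseline: office-hours logins with modest activity.
normal = np.column_stack([
    rng.normal(11, 2, 500),        # login hour of day
    rng.normal(50, 15, 500),       # MB transferred per session
    rng.poisson(3, 500),           # distinct hosts contacted
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A 3 a.m. session moving 900 MB across 40 hosts deviates on every axis.
suspicious = np.array([[3, 900, 40]])
print(model.predict(suspicious))   # -1 flags an anomaly, 1 means inlier
```

No signature exists for this session, yet the model flags it because its behaviour sits far outside the learned baseline; that is the shift from signatures to context the report describes.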
People & Processes
- Train staff to recognise AI-crafted phishing, deepfake audio/video fraud, and social-engineering automation. Human intuition remains vital.
- Establish clear escalation paths for suspicious media or anomalous requests. Automation assists, but humans still decide.
- Keep humans in the loop for sensitive actions: data destruction, credential revocation and large transfers still merit a human checkpoint.
Continuous Adaptation
- Monitor dark-web marketplaces and chat forums for emerging malicious AI tools; these reveal early indicators of attacker capability.
- Share threat intelligence across industry groups; attackers are exploiting publicly available LLM-based toolkits.
- Pick vendors whose models are continuously updated and whose logic is explainable; AI without transparency is a risk.
Looking Ahead
The takeaway is stark: AI will not end cyber-crime, but it will redefine it. Organisations that thrive will be those that recognise AI as both a weapon and a shield — and build systems that can adapt, govern and respond at machine-speed.
At the heart of this shift lies a simple question: would you rather use AI to stop an attack that took two minutes to plan, or be left reacting to one you never saw coming? Forcepoint’s research suggests that survival of the fittest in cybersecurity will be decided by who adapts fastest, not just by who has the strongest firewall.
As one executive at Forcepoint aptly summarised:
“If you cannot classify, you cannot protect.”
In the AI-augmented battleground of cyber-defence, classification is simply the starting line.


