
Living off the Land Attacks are Jeopardizing our Most Critical Infrastructure and Organizations

In this interview, Mayank Kumar, Founding AI Engineer, DeepTempo, breaks down why living off the land attacks have become a favored tactic for advanced adversaries, why detection remains so difficult even with modern AI, and what defenders are missing when they focus on individual alerts instead of intent and sequence. His insights offer a clear, and sobering, view into why critical infrastructure is uniquely exposed and what CISOs must rethink to stay ahead.

Mayank Kumar, Founding AI Engineer, DeepTempo

We’re hearing more lately about “Living off the Land” attacks that target critical infrastructure. What exactly are they, and how do they differ from more traditional cyberattacks?


Living off the Land attacks exploit what’s already inside the victim’s environment. Instead of introducing new malware that can be scanned and flagged, attackers repurpose legitimate administrative tools, scripts, and processes to perform malicious actions. PowerShell, WMI, and PsExec are among the most commonly abused. Because the attacker is using software that’s trusted and already whitelisted, there’s nothing obviously foreign for cyber defense technologies to detect. The trick is not the tool; it’s the sequence. Each command is legitimate, but the chain isn’t.
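That gap between per-event and per-sequence visibility can be shown in a few lines. This is a minimal illustrative sketch, not a real detector: the tool names are real Windows binaries, but the event records, stage labels, and "known progression" are hypothetical simplifications.

```python
# Minimal sketch (hypothetical events and stage labels): each event passes a
# per-tool allowlist check, but the ordered chain matches a classic intrusion
# progression. This is the core LOTL detection gap.

ALLOWLISTED_TOOLS = {"powershell.exe", "wmic.exe", "psexec.exe", "net.exe"}

# A hypothetical attack chain: every step uses a trusted, built-in tool.
events = [
    {"tool": "net.exe",        "stage": "discovery"},          # enumerate shares
    {"tool": "powershell.exe", "stage": "credential_access"},  # harvest credentials
    {"tool": "wmic.exe",       "stage": "lateral_movement"},   # remote execution
    {"tool": "psexec.exe",     "stage": "persistence"},        # install a service
]

# Per-event check: nothing fires, because every tool is trusted.
per_event_alerts = [e for e in events if e["tool"] not in ALLOWLISTED_TOOLS]

# Sequence check: the *ordered* stages match a known intrusion progression.
KNOWN_PROGRESSION = ("discovery", "credential_access",
                     "lateral_movement", "persistence")
sequence_alert = tuple(e["stage"] for e in events) == KNOWN_PROGRESSION

print(per_event_alerts)  # [] -- each command looks legitimate in isolation
print(sequence_alert)    # True -- only the chain gives the attack away
```

The point of the sketch is that both checks see exactly the same events; only the second asks about order.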


Why are these attacks so difficult to detect, even for organizations that already have advanced monitoring and AI-based defenses in place?


Detection is difficult because LOTL attacks don’t trigger the signatures or behavioral flags that traditional security tools are built to recognize. Security operations centers are trained to look for binaries, malware signatures, or traffic patterns that appear unnatural. But in a LOTL scenario, everything looks normal. The commands are valid, the user accounts are real, and the activity occurs inside approved applications.


The attackers often blend into regular administrative or maintenance activity. For example, when a legitimate system administrator runs PowerShell scripts at 2 a.m., that might not raise an alert. Attackers know this. They exploit that gray zone. Even machine learning models trained on historical threat data can miss these events, because there’s little to no difference between normal and malicious execution until it’s too late. The malicious logic only appears when actions are viewed in sequence, and sequencing is something these detectors don’t do.


How widespread are these attacks, particularly among nation-state adversaries and critical infrastructure targets?


LOTL techniques are now standard across advanced persistent threat campaigns, especially in sectors where operational technology and legacy systems limit visibility. Critical infrastructure environments such as energy, manufacturing, and transportation are frequent targets due to their complex system interdependencies and minimal endpoint instrumentation.


These attacks typically unfold over weeks or months. Initial access may occur through exposed credentials, legacy interfaces, or misconfigured identity systems. From there, attackers use native utilities to enumerate systems, establish persistence, and move laterally, often across encrypted east-west paths.


What makes them hard to detect is not the tooling, but the sequencing. Each action aligns with normal operations in isolation. The maliciousness only emerges when those steps are viewed as part of a larger, structured progression. Detecting that requires models that can infer intent from temporal and structural patterns, not just flag isolated anomalies.


You mentioned that even AI struggles to stop these attacks. Why is that?


The issue isn’t that AI can’t help; it’s that most current applications of AI in threat detection are still anchored to legacy detection logic. Whether rules-based or machine learning-based, many systems rely on predefined patterns or statistical deviations. That approach breaks down when the activity isn’t anomalous.


LOTL attacks deliberately avoid recognizable patterns. They operate within expected norms, using trusted tools and valid credentials. The surface-level signals appear routine.


Most AI today is statistical pattern recognition, not sequence reasoning. To detect that kind of LOTL behavior, models need more than pattern recognition. They need context: understanding not just what happened, but why it happened and how it fits into the broader sequence. Without that reasoning capability, even AI-based systems struggle to differentiate malicious logic from normal operations.


So how can organizations realistically strengthen their defenses against LOTL threats?


Effective defense against living-off-the-land activity requires detection systems that move beyond isolated event analysis. The signal is rarely in a single command or process. It’s in the sequence, how actions relate to each other over time.


Models that evaluate structured telemetry through a sequence-aware lens are better equipped to infer intent. Rather than asking “Is this process suspicious?”, they ask “Does this activity make sense given the environment’s normal execution logic?” That’s a more complex question, but it’s also the one that aligns with how modern intrusions unfold.
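One simple way to make that question concrete is to learn how actions normally follow each other, then score new activity by how typical its transitions are. The sketch below is a deliberately toy baseline (bigram transition frequencies over hypothetical event names), not a description of any vendor's model.

```python
# Sketch of sequence-aware scoring (illustrative only): learn transition
# frequencies from normal admin sessions, then score a new sequence by how
# typical each step-to-step transition is under that baseline.
from collections import Counter

normal_sequences = [
    ["login", "list_services", "restart_service", "logout"],
    ["login", "check_disk", "rotate_logs", "logout"],
    ["login", "list_services", "check_disk", "logout"],
]

# Count every adjacent pair (bigram) seen in normal activity.
transitions = Counter(t for seq in normal_sequences for t in zip(seq, seq[1:]))
total = sum(transitions.values())

def sequence_score(seq):
    """Mean baseline frequency of each transition; lower means rarer."""
    pairs = list(zip(seq, seq[1:]))
    return sum(transitions[p] for p in pairs) / (len(pairs) * total)

routine = ["login", "list_services", "restart_service", "logout"]
suspect = ["login", "dump_credentials", "remote_exec", "create_service"]

print(sequence_score(routine) > sequence_score(suspect))  # True
```

Even this toy version ranks the LOTL-style chain below routine maintenance, because none of its transitions have ever occurred in the baseline, which is the intuition behind asking whether activity fits the environment's normal execution logic.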


Context is essential. Who initiated the activity, from where, under what credentials, and with what propagation pattern? These signals, captured across flow, identity, and system logs, can expose malicious logic that static rules or statistical models overlook. Defense requires models that infer the logic of activity across identity, network, and system data.
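Cross-source correlation can be sketched just as simply. The records, account names, and fan-out threshold below are hypothetical; the point is that a propagation pattern only becomes visible once identity, flow, and system events are joined on the actor.

```python
# Illustrative sketch: correlate events from different log sources by account
# and flag accounts whose activity fans out across many distinct hosts.
from collections import defaultdict

# Hypothetical, simplified records from identity, flow, and system logs.
events = [
    {"source": "identity", "account": "svc_backup", "host": "dc01"},
    {"source": "flow",     "account": "svc_backup", "host": "fileserv"},
    {"source": "system",   "account": "svc_backup", "host": "hmi-03"},
    {"source": "system",   "account": "jsmith",     "host": "wks-17"},
]

hosts_by_account = defaultdict(set)
for e in events:
    hosts_by_account[e["account"]].add(e["host"])

# A single account touching many distinct hosts is a propagation signal
# that no single log source would reveal on its own.
FANOUT_THRESHOLD = 3
flagged = [a for a, hosts in hosts_by_account.items()
           if len(hosts) >= FANOUT_THRESHOLD]
print(flagged)  # ['svc_backup']
```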


Finally, detection must assume persistence. LOTL is not a one-shot technique. It’s iterative. That makes continuous reasoning over activity, not point-in-time inspection, a necessary architectural shift for defenders operating in critical environments.
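The difference between point-in-time inspection and continuous reasoning is essentially statefulness. A minimal sketch, with hypothetical stage names and a deliberately simplified alert condition:

```python
# Sketch of continuous (stateful) detection: accumulate stage observations per
# account over time instead of judging each event in isolation.
class ProgressionTracker:
    STAGES = ["discovery", "credential_access", "lateral_movement", "persistence"]

    def __init__(self):
        self.seen = {}  # account -> set of stages observed so far

    def observe(self, account, stage):
        """Record one event; return True once a full progression has accumulated."""
        stages = self.seen.setdefault(account, set())
        stages.add(stage)
        return all(s in stages for s in self.STAGES)

tracker = ProgressionTracker()
# Events may arrive days apart; none is alarming on its own.
alerts = [tracker.observe("svc_backup", s) for s in
          ["discovery", "credential_access", "lateral_movement", "persistence"]]
print(alerts)  # [False, False, False, True]
```

A point-in-time inspector would evaluate each of those four events independently and alert on none of them; only the accumulated state exposes the campaign.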


In short, what’s the takeaway for CISOs and infrastructure leaders?


LOTL attacks succeed not because defenders lack data, but because the data lacks interpretation. The core challenge isn’t visibility; it’s inference. Every environment produces high volumes of telemetry. What matters is whether your detection architecture can reason over that telemetry to distinguish operational logic from malicious progression.


For CISOs, the implication is clear: effective defense requires models that understand context, not just content. It’s not about layering more tools. It’s about deploying systems that recognize intent early, quietly, and before impact occurs. If your detection architecture can’t infer intent from normal activity, you will miss LOTL every time.
