Upwind Pulls the Curtain Back on AI in Cloud Security

For years, cloud security teams have been promised that artificial intelligence would simplify their lives. In practice, much of that AI has arrived sealed inside opaque interfaces—systems that spit out answers without showing their work. As cloud environments sprawl and risk signals multiply, that lack of transparency has become a liability rather than a feature.


This week, Upwind is betting that the next phase of AI-driven cloud security won’t be about smarter black boxes, but about making the logic itself visible. The company announced the general availability of Choppy AI, a set of natural-language capabilities embedded across its cloud-native application protection platform (CNAPP) that aim to accelerate investigation and policy creation without asking security teams to surrender control.


At a time when cloud estates are defined by millions of ephemeral assets, tangled service relationships, and continuously shifting exposure paths, Upwind’s pitch is straightforward: AI should help humans reason faster, not replace reasoning altogether.


“Choppy AI is designed to make cloud security exploration and investigation faster and more intuitive while making the logic behind it clear and auditable,” said Amiram Shachar, co-founder and CEO of Upwind. “By translating natural-language intent into structured, editable queries and rules, we’re giving teams the speed of AI with the transparency and control they need to trust the outcome. Every AI-generated result is visible, editable, and enforceable. Teams can see exactly how logic is constructed and apply it with confidence in production environments.”

From prompts to policies—without the magic trick


Instead of acting as a conversational oracle, Choppy AI functions more like a translator between human intent and machine-enforceable logic. Security teams can type what they want to find or enforce—using plain language—and see that request converted into explicit queries, rules, or investigation paths inside the platform.


Inventory searches become structured expressions that can be inspected and refined. Policy definitions written in English are transformed into transparent detection logic for misconfigurations or risky relationships. In vulnerability management, Choppy AI introduces a conversational mode that allows analysts to ask follow-up questions and drill deeper, with every answer tied back to real assets, runtime exposure, and contextual data.


The key distinction is that nothing stays hidden. Every AI-generated output can be reviewed, modified, reused, and ultimately enforced using the same mechanisms teams already rely on.
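

To make that concrete, here is a minimal sketch of the pattern. This is not Upwind’s actual API or query syntax, which the company has not published here; the names (StructuredQuery, translate_intent) and the toy query language are hypothetical, illustrating only the workflow described: plain language in, inspectable and editable logic out.

```python
# Hypothetical sketch of the "natural language -> editable query" pattern.
# None of these names come from Upwind; the point is the workflow shape:
# the AI proposes structured logic, and a human reviews and edits it
# before it is enforced.

from dataclasses import dataclass, field


@dataclass
class StructuredQuery:
    """An explicit, human-readable query the AI proposes and a human can edit."""
    resource_type: str
    filters: dict[str, str] = field(default_factory=dict)

    def render(self) -> str:
        # Render the query so reviewers see exactly what will run.
        clauses = " AND ".join(f"{k} = {v!r}" for k, v in self.filters.items())
        return f"FIND {self.resource_type} WHERE {clauses}"


def translate_intent(prompt: str) -> StructuredQuery:
    """Stand-in for the AI translation step: plain language in, structured logic out."""
    # A real system would call a language model here; this toy version
    # just demonstrates that the output is data, not a hidden decision.
    return StructuredQuery(
        resource_type="storage_bucket",
        filters={"public_access": "true", "environment": "production"},
    )


# An analyst types plain language...
query = translate_intent("show me production buckets exposed to the internet")

# ...and gets logic they can inspect and refine before enforcing it.
print(query.render())
query.filters["encryption"] = "disabled"  # the human tightens the rule
print(query.render())
```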


AI that augments, not overrides


Upwind’s approach reflects a growing skepticism among security leaders toward autonomous AI systems that operate outside established workflows. Rather than replacing existing tools or introducing a parallel decision engine, Choppy AI is designed to sit directly on top of Upwind’s runtime-aware data model.


That grounding matters. Because responses are derived from live assets, relationships, and execution context, investigations are less about hypothetical risk and more about what is actually reachable and exploitable in production. The result is AI-assisted analysis that accelerates prioritization without introducing new blind spots.
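

A toy example of the idea, with entirely made-up data and field names: runtime context acts as a filter that separates theoretically severe findings from the ones that are actually live and exposed.

```python
# Illustrative sketch (not Upwind's data model) of why runtime grounding
# changes prioritization: the same vulnerability list looks very different
# once it is intersected with what is actually running and reachable.

findings = [
    {"cve": "CVE-2024-0001", "package": "libfoo", "severity": "critical"},
    {"cve": "CVE-2024-0002", "package": "libbar", "severity": "critical"},
]

# Hypothetical runtime context: which packages are loaded in production
# and which services are reachable from the internet.
runtime_loaded = {"libfoo"}
internet_reachable = {"libfoo"}

prioritized = [
    f
    for f in findings
    if f["package"] in runtime_loaded and f["package"] in internet_reachable
]

# Only CVE-2024-0001 survives: it is severe, loaded, and exposed.
print(prioritized)
```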


Built-in oversight for the AI itself


Recognizing that trust doesn’t end with explainable outputs, Upwind says each Choppy AI capability is instrumented with monitoring to observe usage patterns, prompts, and behavior in the real world. That telemetry is used to refine the experience over time while keeping AI actions predictable and aligned with security team expectations.
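

In practice, that kind of oversight often takes the form of a thin audit layer around every AI call. The sketch below is an assumption about the general shape, not Upwind’s implementation: each prompt, and the structured output it produced, is recorded so the AI’s behavior can be reviewed after the fact.

```python
# Hypothetical audit wrapper; none of this reflects Upwind's internals.
# The point: every prompt and every piece of AI-proposed logic is logged,
# so the AI's behavior stays observable and reviewable.

import json
import time
from typing import Callable

audit_log: list[dict] = []  # a real system would persist this durably


def audited(translate: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap an AI translation function so every call leaves an audit record."""
    def wrapper(prompt: str) -> str:
        result = translate(prompt)
        audit_log.append({
            "timestamp": time.time(),
            "prompt": prompt,
            "generated_logic": result,
        })
        return result
    return wrapper


@audited
def translate_intent(prompt: str) -> str:
    # Stand-in for the model call; returns structured logic as text.
    return "FIND workload WHERE internet_exposed = true"


translate_intent("which workloads are exposed to the internet?")
print(json.dumps(audit_log, indent=2))
```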


In an industry increasingly wary of “agentic” systems making silent decisions, this emphasis on observability positions Choppy AI less as an autonomous actor and more as a force multiplier for human judgment.


A signal of where cloud security AI is headed


Choppy AI builds on Upwind’s broader push toward runtime-first, AI-native cloud security, including its earlier Inside-Out AI Security initiatives. But more broadly, it reflects a shift underway across the security market: away from opaque AI answers, and toward systems that show their reasoning as clearly as their results.


As cloud complexity continues to outpace human capacity, the winning AI tools may not be the ones that promise to think for security teams—but the ones that let teams think faster, with fewer assumptions hidden behind the curtain.
