Zenity Plugs the Security Gap in Microsoft’s Booming Agentic AI Ecosystem
- Cyber Jill
- Nov 18, 2025
- 3 min read
As enterprises rush to operationalize agentic AI across their workflows, one truth keeps getting louder: the more powerful the agents, the more catastrophic the mistakes when safeguards fail. With Microsoft Foundry and Copilot Studio turning enterprise ideas into fully orchestrated AI workers, the demand for real-time guardrails has turned into a full-on requirement.
Zenity — a fast-rising platform focused on securing and governing AI agents — thinks it has the answer. The company unveiled an inline prevention system built specifically for Microsoft Foundry, designed to give enterprises the kind of runtime enforcement they’ve been begging for. At the same time, Zenity’s deeper integration with Microsoft Copilot Studio is officially hitting general availability, making the company one of the few offering end-to-end oversight across the whole Microsoft agentic stack.
Microsoft’s Agent Factory Is Getting More Powerful — and Riskier
Foundry is evolving at breakneck speed. It’s quickly becoming the place where organizations stitch together models, data sources, custom tools, and third-party systems to build AI agents that behave less like chatbots and more like digital teammates.
But that freedom comes with a dangerous amount of ambiguity. Agent behaviors can shift. Tool access can expand unintentionally. And subtle prompt manipulations can spiral into major data loss. Zenity’s answer is to bring “hard boundaries” to the table — deterministic rules that explicitly stop agents from crossing the line, whether during development or mid-execution.
“Securing AI requires understanding of the intent and the full context of the agent,” said Michael Bargury, CTO and Co-Founder of Zenity. “Our work with Microsoft brings agent-centric security directly into Foundry Control Plane... enabling organizations to implement hard boundaries when adopting AI agents at scale.”
Inline Defense for a New Class of AI Risks
Zenity’s new capabilities lean heavily on real-time enforcement — not suggestions, not heuristics, but actual stops. Their system sits in the execution path of every agent action, cutting off anything that violates policy or exposes sensitive data. Think of it as runtime “circuit breakers” for AI behavior.
The platform layers in deeper tools too:
- Deterministic runtime controls that intercept unsafe actions before they launch
- Lifecycle-wide analysis of every tool, data flow, and execution path an agent touches
- Built-in protection against indirect prompt injection, data exfiltration, and rogue tool use
- Tight integration across Microsoft’s development ecosystem, giving security teams a single policy spine across Foundry and Copilot Studio
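To make the "circuit breaker" idea concrete, here is a minimal, hypothetical sketch of what deterministic inline enforcement on agent tool calls can look like. The names (`ToolCall`, `PolicyEngine`, `execute_guarded`) and the two example policies (a tool allowlist and a sensitive-data pattern check) are illustrative assumptions, not Zenity's actual API or rule set:

```python
# Hypothetical sketch of deterministic inline enforcement for agent tool calls.
# All names and policies here are illustrative, not Zenity's actual product API.
from dataclasses import dataclass, field


@dataclass
class ToolCall:
    """A single action an agent wants to take."""
    tool: str
    args: dict


@dataclass
class PolicyEngine:
    """Deterministic rules checked before any tool call runs."""
    allowed_tools: set = field(default_factory=set)
    blocked_patterns: tuple = ("password", "ssn", "api_key")

    def check(self, call: ToolCall) -> tuple[bool, str]:
        # Hard boundary 1: the tool must be explicitly allowlisted.
        if call.tool not in self.allowed_tools:
            return False, f"tool '{call.tool}' not in allowlist"
        # Hard boundary 2: arguments must not contain sensitive markers.
        payload = str(call.args).lower()
        for pattern in self.blocked_patterns:
            if pattern in payload:
                return False, f"sensitive pattern '{pattern}' in arguments"
        return True, "ok"


def execute_guarded(call: ToolCall, engine: PolicyEngine, runner):
    """Sit in the execution path: run the call only if policy passes."""
    ok, reason = engine.check(call)
    if not ok:
        # Deterministic stop, not a warning: the action never launches.
        raise PermissionError(f"blocked: {reason}")
    return runner(call)
```

The key design point this sketch illustrates is that the check sits *in* the execution path and fails closed: a violating call raises before the tool ever runs, rather than being flagged after the fact.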
Microsoft Wants Guardrails as Much as Customers Do
Microsoft, for its part, has been vocal about the importance of safety infrastructure as AI agents get more capable.
“Zenity’s integration gives companies real-time control and visibility over the AI agents built with Microsoft Foundry and Copilot Studio,” said Sarah Bird, Chief Product Officer of Responsible AI at Microsoft. “This lets security teams support innovation… while keeping data safe and meeting compliance needs, helping companies create trustworthy AI agents at scale.”
Governance Is No Longer Optional
The shift toward agentic AI is accelerating — and it’s changing how enterprises think about security architecture. Instead of securing static applications, teams now have to secure autonomous agents that behave dynamically, interact with sensitive tools, and make decisions in real time.
Zenity’s pitch is that runtime enforcement and build-time posture checks should become table stakes. Without them? You’re essentially letting autonomous software negotiate its own boundaries.
With Copilot Studio protections now generally available and Foundry-focused controls entering preview soon, Zenity is positioning itself as the de facto security layer for enterprises embracing Microsoft’s agentic future.
If the market keeps moving this fast — and all signs say it will — agentic AI won’t just need smarter models. It’ll need smarter guardrails. Zenity is betting that’s where the real battle for trust will be won.