top of page

WitnessAI Unveils New Tools to Pressure-Test and Protect Enterprise LLMs

After posting record sales last quarter, WitnessAI is doubling down on one of the thorniest challenges in the enterprise: keeping large language models secure. The company unveiled two new products this week, Witness Attack and Witness Protect, aimed at helping businesses both probe their AI systems for weaknesses and defend them in real time once deployed.

Automating the Red Team

Witness Attack functions as an automated red-teaming platform. Instead of relying solely on manual security assessments, the tool bombards models with adversarial prompts to uncover hidden vulnerabilities. Techniques include multimodal attacks, reinforcement-learning exploits, multi-step jailbreaks, and fuzzing across APIs. By generating synthetic attack traffic at scale, the system allows developers to identify weak points in advance, reducing the chance that malicious actors will discover them first.

Runtime AI Defense

Witness Protect takes a different tack. Described as a next-generation AI firewall, it continuously monitors model behavior for signs of manipulation. The platform enforces runtime safeguards such as prompt filtering, intent-based response control, and dynamic redaction of sensitive data. WitnessAI claims that the product can block more than 99 percent of prompt injection attempts, thanks to detection algorithms trained on two years of synthetic prompt and conversational attack data. The firewall is designed to operate across more than 100 different types of LLMs, offering a standardized layer of protection even in heterogeneous AI environments.

One Platform, Not Five

The combined release pushes WitnessAI further into the role of a one-stop shop for AI security and compliance. The company’s platform now spans the full lifecycle of enterprise AI—from safe model development, to compliant employee usage, to runtime defense for production apps and agents.

"Enterprises don't want to buy five different products to ensure their employees and customers can use AI safely," said Rick Caccia, WitnessAI CEO. "With the introduction of Witness Attack, enterprises can now ensure automated testing and hardening of their internally-developed models, apps, and agents. Witness Protect adds even better defenses against model attacks, and is already in customer evaluations to replace previously-deployed AI firewall solutions from legacy security providers."

The Bigger Picture

The timing is notable. Enterprises are experimenting with generative AI at a pace that often outstrips their ability to secure it, creating openings for prompt injection, data leakage, and regulatory missteps. By packaging proactive testing with defensive enforcement, WitnessAI is betting that CISOs and developers alike want integrated solutions that slot into their workflows without multiplying vendor contracts.

If adoption follows early evaluations, WitnessAI’s new releases could set a bar for how AI security is expected to be delivered—continuously, automatically, and across every model in play.

bottom of page