
Tumeryk and DataKrypto Launch Encrypted Guardrails, Redefining AI Security for the Enterprise

In a move set to reshape the landscape of AI security and compliance, Tumeryk and DataKrypto have unveiled a joint solution that could become the new gold standard for protecting generative AI systems across highly regulated industries.


Dubbed Encrypted Guardrails for Operational Security, the integration blends DataKrypto’s continuous encryption technology with Tumeryk’s AI trust and governance suite, delivering end-to-end protection across every stage of AI deployment—from RAG pipelines to real-time prompt interactions.


The collaboration aims to eliminate one of the biggest blind spots in enterprise AI: the unsecured data flows and unmonitored model operations that remain vulnerable to injection attacks, exfiltration, and regulatory non-compliance.


“Encrypted Guardrails close the final mile of AI security,” said Rohit Valia, Founder & CEO of Tumeryk. “Organizations can now innovate with GenAI, confident that every token, from retrieval to response, is both policy-aligned and cryptographically protected. This is a game-changer for highly regulated sectors.”

The AI Security Gap


As adoption of large language models accelerates, organizations are navigating a delicate balancing act: deliver on the promise of AI while protecting customer data, complying with GDPR and HIPAA, and avoiding the pitfalls of ungoverned machine reasoning.


Existing AI guardrail tools typically operate post-hoc—monitoring model outputs and applying reactive filters. But according to both companies, that’s not enough.


The new joint solution intercepts threats earlier in the lifecycle by encrypting not just user inputs and model outputs, but also embeddings, model weights, tool-call payloads, and even policy rules themselves.


How It Works


At the core of the platform is DataKrypto’s FHEnom for AI, a full-stack encryption layer that enables computation on encrypted data, including real-time secure execution within hardware-isolated enclaves. This allows LLMs to function without ever decrypting sensitive context or parameters.
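The article does not disclose FHEnom's internals, but the core idea of computing on data that is never decrypted can be illustrated with a toy additively homomorphic scheme (classic Paillier). This is a minimal conceptual sketch, not production FHE: multiplying two ciphertexts yields a ciphertext of the *sum* of the plaintexts, so a server can aggregate values it cannot read. The tiny primes are for demonstration only.

```python
import math
import random

def keygen(p=61, q=53):
    """Toy Paillier keypair. Real deployments use ~1024-bit primes."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid shortcut when the generator g = n + 1
    return (n,), (n, lam, mu)     # (public key, secret key)

def encrypt(pk, m):
    (n,) = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:    # r must be invertible mod n
        r = random.randrange(1, n)
    # c = (1 + n)^m * r^n mod n^2
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(sk, c):
    n, lam, mu = sk
    n2 = n * n
    u = pow(c, lam, n2)
    return ((u - 1) // n) * mu % n   # L(u) = (u - 1) / n, then scale by mu

def add_encrypted(pk, c1, c2):
    """Multiplying ciphertexts adds the underlying plaintexts."""
    (n,) = pk
    return c1 * c2 % (n * n)
```

A server holding only the public key can run `add_encrypted` over user data and return the result; only the key holder can decrypt it. Fully homomorphic schemes extend this to arbitrary computation, at far greater cost.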


Meanwhile, Tumeryk’s Self-Calibrating Prompt Security operates directly on the encrypted inputs, leveraging enclave-based inspection to flag or block non-compliant, toxic, or permissive prompts before they ever reach the model.
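Tumeryk's inspection logic is proprietary, but the pre-model guardrail pattern it describes — evaluating a prompt against policy rules and blocking it before inference — can be sketched in a few lines. The rule names and patterns below are hypothetical placeholders; a real guardrail would combine ML classifiers with organization-specific policies.

```python
import re
from dataclasses import dataclass, field

@dataclass
class Verdict:
    allowed: bool
    reasons: list = field(default_factory=list)

# Hypothetical policy rules for illustration only.
POLICIES = {
    "prompt_injection": re.compile(r"ignore (all|previous) instructions", re.I),
    "pii_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN pattern
}

def inspect_prompt(prompt: str) -> Verdict:
    """Check a prompt against every policy BEFORE it reaches the model."""
    hits = [name for name, pattern in POLICIES.items() if pattern.search(prompt)]
    return Verdict(allowed=not hits, reasons=hits)
```

The key design point is placement: the check runs on the inbound prompt, so a violation never consumes model compute or enters the context window, unlike post-hoc output filters.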


Telemetry from these events feeds into Tumeryk’s AI Trust Controller, providing dashboards that map all activity to compliance frameworks like NIST AI RMF, ISO 42001, and PCI DSS—critical for audit readiness and internal governance.


“DataKrypto is proud to partner with Tumeryk to set a new standard for secure AI adoption,” said Ravi Srivatsav, CEO of DataKrypto. “This collaboration empowers organizations to deploy secure, scalable AI workloads—even on their most sensitive data—without ever compromising privacy or performance.”

A New Offering for MSPs


The partnership also introduces a compelling opportunity for Managed Service Providers (MSPs). In a parallel announcement, Campoli Consulting revealed it will offer the Encrypted Guardrails platform as a managed service across EMEA and globally.


“With this solution, MSPs can offer AI as a managed service, ensuring continuous protection for all sensitive information and delivering greater security and trust to end customers, which is now a business and regulatory imperative,” said Paolo Campoli, Founder of Campoli Consulting.

Strategic Implications


As AI adoption continues its meteoric rise, enterprises are facing new categories of risk: model drift, data leakage, adversarial prompts, and regulatory uncertainty. Tumeryk and DataKrypto’s partnership isn’t just a new product—it’s an architectural shift that signals where secure AI is heading next.


Encrypted Guardrails is available now as both a managed SaaS platform and a self-hosted deployment for enterprises and MSPs. For those in healthcare, finance, defense, and other data-sensitive sectors, this may be the most comprehensive preemptive security model yet.


And if the broader AI market follows suit, post-hoc filters may soon be a thing of the past. The future of AI security, it seems, is encrypted from the inside out.
