AI’s Trust Gap: Why Information Readiness Could Make or Break Enterprise Adoption

Artificial intelligence is no longer a fringe experiment in the enterprise—it’s a boardroom mandate. Yet a new global survey from OpenText and the Ponemon Institute suggests that while corporate leaders are racing to deploy AI, the foundations beneath those ambitions remain shaky. The study, which polled nearly 1,900 senior IT and security executives across six countries, finds that most organizations still lack the information readiness to deploy AI securely, responsibly, or at scale.


The Missing Link Between AI and Trust


Shannon Bell, Chief Digital Officer at OpenText, put it bluntly: “This research confirms what we’re hearing from CIOs every day. AI is mission-critical, but most organizations aren’t ready to support it. Without trusted, well-governed information, AI can’t deliver on its promise.”


The data backs her up. Nearly three-quarters of respondents said reducing information complexity is essential to enabling AI, with unstructured data flagged as the biggest roadblock. Yet despite the urgency, fewer than half expressed confidence in their ability to measure the ROI of securing and managing information assets—a striking gap in execution for a technology that executives claim is their top priority.


The AI Paradox


The report highlights a paradox that’s become familiar in boardrooms: optimism about AI’s payoff colliding with organizational unreadiness.


  • 57% of respondents say AI adoption is a top priority.


  • 54% are confident they can demonstrate ROI from AI initiatives.


  • But 53% admit it is “very difficult” or “extremely difficult” to reduce AI’s security and legal risks.


Perhaps most telling, fewer than half (47%) say IT and security goals are aligned with their company’s AI strategy. In other words, the very teams responsible for securing and governing AI are often disconnected from the executives driving its adoption.


GenAI vs. Agentic AI


Enterprises are leaning hardest into generative AI. One-third have already deployed GenAI tools, with another quarter planning to within six months. Security operations, employee productivity, and software development were the most commonly cited use cases.


Agentic AI—the class of systems capable of autonomous decision-making—lags far behind. Only 19% of organizations have adopted agentic AI, with another 16% planning to by early 2026. Just 31% view it as highly important to their strategy, a cautious posture that underscores the heightened concerns around risk and governance.


Building Information Resilience


The survey didn’t just diagnose the problem; it also distilled a roadmap from the best practices of early adopters. Key steps include:


  • Preventing sensitive data exposure through robust access controls, data classification, and anomaly detection.


  • Embedding responsible AI practices, from data cleansing and bias testing to employee training.


  • Applying encryption everywhere—in storage, in transit, and during AI processing.

These measures reflect a growing recognition that AI is only as secure and ethical as the data that powers it.


Why It Matters


AI’s rise is testing the limits of enterprise information governance. Companies are eager to capture competitive advantage from GenAI and automation, but if they fail to tame their data sprawl, they risk embedding bias, exposing sensitive assets, and eroding trust.


Or as Bell put it: “At OpenText, we’re helping IT and security leaders close that gap by simplifying information complexity, strengthening governance, and ensuring the right information is secure and actionable across the enterprise.”


In short: the race to AI is not just about algorithms and models—it’s about building an information foundation strong enough to carry them.
