
Cycode steps into the AI-governance ring to squash “Shadow AI”

In a move highlighting the growing urgency around AI governance, application security company Cycode today unveiled two new capabilities designed to tame what it calls “Shadow AI”: the unauthorized, unsupervised use of AI assets inside the software development lifecycle.


The first is an AI & ML Inventory, which automatically scans an organization’s code repositories, pipelines and infrastructure to identify all AI coding assistants, model servers, Model Context Protocol (MCP) servers and AI models. The second is an AI Bill of Materials (AIBOM), which creates a structured manifest of every AI component in use, enabling audit-ready transparency.
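
Cycode has not published the AIBOM schema, but a rough sketch helps make the idea concrete. The Python snippet below models a hypothetical manifest entry; every field name (component_type, source_repo, approved, and so on) is an illustrative assumption loosely patterned on SBOM-style manifests, not Cycode’s actual format.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical sketch only: Cycode has not published its AIBOM schema.
# Field names are illustrative assumptions, loosely modeled on
# SBOM-style manifests.

@dataclass
class AIComponent:
    name: str            # e.g. an AI coding assistant, model, or MCP server
    component_type: str  # "assistant" | "model" | "mcp_server" | "inference_api"
    version: str
    source_repo: str     # repository where the component was discovered
    approved: bool       # whether it matches the organization's allow-list

inventory = [
    AIComponent("github-copilot", "assistant", "1.0", "org/payments-service", True),
    AIComponent("llama-3-8b", "model", "8b-instruct", "org/internal-chatbot", False),
]

# An AIBOM is, in essence, this list serialized as an audit-ready manifest.
print(json.dumps([asdict(c) for c in inventory], indent=2))
```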


Shadow AI: The new blind-spot for security teams


With the rapid acceleration of generative-AI tools inside engineering shops, many development teams are integrating AI assistants, model APIs and custom models without formal visibility from security or governance teams. This creates hidden risk paths: data leakage, compliance violations, untracked dependencies. As Cycode observes: “Generative AI security is … the #1 blind-spot reported by security professionals.”


Lior Levy, CEO and Co-founder of Cycode, put it this way:


“The AI coding revolution has created a massive blind spot for security teams. We were already battling an overwhelming tide of alerts, and now we face an invisible ecosystem of AI tools that is creating the next wave of risk.”

“It’s no longer sufficient to just find vulnerabilities in AI-generated code. Organizations must have complete visibility and governance over the entire AI toolchain. This launch is a critical next step in our mission to secure AI development from prompt to production. We are not just securing the output; we’re empowering organizations with the hindsight and control to build a resilient, security-first culture from the inside out.”

How the two new capabilities work


  • The AI & ML Inventory uses Cycode’s Risk Intelligence Graph (RIG) and connects across code and runtime to discover where AI assets live, trace them back to repositories and surface shadow AI usage.


  • The governance layer enables security teams to define policies, such as approved AI technologies and model versions, and flag any deviations (for instance, an unauthorized AI assistant plugin or an unapproved model); a minimal sketch of such a check appears after this list.


  • The AIBOM collects a real-time manifest of all AI components (models, services, infrastructure) in use — a requirement increasingly called for by regulators, enterprises and auditors.
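
As a minimal sketch of the governance check described in the second bullet, the snippet below compares a discovered inventory against an approved-technology policy and reports deviations. The policy format, tool names and inventory shape are invented for illustration; Cycode’s actual policy engine is not public.

```python
# Toy policy check, not Cycode's engine: compare a discovered AI
# inventory against an approved-technology policy and flag deviations.

APPROVED = {
    "github-copilot": {"1.0", "1.1"},  # tool -> allowed versions
    "gpt-4o": {"2024-08-06"},
}

discovered = [
    {"name": "github-copilot", "version": "1.1", "repo": "org/api-gateway"},
    {"name": "claude-code", "version": "0.9", "repo": "org/billing"},   # not on the allow-list
    {"name": "gpt-4o", "version": "2024-05-13", "repo": "org/search"},  # unapproved version
]

def violations(inventory, policy):
    """Yield (asset, reason) for every deviation from the policy."""
    for asset in inventory:
        allowed = policy.get(asset["name"])
        if allowed is None:
            yield asset, "technology not approved"
        elif asset["version"] not in allowed:
            yield asset, f"version {asset['version']} not approved"

for asset, reason in violations(discovered, APPROVED):
    print(f"shadow AI in {asset['repo']}: {asset['name']} ({reason})")
```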


Why it matters — for enterprises and developers


In an era where every developer is using AI tools to accelerate coding, the security perimeter has expanded beyond human-written code to machine-generated code, model calls, prompt chains and inference pipelines. Traditional AppSec tooling (static code scanners, dependency analyzers, CI/CD checks) wasn’t built to detect whether a developer pulled in a new AI model, spun up a third-party inference API, or let a prompt leak credentials. As Cycode notes:


“GenAI creates invisible risks that traditional AppSec tools miss, including dynamic, unreviewed code and limited logging.”

This leaves enterprises with a dilemma: accelerate innovation with AI or enforce rigid controls. Too often they end up with neither full speed nor full security. With visibility into AI usage across the SDLC, the hope is that organizations can have both.
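
To see why conventional tooling misses this layer, consider a deliberately naive scanner that only greps source files for imports of well-known AI SDKs. The SDK list and approach below are assumptions for illustration; real discovery, as Cycode describes for its inventory, also has to cover pipelines, runtime services and MCP servers, which a grep like this never sees.

```python
# Naive shadow-AI discovery: report Python files that import a
# well-known AI SDK. Illustrative only; the SDK list is an assumption,
# and this catches none of the runtime, pipeline or MCP-server usage
# that a real inventory must cover.

import re
from pathlib import Path

AI_SDK_PATTERN = re.compile(
    r"^\s*(?:import|from)\s+(openai|anthropic|google\.generativeai|transformers)\b",
    re.MULTILINE,
)

def scan_repo(root: str) -> None:
    """Print every Python file under root that imports a known AI SDK."""
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for match in AI_SDK_PATTERN.finditer(text):
            print(f"{path}: uses AI SDK '{match.group(1)}'")

scan_repo(".")
```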


Where this fits in the broader platform


Cycode positions these new offerings as integral to its “AI-Native Application Security Platform” — a unified stack that converges Application Security Testing (AST), Software Supply Chain Security (SSCS) and Application Security Posture Management (ASPM).


They build on features such as securing AI-generated code (via its MCP server), AI-driven risk scoring and even automated remediation of vulnerabilities. The new inventory and AIBOM capabilities aim to fill a gap: discovering the “toolchain” layer of AI usage that sits above code and models.


Challenges and the path ahead


While the announcement is timely, a few major challenges lie ahead:


  • Discovery completeness: Identifying every AI asset (coding assistants, model endpoints, internal model versions, shadow infrastructure) in large orgs remains extremely hard.


  • Policy enforcement: Defining what is “approved AI use” vs “shadow AI” will vary wildly by org; enforcement may slow innovation if not carefully calibrated.


  • Compliance and audit readiness: Generative AI governance is still evolving (standards, regulations, best practices), so the tools must keep pace.


  • Developer buy-in: Developers often adopt AI tools for speed — if new controls slow them down, they may find workarounds, recreating “shadow AI” in new form.


But the signal here is clear: the era where security teams can ignore AI assets is ending. Organizations now need both visibility (what AI tools/models/infrastructure are in use?) and governance (are they approved, safe, compliant?). The new features from Cycode are an explicit bet on that.


Bottom line


In the accelerating AI-driven software era, the hidden layer of AI tooling — the “shadow chain” of assistants, models and infrastructure — has emerged as a core risk vector. Cycode’s launch of an AI & ML Inventory combined with an AI Bill of Materials reflects an industry pivot: from securing code to securing the entire AI-enabled software factory. As enterprises grapple with speed, agility and now AI governance, tools like this will increasingly be seen not as optional but essential.
