top of page

PointGuard AI Expands Platform to Tackle Hidden AI Threats Lurking in Code Repositories

As organizations scramble to secure the AI technologies reshaping everything from customer service to national defense, PointGuard AI is rolling out what it calls the “industry’s first full-stack AI security platform” — aiming squarely at a growing blind spot: the AI systems quietly embedded in source code repositories and MLOps pipelines.


Unveiled today at Black Hat USA 2025, PointGuard’s latest platform update reaches deeper than ever into the AI software supply chain. The company now offers automated discovery and threat correlation across not just cloud-hosted models and data lakes, but also the often-overlooked heart of enterprise development — GitHub and similar repositories.


The expansion comes as concern mounts over rogue AI assets and the invisible risks they carry. While enterprises adopt AI at breakneck speed, many security teams remain unaware of what models, agents, prompts, or datasets are actually in use — let alone whether they’re vulnerable or misconfigured.


“Our goal is to enable AI innovation — not hinder it,” said Pravin Kothari, Founder and CEO of PointGuard AI. “But that requires end-to-end security, and governance across the AI lifecycle. By extending our powerful discovery and correlation engine to code repositories, we are exposing and securing AI tools before they become embedded in enterprise applications.”


AI Sprawl Meets Security Vacuum


AI development has largely outpaced the frameworks built to secure it. Notebooks full of proprietary prompts, agents tied to unsecured APIs, and datasets scraped from risky sources are being committed to codebases every day — often without oversight.


PointGuard’s platform now scans repositories for telltale AI components: everything from pretrained models and custom datasets to AI-centric API calls, libraries, and even hidden agent scripts. These assets are then correlated across runtime environments, cloud platforms like AWS and GCP, and MLOps tools like Databricks and Azure ML.


This kind of cross-environment correlation is key to stopping attacks like prompt injection — a technique where adversaries manipulate an AI model’s input to trick it into executing harmful instructions. There have already been incidents where AI agents, exploited through malicious prompts, were convinced to delete infrastructure or leak sensitive credentials.


Governance From Dev to Deploy


PointGuard’s expanded visibility isn’t just about detection. It’s directly tied into active defense mechanisms: runtime guardrails, policy enforcement, and AI red teaming. The platform can flag illicit prompts buried in code, detect libraries with known vulnerabilities, and enforce governance policies before insecure AI code ever reaches production.


The need for such controls is becoming urgent. According to a recent IBM study, 13% of organizations have already suffered security breaches tied to AI systems — and 97% of those lacked even rudimentary access controls. With open-source models and agents proliferating across orgs, it’s often unclear who owns what — and who’s watching.


By integrating codebase visibility with its broader AI security posture management, PointGuard is positioning itself not just as a security vendor, but as a bridge between DevOps, data science, and cybersecurity teams — groups that historically haven’t spoken the same language.


The Next Frontier of AI Security


As enterprises adopt generative AI tools across the stack — and attackers evolve to target them — the traditional security playbook no longer applies. PointGuard’s move reflects a broader shift: securing AI isn’t just about defending algorithms. It’s about understanding where those algorithms live, how they interact, and what they’re being asked to do.


“AI is now embedded in the DNA of software development,” said Kothari. “If we don’t have visibility into the tools developers are building with — and how they’re being used — we’ll lose control before we even realize the risks.”


With this update, PointGuard is betting that security leaders will prioritize visibility over convenience — and that uncovering hidden AI threats in their own code may be the wake-up call they didn’t know they needed.

bottom of page