JFrog Unleashes Secure AI-Powered Developer Workflows with New MCP Server Integration
By Cyber Jill · Jul 21
In a move that could reshape how developers harness AI across the software supply chain, JFrog has launched its new Model Context Protocol (MCP) Server—marking a bold step toward integrating secure AI-powered workflows directly into the developer experience.
The MCP Server acts as a secure bridge between large language models (LLMs), agentic AI tools, and JFrog’s widely adopted software platform. The result: developers can now query repositories, check for open-source vulnerabilities, and orchestrate complex DevOps tasks using natural language, all without leaving their coding environment.
“The developer tool stack and product architecture has fundamentally changed in the AI era,” said Yoav Landman, Co-Founder and CTO of JFrog. “With the launch of the JFrog MCP Server, we’re expanding the open integration capabilities of the JFrog Platform to seamlessly connect with LLMs and agentic tools.”
This shift is about more than convenience. It's about boosting developer velocity while locking down security in a world where AI is both an innovation engine and a new attack surface.
From Code Whisperers to Secure Agents
MCP—the open industry protocol designed to connect AI systems to external tools—has been gathering steam across the developer community. JFrog’s new implementation leverages this framework, enabling real-time queries like “What’s the build status?” or “Is this package safe to use?” to be answered by AI agents hooked directly into JFrog’s trusted infrastructure.
The server lives in the cloud, backed by automatic updates and production-grade monitoring, and relies on OAuth 2.1 for granular, user-scoped access. This isn’t just another AI chatbot tacked onto a legacy system—it’s agentic AI with enterprise-grade security.
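To make this concrete: MCP is built on JSON-RPC 2.0, and an agent invokes a server-side capability via a `tools/call` request. The sketch below shows roughly what such a request looks like on the wire, paired with an OAuth bearer token as the article describes. The tool name (`check_package_safety`) and its arguments are illustrative assumptions, not JFrog's actual tool catalog.

```python
import json

def build_mcp_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 'tools/call' request body for an MCP server.

    This mirrors the generic MCP wire format; the specific tool name and
    argument schema are hypothetical.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

def build_headers(access_token):
    """Attach a user-scoped OAuth 2.1 bearer token, per the article's
    description of granular, user-scoped access."""
    return {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    }

request = build_mcp_tool_call(
    1, "check_package_safety", {"package": "lodash", "version": "4.17.21"}
)
print(json.dumps(request, indent=2))
```

Because the protocol is just structured JSON-RPC, any MCP-aware LLM client can translate a natural-language question like "Is this package safe to use?" into a call of this shape.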
The Hidden Threat of AI Pipelines
But JFrog isn’t blindly riding the AI hype wave. The company’s own security research team recently uncovered vulnerabilities such as CVE-2025-6514—an exploit that could let attackers hijack MCP clients and execute remote code. It’s a chilling reminder that as AI tools become more embedded in developer workflows, so do their risks.
JFrog’s solution? Lock down everything from the ground up.
The new MCP Server enforces HTTPS-only connections, delivers centralized logging, and provides full transparency into how AI agents interact with tools and packages.
“This allows developers to natively integrate their MCP-enabled AI tools and coding agents with our Platform,” Landman said. “Enabling self-service AI across the entire development lifecycle… helps increase productivity and build smarter, more secure applications faster.”
AI, Meet DevSecOps
Beyond its security posture, the MCP Server is about simplifying the complex. Tasks that once required deep knowledge of DevSecOps—like evaluating transitive dependencies or checking for known CVEs—are now a simple prompt away. Think of it as ChatGPT meets your CI/CD pipeline, with guardrails firmly in place.
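The transitive-dependency problem the article alludes to is worth a quick illustration. Even a toy dependency graph fans out fast, which is why walking it by hand (or by CI script) is tedious compared to a single prompt. The graph and the "known vulnerable" set below are entirely made up for the sketch.

```python
from collections import deque

# Hypothetical dependency graph: package -> direct dependencies.
DEPS = {
    "my-app": ["web-framework", "logger"],
    "web-framework": ["http-lib", "template-lib"],
    "http-lib": ["url-parser"],
    "logger": [],
    "template-lib": [],
    "url-parser": [],
}

# Stand-in for advisory data (e.g. a known CVE in url-parser).
KNOWN_VULNERABLE = {"url-parser"}

def transitive_dependencies(root):
    """Breadth-first walk collecting every package reachable from root."""
    seen, queue = set(), deque(DEPS.get(root, []))
    while queue:
        dep = queue.popleft()
        if dep not in seen:
            seen.add(dep)
            queue.extend(DEPS.get(dep, []))
    return seen

deps = transitive_dependencies("my-app")
flagged = deps & KNOWN_VULNERABLE
print(f"{len(deps)} transitive deps, flagged: {sorted(flagged)}")
```

Note that the vulnerable package here is three hops from the application: it never appears in `my-app`'s own manifest, which is exactly the blind spot that prompt-driven CVE checks against a trusted registry aim to close.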
Early access is already open to JFrog SaaS customers, with documentation, examples, and an AWS Marketplace listing available for those ready to test the waters.
JFrog’s MCP Server may just be the missing link between AI’s promise and practical, secure software development at scale. And with vulnerabilities evolving as fast as the tools to fix them, building AI-aware platforms that don’t compromise on security is no longer optional—it’s table stakes.