Critical AI Tool Vulnerability CVE-2025-6514 Exposes LLM Clients to Full System Compromise

A newly disclosed critical vulnerability in a popular AI proxy tool has thrown a spotlight on the risks of trusting remote servers in the rapidly evolving ecosystem of AI application infrastructure. JFrog Security researchers have revealed CVE-2025-6514, a command injection flaw in mcp-remote—a widely adopted open-source proxy used by large language model (LLM) clients like Claude Desktop to communicate with external Model Context Protocol (MCP) servers.


The vulnerability, which scores 9.6 on the CVSS scale, allows attackers to trigger arbitrary command execution on a user’s machine simply by tricking the client into connecting to a malicious or compromised MCP server. On Windows, this leads to full shell command execution; on macOS and Linux, limited but still dangerous executable launches are possible.


This marks the first known instance of remote code execution (RCE) through an MCP client in the wild, making it a milestone moment in AI system security.


“While remote MCP servers are highly effective tools for expanding AI capabilities in managed environments... MCP users need to be mindful of only connecting to trusted MCP servers using secure connection methods such as HTTPS,” said Or Peles, Vulnerability Research Team Leader at JFrog. “Otherwise, vulnerabilities like CVE-2025-6514 are likely to hijack MCP clients in the ever-growing MCP ecosystem.”

A Proxy for AI Power—and Risk


mcp-remote functions as a translator between older LLM clients—designed to work with local MCP servers via standard input/output—and newer remote MCP infrastructures that rely on HTTP transport. This architecture allows developers to centralize and streamline their MCP server deployments, enabling multiple applications to share a single backend.


The convenience, however, comes at a cost. JFrog’s proof-of-concept shows how easily a remote attacker can craft a malicious authorization_endpoint URL to exploit the mcp-remote logic during OAuth authentication. By embedding commands like file:/c:/windows/system32/calc.exe or more complex PowerShell-encoded payloads, the malicious server causes the client to inadvertently execute code locally.
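To make the attack shape concrete, here is a hedged sketch of what a poisoned OAuth authorization server metadata document could look like. The field names follow RFC 8414; the attacker host is hypothetical, and this is an illustration of the data shape, not the actual exploit from JFrog's proof-of-concept:

```python
# Hypothetical example: the shape of a poisoned OAuth authorization server
# metadata document a malicious MCP server could return during the OAuth
# flow. Field names follow RFC 8414; the attacker host is illustrative only.
malicious_metadata = {
    "issuer": "http://attacker.example",
    "authorization_endpoint": "file:/c:/windows/system32/calc.exe",
    "token_endpoint": "http://attacker.example/token",
}

# A naive client that hands authorization_endpoint to an OS "open URL"
# routine without checking the scheme passes this string straight through.
print(malicious_metadata["authorization_endpoint"])
```

The key point is that `authorization_endpoint` is attacker-controlled data: whatever the client does with it next happens with the user's local privileges.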


In one demo, researchers were able to silently create a file on the Windows host (c:\temp\pwned.txt) by chaining command injection through a fabricated MCP response. The payload evaded standard sanitization by exploiting how URL() constructors handle non-standard URI schemes and subexpression evaluation in PowerShell.
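The underlying pitfall generalizes beyond this one tool: standard URL parsers happily accept non-HTTP schemes, so "it parses as a URL" is not a safety check. A minimal sketch of the defensive pattern, in Python rather than the actual mcp-remote code, using an explicit scheme allowlist:

```python
from urllib.parse import urlparse

def is_safe_endpoint(url: str) -> bool:
    """Reject any endpoint that is not a well-formed https:// URL."""
    parsed = urlparse(url)
    # The scheme must be explicitly allowlisted and a host must be present;
    # merely parsing without error proves nothing about safety.
    return parsed.scheme == "https" and bool(parsed.netloc)

print(is_safe_endpoint("file:/c:/windows/system32/calc.exe"))  # False
print(is_safe_endpoint("https://auth.example.com/authorize"))  # True
```

Note that the `file:` URI above parses without error; only the explicit scheme check rejects it.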


Who’s at Risk


Any user running mcp-remote versions 0.0.5 to 0.1.15 is vulnerable, particularly if they connect to:


  • A remote MCP server using HTTP rather than HTTPS, or


  • An untrusted or potentially hijacked server on a local network (common in developer or enterprise environments).


The fix, issued in version 0.1.16, patches the vulnerable logic. All users are urged to upgrade immediately and avoid plaintext HTTP connections.
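Assuming the package follows standard `major.minor.patch` versioning, the affected range can be checked mechanically. This sketch treats 0.0.5 through 0.1.15 inclusive as vulnerable, per the advisory:

```python
def parse_version(v: str) -> tuple:
    """Split a 'major.minor.patch' string into comparable integers."""
    return tuple(int(part) for part in v.split("."))

# Affected range per the advisory: 0.0.5 through 0.1.15 inclusive.
VULN_LOW = parse_version("0.0.5")
VULN_HIGH = parse_version("0.1.15")

def is_vulnerable(version: str) -> bool:
    # Tuple comparison gives correct ordering for x.y.z version strings.
    return VULN_LOW <= parse_version(version) <= VULN_HIGH

print(is_vulnerable("0.1.15"))  # True
print(is_vulnerable("0.1.16"))  # False
```

Anything at or above 0.1.16 contains the patch; anything below 0.0.5 predates the vulnerable logic.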


A Symptom of a Growing Surface Area


The vulnerability raises broader concerns about the expanding surface area created by tools like MCP in the age of AI-native applications. Since MCP’s launch in late 2024, it has become the de facto protocol for allowing LLMs to securely interface with live APIs, databases, and third-party tools. What was once local and isolated is increasingly remote and dynamic—introducing classic web-era vulnerabilities into cutting-edge AI workflows.


And it’s not just hobbyists. According to JFrog, mcp-remote has been featured in integration guides from Cloudflare, Hugging Face, and Auth0. Its adoption in production environments is growing—and so is its attractiveness to attackers.


JFrog's findings also underscore a shift in how attackers may soon target AI infrastructure, not through the models themselves, but through the tooling around them: configuration files, proxy layers, transport mechanisms.


Looking Ahead


The quick fix by mcp-remote maintainer Glen Maddern is encouraging, and the open-source community has rallied to patch downstream integrations. But the larger challenge remains: securing the interfaces that bridge AI models to the outside world.


As more LLM hosts like Cursor, Windsurf, and Claude expand support for remote MCP connections, future attack vectors are almost inevitable.


For now, the advice is simple: upgrade immediately, use HTTPS exclusively, and think twice before pointing your AI assistant at an unknown backend. In the world of AI integration, convenience is increasingly a double-edged sword.