
New Attack Vector Hits AI Tooling: ‘Prompt Hijacking’ Exploits MCP Session IDs

On October 20, 2025, JFrog Security Research published a disclosure of multiple vulnerabilities in the open-source package oatpp‑mcp, an implementation of Anthropic’s Model Context Protocol (MCP) standard for the Oat++ C++ framework. The most critical of these is tracked as CVE‑2025‑6515, and JFrog’s researchers have dubbed the attack technique it enables “Prompt Hijacking.”


What’s the Protocol Here—and Why It’s Vulnerable


MCP is designed to let large-language-model systems receive real-time structured context from applications, so that an LLM embedded in, say, an IDE or other tool environment “knows” what the user is doing right now. In other words, MCP bridges the gap between what the model was trained on and what the user is actively working with.


In the disclosed scenario:


  • A host application acts as the “MCP Host.”


  • A client bridges the host to an MCP server.


  • The MCP server exposes “tools” (local files, remote APIs, etc.) through JSON-RPC, using transports such as HTTP with Server-Sent Events (SSE) or STDIO.


  • In the specific vulnerable implementation (oatpp-mcp), the SSE endpoint assigns a session ID based on the memory address of the newly created session object: in effect, the function returns a pointer cast to a string as the session ID.
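To make the JSON-RPC layer concrete: a tool invocation over any of these transports is an ordinary JSON-RPC 2.0 request. The message below is an illustrative sketch (the tool name and arguments are hypothetical; the exact fields are defined by the MCP specification):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "read_file",
    "arguments": { "path": "src/main.cpp" }
  }
}
```

With the SSE transport, the client POSTs such messages to a session-specific URI and receives responses on its open event stream, which is why the session ID is the only thing binding a request to a user.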


Because that pointer value isn’t globally unique, isn’t cryptographically random, and can be reused thanks to memory-allocator behaviour (such as glibc’s malloc recycling freed blocks), an attacker can race session creation and destruction, harvest the pointer values, and wait for a legitimate user’s new session to be assigned one of the same IDs. Once the attacker knows the session ID, they can impersonate that client’s session, inject requests, and cause the legitimate user’s channel to receive malicious prompt responses. JFrog calls this Prompt Hijacking.


Why This Is a Big Deal (Even Though the Model Is Unchanged)


The surprising thing here is that the flaw doesn’t require tampering with the underlying model weights or breaking the LLM itself. Instead it exploits the protocol plumbing around the model, i.e., the connection channel. That means:


  • Agents or assistants that rely on MCP might be feeding the wrong context or wrong “tools” responses without knowing it.


  • Attackers could inject malicious prompts, cause the toolchain to call unintended APIs, or change the output of the user-facing assistant.


  • Because the model behaves as though it were receiving valid context, it may produce seemingly legitimate responses (which may include malicious payloads).


  • From a security architecture standpoint, this shows that “AI security” isn’t just about the model: it must also cover the protocols, tool integrations, session management, and context streams around it.


As one summary put it:


“As AI models become increasingly embedded in workflows via protocols like MCP, they inherit new risks — this session-level exploit shows how the model itself remains untouched while the ecosystem around it is compromised.”

Case Study: CVE-2025-6515


The vulnerability applies when an application uses oatpp-mcp with the HTTP SSE transport (i.e., server.getSseController() in Oat++). The function returning the session ID simply does:


oatpp::String Session::getId() const {
  auto memId = reinterpret_cast<v_uint64>(this);
  return oatpp::utils::Conversion::uint64ToStr(memId);
}

That is, the pointer value of the Session object is used as the session ID. If the attacker repeatedly spawns/destroys sessions and records their pointer values, then a subsequent legitimate client may receive one of those pointer values (and thus a known session ID). The attacker then sends POST requests to that session ID’s URI (for example /message/{session_id}) and the server routes those responses to the victim’s open SSE connection. The client receives attacker-controlled data instead of or intermingled with its own responses.


From the CVE database:


  • CVE-2025-6515 shows “reuse of session IDs in oatpp-mcp leads to session hijacking and prompt hijacking by remote attackers.”


  • CVSS base score 6.8 (medium); the vector indicates a network attacker with high impact to integrity and availability but no confidentiality loss.


Who Is At Risk?


Any application using oatpp-mcp with the vulnerable transport (HTTP SSE) and with network access that allows an attacker to reach the MCP server. In practice:


  • Developers embedding tooling using Oat++ + MCP.


  • Enterprises embedding agentic workflows where the model uses real-time context via MCP.


  • Service providers deploying MCP servers exposed to attacker-accessible networks (including internal networks reachable through lateral movement).

Exploitation requires the vulnerable transport and the pointer-based session-ID behaviour. If an application uses the STDIO transport only, or uses another, non-vulnerable library, the risk profile differs.


Defenses & What Organizations Should Do


JFrog’s disclosure and commentary urge the following hardening steps:


  • Servers must generate session IDs using cryptographically secure random generators (≥ 128 bits of entropy) rather than pointer values.


  • Clients should avoid naive event schemes (such as simple incrementing IDs) that allow “spraying” of messages until one is accepted. Instead use unpredictable identifiers and strictly validate expected event types.


  • Transports must enforce proper session separation, expiry, and reuse protection (e.g., avoid reassigning old session-IDs).


  • Organizations should audit the use of MCP-based integrations (especially OSS implementations like oatpp-mcp), check versions, and apply updates or mitigations.


  • More broadly: treat AI-tooling infrastructure with the same security mindset as web services, APIs and session-management layers. The weak link may not be the model but the plumbing.


Why This Could Reshape AI Security Mindset


Until now, much of AI-security discourse has focused on model poisoning, adversarial input, data leakage from model weights, or prompt injection (i.e., feeding the model malicious textual instructions). What this case introduces is a protocol-level exploit: the attacker doesn’t change the model or directly override the user’s prompts, but instead hijacks the channel that supplies contextual prompts and tool responses. In other words: the model isn’t aware, the user isn’t aware, but the attacker controls the stream.


In the rush to integrate assistants into dev-tools, IDEs, agentic workflows and real-time tool chains, this is perhaps a timely reminder: the tooling ecosystems around the model matter just as much as the model. A flaw in session routing, context transport, or tool invocation could yield dangerous outcomes—even without breaking the model itself. As one piece put it:


“This prompt hijacking attack is a perfect example of how a known web application problem, session hijacking, is showing up in a new and dangerous way in AI.”

Bottom Line


CVE-2025-6515 and the broader “Prompt Hijacking” class highlight that as ML/LLM systems move into real-time, context-aware, tool-driven workflows, the attack surface expands beyond the model into the integration machinery. Enterprises and developers must treat MCP implementations, session management, and tool-transport stacks with the same diligence applied to web frameworks and API gateways.


If you’re using MCP (or planning to) or integrating a client/host architecture for AI assistants, now is the moment to audit your session-ID generation, check for vulnerable libraries (like oatpp-mcp), ensure you’re not using pointer-based session IDs, and harden your event/message flows.
