
xAI API Key Leaked by Government Contractor Sparks Alarm Over Broader Data Security Risks

A special government employee with access to highly sensitive U.S. federal systems has unintentionally leaked a private API key to Elon Musk’s xAI platform, prompting concerns not only about AI model misuse, but about systemic vulnerabilities in how government contractors handle sensitive data.


The staffer, Marko Elez, whose recent projects have involved privileged access to systems at the U.S. Treasury Department, Social Security Administration, Department of Homeland Security, and Department of Justice, published code to his personal GitHub account that contained an unrevoked API key. That key reportedly provided access to several proprietary models from xAI, including Grok—Musk’s flagship AI assistant integrated across X (formerly Twitter).


The incident was first reported by security researcher Brian Krebs and confirmed by Philippe Caturegli, founder of French cybersecurity consultancy Seralys, who discovered the exposed credential and privately alerted Elez. Although the code was promptly removed from GitHub, the key itself remained active, leaving the door wide open for misuse.


“If a developer can’t keep an API key private, it raises questions about how they’re handling far more sensitive government information behind closed doors,” Caturegli told KrebsOnSecurity.


The case has ignited debate across the cybersecurity community about how even technically vetted insiders can pose security risks—not through malice, but through missteps.


“This is a textbook example of something we talk about often: even highly trusted, technically trained individuals can make mistakes that lead to serious exposure,” said Randolph Barr, Chief Information Security Officer at Cequence. “What’s worth stressing here is that this kind of issue is completely detectable with the right combination of tools and processes.”


Barr points to widely available scanning tools such as GitGuardian, TruffleHog, and Gitleaks, which can automatically flag exposed secrets in code repositories. Secrets managers such as HashiCorp Vault or AWS Secrets Manager eliminate the need to embed credentials in code at all. But according to Barr, tooling is only one side of the equation.
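At their core, the scanners Barr mentions work by matching file contents against libraries of known credential patterns. The sketch below is a deliberately minimal illustration of that idea, not a reproduction of any real tool's ruleset; the two regex rules shown are simplified stand-ins for the hundreds of vetted patterns a scanner like Gitleaks actually ships with.

```python
import re

# Illustrative patterns only -- real scanners maintain far larger,
# carefully tuned rule sets. These two are simplified examples:
# an AWS access key ID format and a generic key=value assignment.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(?:api[_-]?key|secret|token)\b['\"]?\s*[:=]\s*"
        r"['\"]([A-Za-z0-9_\-]{20,})['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_snippet) pairs for suspected secrets."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

# A hardcoded credential like this is exactly what such tools flag --
# and what a secrets manager or environment variable would avoid.
sample = 'config = {"api_key": "abcd1234abcd1234abcd1234"}'
print(scan_text(sample))
```

Run against a repository before every commit (typically via a pre-commit hook), this kind of check catches an embedded key before it ever reaches a public remote, which is precisely the window that stayed open in the Elez case.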


“Detection only works if you have the right tools in place, they’re properly configured, and there’s a clear and responsive process to handle alerts when something goes wrong,” he said. “In this case, either the detection tooling wasn’t there, or alerts didn’t get escalated—or worse, were ignored. It's a good reminder that tech alone isn’t enough. Without a strong feedback loop between detection and response, these kinds of exposures slip through.”


While the story has centered on the leaked xAI API key, which the company will likely revoke following media coverage, many experts are less concerned about unauthorized Grok queries and more alarmed about the broader trust this contractor held within critical U.S. infrastructure.


“The staffer reportedly had access to government databases at SSA, Treasury, DOJ, and DHS—so the potential implications go well beyond just xAI,” Barr emphasized.


The Office of the Inspector General and the Cybersecurity and Infrastructure Security Agency (CISA) have yet to comment on whether a formal investigation is underway, and it’s unclear whether Elez will retain his current security clearances.


Still, the breach underscores the fragility of secrets management in environments where contractors operate with sweeping access—and serves as a cautionary tale as AI tools increasingly intersect with government workflows.


“This wasn’t a sophisticated APT or a supply chain zero-day,” one cybersecurity analyst remarked off the record. “This was a developer posting a config file with an API key—and it could’ve just as easily been credentials to a Treasury database.”
