Microsoft has launched a legal battle against a shadowy group accused of tampering with its Azure OpenAI Service, the platform through which it offers cloud customers access to OpenAI models such as those behind ChatGPT and DALL-E. The tech giant filed a lawsuit in December 2024 in the US District Court for the Eastern District of Virginia against ten unidentified defendants. The suit alleges violations of the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and federal racketeering laws.
The complaint details how Microsoft's servers were compromised to facilitate the creation of what the company describes as "offensive" and "harmful and illicit content." Microsoft did not disclose specifics of that content, but the severity of the allegations prompted it to shut down a GitHub repository and seize a website linked to the defendants.
According to the lawsuit, Microsoft first detected misuse of Azure OpenAI Service API keys in July 2024. These keys, which authenticate customers' requests to the service, were reportedly stolen from legitimate customers, leading to the unauthorized production of illicit content. "The precise manner in which Defendants obtained all of the API Keys used to carry out the misconduct described in this Complaint is unknown, but it appears that Defendants have engaged in a pattern of systematic API Key theft that enabled them to steal Microsoft API Keys from multiple Microsoft customers," the complaint states.
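To see why stolen keys are so valuable, it helps to look at how the Azure OpenAI Service authenticates requests: in the simplest configuration, possession of the key is the entire credential. Below is a minimal sketch of the documented request pattern; the resource name, deployment name, and API version are illustrative placeholders, not details from the complaint.

```python
import os
import requests

# Illustrative placeholders -- not names from the lawsuit.
ENDPOINT = "https://my-resource.openai.azure.com"
DEPLOYMENT = "dall-e-3"

# The API key is the whole credential: whoever holds it can issue
# requests that are billed to, and trusted as, the legitimate customer.
API_KEY = os.environ["AZURE_OPENAI_API_KEY"]

response = requests.post(
    f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/images/generations",
    params={"api-version": "2024-02-01"},
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json={"prompt": "a watercolor painting of a lighthouse", "n": 1},
)
response.raise_for_status()
print(response.json()["data"][0]["url"])
```

Nothing in the request identifies the caller beyond the key itself, which is why the "systematic API Key theft" alleged in the complaint translates directly into unauthorized use of the victims' accounts.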
The defendants are accused of developing a tool named de3u, designed to exploit these stolen API keys and facilitate communication with Microsoft's servers. The tool allegedly bypassed the Azure OpenAI Service's built-in content filters and prevented the service from revising user prompts, allowing tools like DALL-E to generate images that would normally be restricted.
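For context on the filters at issue: when a prompt trips Azure OpenAI's server-side filtering, the chat completions API rejects it with an HTTP 400 error whose code is content_filter, and filtering applied to a model's output is signalled via finish_reason. The sketch below assumes that documented chat-completions behavior (image endpoints report filtering with their own error codes) and reuses the placeholder names from the previous example.

```python
import os
import requests

ENDPOINT = "https://my-resource.openai.azure.com"  # illustrative placeholder
DEPLOYMENT = "gpt-4o"                              # illustrative placeholder
API_KEY = os.environ["AZURE_OPENAI_API_KEY"]

resp = requests.post(
    f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/chat/completions",
    params={"api-version": "2024-02-01"},
    headers={"api-key": API_KEY},
    json={"messages": [{"role": "user", "content": "Hello"}]},
)

if resp.status_code == 400 and resp.json().get("error", {}).get("code") == "content_filter":
    # The service refused the prompt outright: this is the server-side
    # filtering layer the complaint says de3u was built to circumvent.
    print("prompt blocked by the Azure OpenAI content filter")
else:
    resp.raise_for_status()
    choice = resp.json()["choices"][0]
    if choice.get("finish_reason") == "content_filter":
        # Here the model's output, rather than the prompt, was filtered.
        print("completion truncated by the content filter")
    else:
        print(choice["message"]["content"])
```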
Katie Paxton-Fear, Principal Security Researcher at Traceable AI, highlighted the unique nature of this attack: "Unlike in other API attacks, where an attacker often targets business-critical data, in this situation we have the attackers setting up a shadow AI. This worked by providing a DALL-E-like front end, which then sent users' prompts to OpenAI via Azure. The attackers would then check whether the output had been censored, enabling users to bypass the safety checks in the DALL-E front end on OpenAI's website. By using legitimate OpenAI credentials stolen from other users and businesses in other attacks, they were able to go unnoticed, moving their operations between many legitimate accounts."
Microsoft claims these illicit activities were part of a larger scheme to launch a hacking-as-a-service product. The company's swift legal and technical responses aim to prevent further abuse of its services and to safeguard customer data and the integrity of its AI offerings. The case underscores the ongoing security challenges of protecting AI and cloud services from sophisticated cyber threats.