AI Chat App Leak Exposes Hundreds of Millions of Private Conversations, Researcher Says
- Cyber Jack
A widely downloaded mobile app called Chat & Ask AI is at the center of what may be one of the largest known leaks of private AI chatbot conversations to date. An independent security researcher says the app left hundreds of millions of user messages publicly accessible online, including deeply sensitive and, in some cases, dangerous requests.
Chat & Ask AI has surpassed 50 million installs across the Google Play Store and Apple App Store, positioning it as one of the most popular third-party AI chat apps on mobile. According to the researcher, the scale of exposed data was staggering. Roughly 300 million messages tied to more than 25 million users were accessible due to a backend configuration error.
The leaked conversations were not abstract prompts or anonymous test inputs. They were complete chat histories connected to individual users over time. In those messages, users asked questions about suicide, self-harm, illegal drug manufacturing, hacking techniques, and other intensely personal topics. Many appeared to treat the AI as a private therapist, journal, or confidant.
A familiar cloud mistake with massive consequences
The exposure was discovered by a researcher who goes by the name Harry. He traced the issue to a misconfigured Google Firebase backend, a cloud platform commonly used to store app data. Because of the way the database's security rules were configured, outsiders could read its contents without meaningful authentication.
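To illustrate the failure mode in general terms, the sketch below shows how a Firebase Realtime Database with world-readable security rules can be queried by anyone with an HTTP client. The project ID, data path, and rule pattern are hypothetical illustrations of this class of misconfiguration, not details of the app's actual setup, which has not been published.

```typescript
// Hypothetical illustration only: the project ID, path, and rule pattern below
// are invented to show the general failure mode, not this app's configuration.
//
// A Firebase Realtime Database exposes a REST endpoint at
//   https://<project>.firebaseio.com/<path>.json
// If its security rules grant public read access, e.g. { "rules": { ".read": true } },
// any unauthenticated HTTP client can pull data from it.

async function probeOpenDatabase(): Promise<void> {
  const url = "https://example-project.firebaseio.com/chats.json?shallow=true";

  // No API key, token, or session is attached to this request.
  const response = await fetch(url);

  if (response.ok) {
    // A 200 response with data means the node is world-readable.
    const data = await response.json();
    console.log("Publicly readable keys:", Object.keys(data ?? {}).length);
  } else {
    // A locked-down database returns a 401/403 "Permission denied" instead.
    console.log("Access denied:", response.status);
  }
}

probeOpenDatabase().catch(console.error);
```

A hardened configuration would instead scope reads to the authenticated owner of each record, so an anonymous request like this one would fail with a permission error.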
Harry says he verified the scope of the issue by analyzing a smaller sample of approximately one million messages belonging to around 60,000 users. That review confirmed that full chat logs were exposed at scale.
The accessible data reportedly included full conversation transcripts, timestamps, the custom names users gave their chatbots, configuration settings, and even which underlying AI model was selected. For users who assumed their conversations were ephemeral or private, that level of detail makes the exposure far more serious.
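Based on that description, each exposed record appears to have bundled a user's identity with their entire history. The sketch below is a rough guess at what such a record might contain; the field names are illustrative, not the app's actual schema.

```typescript
// Illustrative only: field names are inferred from the reported contents,
// not taken from the app's real data model.
interface ExposedChatRecord {
  userId: string;                 // ties a full history to one account over time
  botName: string;                // the custom name the user gave their chatbot
  model: string;                  // which underlying AI model was selected
  settings: Record<string, unknown>; // per-chat configuration
  messages: Array<{
    role: "user" | "assistant";
    text: string;                 // full conversation transcript
    sentAt: number;               // message timestamp
  }>;
}
```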
How a wrapper app became a data honeypot
Chat & Ask AI does not run its own large language model. Instead, it acts as a front end that lets users interact with models built by major AI providers, choosing among popular systems such as ChatGPT, Claude, and Gemini. While the core AI models are operated by their respective companies, the app itself handles message storage, logging, and user configuration.
That design choice placed responsibility for data security squarely on the app developer. Cybersecurity professionals note that Firebase misconfigurations are a common source of breaches, especially in fast-growing mobile apps that prioritize features and scale over hardened security controls.
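To make that division of responsibility concrete, here is a minimal sketch of the wrapper pattern, assuming a Firestore backend and an invented provider API; it is not Chat & Ask AI's actual code. The point is that the model provider only ever sees the prompt, while the wrapper decides how, where, and how securely the conversation is stored.

```typescript
// Hypothetical sketch of the wrapper pattern, not Chat & Ask AI's actual code.
// The provider endpoint, response shape, and collection names are invented.
import { initializeApp } from "firebase-admin/app";
import { getFirestore, FieldValue } from "firebase-admin/firestore";

initializeApp();
const db = getFirestore();

// The model itself runs on the provider's servers; the wrapper just relays the prompt.
async function callProvider(model: string, prompt: string): Promise<string> {
  const res = await fetch("https://api.example-provider.com/v1/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.PROVIDER_KEY}`,
    },
    body: JSON.stringify({ model, prompt }),
  });
  const body = await res.json();
  return body.reply;
}

// This is the step the wrapper app owns: persisting the conversation in its own
// backend. Whether that data is reachable by outsiders depends entirely on the
// app's security rules and configuration, not on the model provider.
export async function handleMessage(
  userId: string,
  model: string,
  prompt: string,
): Promise<string> {
  const reply = await callProvider(model, prompt);

  await db.collection("chats").doc(userId).collection("messages").add({
    prompt,
    reply,
    model,
    sentAt: FieldValue.serverTimestamp(),
  });

  return reply;
}
```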
In this case, the risk was amplified by the nature of the data being stored. AI chat logs are not like generic analytics or crash reports. They often contain raw thoughts, emotional confessions, and highly sensitive questions that users would never share publicly.
Why exposed AI chat logs are uniquely dangerous
For everyday users, the incident underscores a growing gap between expectations and reality in consumer AI tools. Many people assume AI chats are private by default, especially when the interface resembles a personal assistant or mental health aid.
In reality, when those conversations are stored insecurely, they become an attractive target for attackers. Even without explicit names attached, long chat histories can reveal identities, locations, workplaces, relationships, and mental health struggles. Once data like that is exposed, it can be copied indefinitely, resurfacing years later in ways users cannot control.
James Wickett, CEO of DryRun Security, says this kind of incident reflects a broader shift in application security as AI systems become deeply embedded in consumer products.
“Prompt injection, data leakage, and insecure output handling stop being academic once AI systems are wired into real products, because at that point the model becomes just another untrusted actor in the system. Inputs are tainted, outputs are tainted, and the application has to enforce boundaries explicitly rather than assuming good behavior. The recent AI chat app breach that exposed roughly 300 million private messages tied to 25 million users wasn’t a novel AI exploit, it was a familiar backend misconfiguration, made far more dangerous by the sensitivity of the data involved. This is the frontier of application security in 2026, where traditional appsec failures collide with AI systems at scale, and where most of the real risk is now concentrated.”
A warning for the AI app ecosystem
As AI chat apps proliferate across app stores, incidents like this raise uncomfortable questions about oversight, data retention, and user trust. Many wrapper apps collect and store far more information than users realize, often without clear disclosures about how long conversations are kept or how they are protected.
The Chat & Ask AI exposure highlights a simple but increasingly urgent reality. As AI tools become more personal, the cost of basic security failures rises dramatically. What once would have been a routine cloud misconfiguration can now spill millions of intimate human conversations onto the open internet.