Lovable AI Coding Platform Faces Data Exposure Backlash After Permission Flaw Reveals User Projects and Chats


A growing dispute over data exposure at Swedish AI coding startup Lovable is raising fresh questions about how quickly emerging developer tools are scaling without fully locking down security controls.


The controversy began when an X user alleged that projects created on the platform prior to late 2025 were broadly accessible. According to the post, the individual was able to view other users’ application code, chat interactions with AI systems, and associated customer data using only a standard account. The claim also suggested that accounts tied to employees at major firms such as Nvidia, Microsoft, Uber, and Spotify were potentially impacted.


Lovable quickly pushed back, stating that the visibility of project code in some cases was an intentional product feature rather than a breach. The company said public project access was designed to help users discover what others were building on the platform. However, that explanation did little to calm concerns across the developer and security communities, particularly as users questioned whether sensitive data had been unintentionally exposed.


In a follow-up statement, the company clarified that it had already shifted its defaults in December, making projects private unless users explicitly chose otherwise. It also acknowledged a separate issue tied to backend permission changes earlier this year.
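In practice, a secure default of this kind simply means the private setting is baked in at creation time rather than left to the user. A minimal, purely hypothetical sketch (the field names are assumptions, not Lovable's actual schema):

```python
from dataclasses import dataclass

@dataclass
class ProjectSettings:
    """Hypothetical project settings illustrating a secure default."""
    name: str
    # Private unless the user explicitly opts into public visibility.
    public: bool = False

# A newly created project starts private; exposure requires an explicit choice.
fresh = ProjectSettings(name="demo-app")
print(fresh.public)  # False
```

The point is that safety is the zero-effort path: a user who never touches the setting exposes nothing.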


“Unfortunately, in February, while unifying permissions in our backend, we accidentally re-enabled access to chats on public projects,” Lovable said. “Upon learning this, we immediately reverted the change to make all public projects’ chats private again. We appreciate the researchers who uncovered this.”


That admission reframed the incident: no longer purely a design decision, it was a combination of deliberate product choices and implementation errors.
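Lovable has not published the underlying code, but the failure mode it describes, where unifying separate permission checks silently widens access, is easy to reproduce in miniature. A purely hypothetical Python sketch (names and structure are assumptions, not Lovable's backend):

```python
from dataclasses import dataclass

@dataclass
class Project:
    """Hypothetical project with separate code and chat visibility flags."""
    code_public: bool
    chat_public: bool

def can_view_chat_before(p: Project) -> bool:
    # Original behavior: chat visibility is gated by its own flag.
    return p.chat_public

def can_view_chat_unified(p: Project) -> bool:
    # After a naive "unification" onto a single flag, chats on public
    # projects become readable again -- the shape of regression described.
    return p.code_public

legacy = Project(code_public=True, chat_public=False)
print(can_view_chat_before(legacy))   # False: chat stays private
print(can_view_chat_unified(legacy))  # True: access silently re-enabled
```

Regression tests that assert each access decision independently, rather than assuming one flag implies another, are the standard guard against refactors of this kind.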


Security experts say the distinction between a breach and a design flaw may be less important than the outcome. Tom Van de Wiele, founder of Hacker Minded, described the situation as a predictable failure in modern application security.


“This is another unfortunate example of lacking secure defaults and a failure to threat model for the automated and AI age,” he said. He added that relying on users to correctly interpret what is public versus private “always falls flat eventually.”


The incident highlights a broader tension shaping the AI coding boom. Tools designed to accelerate development often prioritize ease of use and rapid onboarding. That can come at the cost of clarity around data exposure and permission boundaries.


Van de Wiele noted that companies building these platforms face constant pressure to reduce friction for users while defending against scraping and abuse. But he emphasized that trade-offs cannot come at the expense of user safety, especially when enterprise data may be involved.


Moore warned that so-called “vibe coding” environments, where developers rely heavily on AI assistance, can amplify these risks if safeguards are not clearly enforced.


“If users can accidentally expose sensitive data through AI coding defaults, attackers don't need to hack anything at all,” he said.


The Lovable situation arrives amid a string of recent security incidents across the AI ecosystem. Anthropic recently disclosed an accidental leak involving internal files and source code, while Vercel reported unauthorized access to parts of its infrastructure tied to a third-party compromise involving Context.ai.


Vercel said it is continuing its investigation and has brought in external incident response experts, noting that law enforcement has been notified.


Industry leaders say these events point to a structural issue rather than isolated mistakes. As AI development platforms compress the time required to build and deploy applications, they also expand the attack surface at an equally rapid pace.


Ryan McCurdy, VP of Marketing at Liquibase, said the real risk extends beyond code quality.


“This incident is a reminder that the risk in AI-generated development is not just bad code. It is bad control design,” he said. “When application creation speeds up, permissions, secrets exposure, and database access paths can become part of the attack surface just as quickly.”


Vishal Agarwal, CTO of Averlon, pointed to the added danger of exposing AI chat histories alongside source code.


“It’s one thing to have access to the sauce. It’s another to have access to its recipe,” he said. “With inadvertent leakage of chat history, attackers gain access to reconnaissance information that can be leveraged to target the organization more precisely.”


For now, Lovable maintains that the issue has been addressed. But the episode underscores a growing reality of the AI era: in platforms where sharing, collaboration, and automation are core features, the line between intended visibility and unintended exposure is becoming harder to manage.


As enterprises increasingly adopt AI-assisted development tools, the burden is shifting toward vendors to ensure that secure defaults, clear permissions, and transparent communication are not optional features but foundational requirements.
