170 Apps Leaked User Data. They Were All Built the Same Way.

2026-03-19 · 4 min read · ai, security, engineering

Lovable CVE. Moltbook API keys. AlterSquare's audit. The same three security holes in every vibe-coded app. This isn't anti-AI. It's pro-engineering.


In early 2026, 170+ applications built on Lovable — one of the most popular AI app builders — were found to be leaking user data through a single vulnerability. CVE-2025-48757. CVSS score: 8.26 (High). Every app had the same pattern. Every app had the same three holes.

This isn't an isolated incident. It's the pattern.

The same three holes

Every security audit of vibe-coded applications finds the same problems. Not exotic zero-days. Not sophisticated attack vectors. The basics:

1. Missing input validation. AI-generated code trusts user input by default. It doesn't add sanitization unless you explicitly prompt for it — and even then, it misses edge cases. SQL injection, XSS, path traversal — the OWASP Top 10 from 2003 is back because the code generators don't think adversarially.
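To make the gap concrete, here is a minimal sketch of the injection pattern. The function names and schema are hypothetical; the point is the difference between concatenating input into SQL (what generated code often does) and using the driver's parameterized placeholders:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is concatenated straight into the SQL string.
    # A payload like "x' OR '1'='1" turns the WHERE clause into a tautology
    # and returns every row in the table.
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver binds the value separately, so
    # injected SQL is treated as literal data, not as part of the statement.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 — injection dumps all rows
print(len(find_user_safe(conn, payload)))    # 0 — payload matched literally
```

Both functions "work" on normal input, which is exactly why the unsafe one survives a happy-path demo.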

2. Exposed secrets and keys. Moltbook, built by a zero-code founder using AI tools, leaked 1.5 million API keys in its first 3 days. The AI put credentials in client-side code, committed .env files, and created API endpoints with no authentication. It didn't know not to — nobody prompted it to care about secrets management.
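The fix is boring and well known: secrets live in the deployment platform's secret store and reach the process through the environment, never through source or client bundles. A minimal server-side sketch (the `API_KEY` variable name is an assumption for illustration):

```python
import os

# Hardcoded credential: ends up in the repo, the build artifact, and
# every client that downloads the app. This is the pattern that leaks.
# API_KEY = "sk-live-..."   # never do this

def get_api_key():
    # Read the secret from the server-side environment at runtime. The
    # value is set by the deployment platform's secret store, so it never
    # appears in source control or in client-side code.
    key = os.environ.get("API_KEY")
    if not key:
        # Fail closed at startup rather than limping along unauthenticated.
        raise RuntimeError("API_KEY is not set; refusing to start")
    return key
```

Pair this with a `.gitignore` entry for `.env` files so local credentials can't be committed by accident.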

3. Broken access control. The Lovable CVE was fundamentally an access control failure. User A could see User B's data. The AI generated CRUD operations that worked — they read and wrote data correctly — but never implemented row-level security. Every user could access every row. The demo worked perfectly because the demo only had one user.
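Here is the shape of that failure in a minimal sketch (the `notes` schema is hypothetical): the generated read "works," but nothing ties rows to the requesting user until the query is scoped by the authenticated user's id:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (owner INTEGER, body TEXT)")
conn.executemany("INSERT INTO notes VALUES (?, ?)",
                 [(1, "alice's note"), (2, "bob's note")])

def get_notes_generated(conn):
    # The shape generated CRUD tends to take: reads and writes are
    # correct, but there is no notion of who is asking. Every user
    # sees every row — fine in a one-user demo, a breach in production.
    return conn.execute("SELECT owner, body FROM notes").fetchall()

def get_notes_scoped(conn, current_user_id):
    # Row-level check: every query is filtered by the authenticated
    # user's id, so User A can no longer read User B's rows.
    return conn.execute(
        "SELECT owner, body FROM notes WHERE owner = ?",
        (current_user_id,),
    ).fetchall()
```

In production you would push this down into the database itself (e.g. Postgres row-level security policies) so a forgotten `WHERE` clause fails safe instead of leaking.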

Why AI code is 2.74x more vulnerable

CodeRabbit analyzed 470 pull requests comparing AI-generated and human-written code. The findings:

  • 2.74x more security vulnerabilities in AI-generated code
  • 8x more performance issues
  • Consistent patterns: missing validation, improper error handling, insecure defaults

This isn't because AI writes "bad" code. The code runs. The syntax is clean. The logic is correct for the happy path. The problem is what's missing — the defensive engineering that experienced developers add by instinct.

A senior engineer writes authentication middleware and immediately thinks: "What if the token is expired? What if it's malformed? What if someone replays it? What if the signing key rotates?" The AI writes middleware that validates the token. Full stop.
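Those extra questions translate directly into extra checks. A minimal sketch of a signed-token verifier, using stdlib HMAC rather than any particular JWT library (the token format and `SECRET` are illustrative assumptions), shows the checks a generator typically skips — structure, signature, and expiry, not just "does it decode":

```python
import hmac, hashlib, json, base64, time

SECRET = b"demo-signing-key"  # illustrative; rotate and store securely in practice

def sign(payload: dict) -> str:
    # Token = base64(JSON payload) + "." + HMAC-SHA256 signature.
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify(token: str, now=None):
    now = now if now is not None else time.time()
    try:
        body_b64, sig = token.rsplit(".", 1)
    except ValueError:
        return None  # malformed: no signature part at all
    body = body_b64.encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or corrupted signature (constant-time compare)
    try:
        payload = json.loads(base64.urlsafe_b64decode(body))
    except Exception:
        return None  # fail closed on undecodable payloads
    if payload.get("exp", 0) <= now:
        return None  # expired token
    return payload
```

Replay protection and key rotation would add a nonce store and a key-id lookup on top of this; the sketch only covers the first three questions.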

Amazon's Sev-1 week

Amazon experienced 4 Sev-1 outages in a single week from AI-generated code changes. These weren't experimental features. They were production changes that passed automated code review — because the review tools were also AI-powered, and they shared the same blind spots as the generators.

When your code generator and your code reviewer both miss the same class of bugs, you don't have a safety net. You have the illusion of one.

This isn't anti-AI

This is pro-engineering.

AI code generation is the most powerful productivity tool in the history of software development. Using it without engineering discipline is like handing a teenager the keys to a Formula 1 car. The car is incredible. The outcome is predictable.

The 70% that AI generates is valuable. It saves weeks of boilerplate, scaffolding, and integration work. The mistake is assuming that the remaining 30% is more of the same — more code, more features, more prompting.

The last 30% is a different discipline entirely:

  • Security review by someone who thinks adversarially, not generatively
  • Evaluation pipelines that test failure modes, not just happy paths
  • Access control audits that verify row-level security, not just endpoint auth
  • Secrets management that prevents the next 1.5M key leak
  • Performance testing under load, not in a demo environment with 3 users
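The second bullet is the easiest to start on. A failure-mode test suite, sketched against a hypothetical input parser, deliberately feeds the inputs a demo never does:

```python
def parse_quantity(raw: str) -> int:
    # Hypothetical parser with the defensive checks generated code omits.
    value = int(raw)          # raises ValueError on non-numeric input
    if not 1 <= value <= 100:
        raise ValueError("quantity out of range")
    return value

# Happy-path test — the only kind a demo exercises:
assert parse_quantity("3") == 3

# Failure-mode tests — the kind production hardening adds:
for bad in ["", "abc", "-1", "0", "101", "1e9"]:
    try:
        parse_quantity(bad)
        raise AssertionError(f"accepted bad input: {bad!r}")
    except ValueError:
        pass  # rejected, as it should be
```

The pattern generalizes: for every function that touches external input, enumerate the inputs an adversary would send and assert they are rejected.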

The production hardening gap

Every week, teams ship AI-generated prototypes to production without this engineering layer. Most get lucky — their app is small enough, their users few enough, their data innocuous enough that the holes don't matter yet.

Some don't get lucky. 170 apps worth of users found out the hard way.

If you've built with AI tools and you're about to ship to production, the cheapest insurance is a security audit before launch — not an incident response after. That's exactly what production hardening means: taking the 70% that AI gave you and engineering the 30% that protects your users, your data, and your business.