
You told Cursor to build you a SaaS app with user accounts and a database. Twenty minutes later, it works. Users can sign up, log in, and store data. You ship it.

What you probably did not see is that the AI put your database password in a client-side config file. Or that every API route queries the database by concatenating user input directly into SQL strings. Or that any authenticated user can read any other user's data by changing a single number in the URL.

These are not hypothetical edge cases. Research from early 2026 shows that AI-generated code fails secure database handling checks roughly 20% of the time. One in five database interactions contains a vulnerability that a competent attacker can exploit.

- 20% of AI-generated database code fails security checks
- 41% of AI-generated backend code ships with overly broad permissions
- 35 CVEs attributed to AI-generated code in March 2026

The Five Database Security Holes AI Tools Create

After auditing dozens of vibe-coded applications at Diffian, we see the same five database security failures appear again and again. Each one is a direct result of how AI coding assistants generate database code by default.

1. SQL Injection via String Concatenation

This is the oldest vulnerability in web development, and AI tools still produce it routinely. When you ask an AI to build a search feature or a user lookup, it will often concatenate user input directly into the query string rather than using parameterised queries.

```javascript
// Vulnerable: AI-generated code. Looks clean, works fine, completely exploitable.
const user = await db.query(
  `SELECT * FROM users WHERE email = '${req.body.email}'`
);
```

An attacker can type ' OR '1'='1 into the email field and dump your entire users table. This is not sophisticated. It is the first thing any penetration tester tries. The fix is straightforward — use parameterised queries — but AI tools do not apply it consistently.

```javascript
// Secure: parameterised query. User input never touches the SQL string.
const user = await db.query(
  'SELECT * FROM users WHERE email = $1',
  [req.body.email]
);
```

2. Connection Strings in Source Code

AI coding tools generate database connection strings inline. They will put your PostgreSQL URL — complete with username, password, host, and database name — directly in your application code. If that code ends up in a public GitHub repository, or if an attacker gains read access to your deployment, they have full database access.

We have seen Supabase keys, PlanetScale credentials, and Firebase admin tokens hardcoded into frontend JavaScript files. Not backend configs. Frontend files. Anyone with a browser's developer tools can see them.

3. Missing Row-Level Access Controls

This is perhaps the most dangerous pattern because it is invisible during normal testing. AI tools generate API endpoints that fetch data by ID — /api/users/123 returns user 123's data. The problem: AI rarely adds a check to verify that the currently authenticated user is user 123.

Any logged-in user can change that ID to 124, 125, or iterate through every record in your database. This is called an Insecure Direct Object Reference (IDOR), and it is the vulnerability behind some of the largest data breaches in recent years. AI tools almost never implement row-level security by default.

4. Over-Privileged Database Users

When AI generates your database setup, it typically connects your application using a database user with full administrative privileges. That single connection can create tables, drop tables, read every schema, and modify any row in any table.

In a production application, your web server should connect with the minimum permissions it needs — typically read and write to specific tables, nothing more. If an attacker exploits any vulnerability in your app (even a minor one), the impact is contained. With admin-level access, a single vulnerability gives them everything.

Research from 2026 indicates that 41% of AI-generated backend code ships with overly broad permission settings, dramatically expanding the blast radius of any breach.

5. No Database Migration Strategy

AI tools generate schema changes inline. They modify tables directly in application code. There is no migration history, no rollback capability, and no way to know what the database schema looked like last week versus today.

This becomes a security problem when you need to patch a vulnerability. If you discover that a column is storing sensitive data unencrypted, you need a migration to encrypt it. Without a migration system, you are writing raw SQL against production with no safety net and no audit trail.

The Real-World Impact

These are not theoretical risks. The number of CVEs directly attributed to AI-generated code jumped from 6 in January 2026 to 35 in March 2026. The trend is accelerating as more non-technical founders ship vibe-coded applications to production without security review.

The uncomfortable truth is that AI tools optimise for making things work, not for making things safe. A database query that returns the right data is a success in the AI's view, regardless of whether it can be exploited.

When a breach happens, the cost is not just technical. GDPR fines can reach 4% of annual turnover. Customer trust, once lost to a data breach, rarely returns. And the reputational damage is permanent — your company name becomes a cautionary tale in security blogs.

How to Fix It: A Practical Checklist

You do not need to rewrite your application from scratch. Most database security issues can be addressed in a focused security sprint. Here is what to check and fix, in priority order.

Step 1: Audit Every Database Query

Search your codebase for any string that includes a SQL keyword (SELECT, INSERT, UPDATE, DELETE) combined with template literals or string concatenation. Every instance needs to be converted to parameterised queries or an ORM that handles parameterisation automatically.
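The search itself can be partly automated. The sketch below is a heuristic line checker, not a complete linter: it flags lines that combine a SQL keyword with template-literal interpolation or string concatenation so you can review them by hand.

```javascript
// Heuristic: does this line of source build SQL from dynamic strings?
const SQL_KEYWORD = /\b(SELECT|INSERT|UPDATE|DELETE)\b/i;

function flagSuspiciousSql(line) {
  if (!SQL_KEYWORD.test(line)) return false;
  const usesInterpolation = /\$\{[^}]+\}/.test(line); // `... ${input} ...`
  const usesConcatenation = /['"`]\s*\+|\+\s*['"`]/.test(line); // '...' + input
  return usesInterpolation || usesConcatenation;
}
```

Run it over every line of every source file (for example via `grep -rn` output piped through a small Node script). Expect false positives; the goal is a review list, not a verdict.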

Step 2: Move All Credentials to Environment Variables

Every database connection string, API key, and secret token should live in environment variables, never in source code. Use a secrets manager like AWS Secrets Manager, Doppler, or even a simple .env file that is excluded from version control. Rotate any credential that has ever been committed to a Git repository — treat it as compromised.
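A minimal pattern for this, assuming a Node application: read every secret through one helper that fails fast at startup rather than at first query, so a missing credential can never silently fall back to a hardcoded default.

```javascript
// Read a required secret from the environment; crash at boot if absent.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage with a connection pool (pg shown as an illustration):
// const pool = new Pool({ connectionString: requireEnv('DATABASE_URL') });
```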

Step 3: Implement Row-Level Access Controls

Every API endpoint that returns user-specific data needs a check: does the authenticated user have permission to access this specific record? This can be implemented at the application level (middleware that verifies ownership) or at the database level (Postgres Row-Level Security policies).
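At the application level, the ownership rule is small enough to be a pure function, which also makes it easy to test. A sketch: return 404 when the record does not exist and 403 when it exists but belongs to someone else (returning 404 for both is also reasonable if you prefer not to reveal that the record exists).

```javascript
// Core IDOR guard: decide the HTTP status for a user-scoped record.
function accessStatus(authUserId, recordOwnerId) {
  if (recordOwnerId === undefined) return 404; // no such record
  if (recordOwnerId !== authUserId) return 403; // exists, but not theirs
  return 200;
}

// Express-style usage (loadOwnerId and loadUser are hypothetical helpers):
// app.get('/api/users/:id', async (req, res) => {
//   const status = accessStatus(req.user.id, await loadOwnerId(req.params.id));
//   if (status !== 200) return res.status(status).end();
//   res.json(await loadUser(req.params.id));
// });
```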

Step 4: Create a Least-Privilege Database User

Create a new database user for your application with only the specific permissions it needs. Typically this means SELECT, INSERT, UPDATE, and DELETE on specific tables — no CREATE, DROP, or ALTER permissions. Your migration tool can use a separate, more privileged user.
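As a sketch for PostgreSQL, the grants can be generated per table; the user and table names here are illustrative, and both must be trusted values you control, never user input, since they are interpolated into DDL.

```javascript
// Build least-privilege GRANT statements for the application's DB user:
// row-level DML on named tables only, no CREATE, DROP, or ALTER.
function leastPrivilegeGrants(appUser, tables) {
  return tables.map(
    (table) => `GRANT SELECT, INSERT, UPDATE, DELETE ON ${table} TO ${appUser};`
  );
}
```

Run the resulting statements once as an administrative user; from then on the application connects only as the restricted user.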

Step 5: Set Up a Migration System

Adopt a migration tool appropriate for your stack — Prisma Migrate, Drizzle Kit, Knex migrations, or Flyway for Java applications. Retroactively create an initial migration from your current schema, then make all future changes through migrations. This gives you version history, rollback capability, and an audit trail.
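For illustration, a Knex migration is one file per change exporting `up` and `down`; `down` is what gives you rollback. Table and column names below are illustrative.

```javascript
// Sketch of a Knex migration file, e.g. migrations/20260301_create_users.js
const migration = {
  up: (knex) =>
    knex.schema.createTable('users', (table) => {
      table.increments('id').primary();
      table.string('email').notNullable().unique();
      table.timestamps(true, true); // created_at / updated_at with defaults
    }),
  down: (knex) => knex.schema.dropTable('users'),
};

module.exports = migration;
```

Applying is `knex migrate:latest`; reverting the last batch is `knex migrate:rollback`, and the `knex_migrations` table becomes your audit trail.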

Why AI Cannot Fix This For You

You might be wondering: can you just ask the AI to secure the database code? Partially. The self-reflection approach — generating code, then asking the AI to review it for security issues — catches some problems. But research shows that AI models have systematic blind spots around context-dependent vulnerabilities like IDOR and privilege escalation.

An AI can spot an obvious SQL injection. It struggles with the question of whether user 123 should be able to see user 124's data, because that depends on your application's business logic, permission model, and data sensitivity — context that the AI does not have.

This is the gap that professional security review fills. Not replacing the AI, but providing the contextual judgement that the AI lacks.

The Bottom Line

Your vibe-coded app probably has database security issues. That is not a failure — it is the predictable result of using tools that prioritise speed over security. The question is whether you find those issues before an attacker does.

A focused database security audit takes hours, not weeks. The cost of not doing it is measured in breached user data, regulatory fines, and destroyed trust. If your app stores any user data at all — emails, passwords, payment information, personal details — database security is not optional. It is the minimum.

Mark Hayward
Founder & Lead Engineer

Mark founded Diffian to bridge the gap between AI-assisted development and production-grade engineering. With experience shipping secure, scalable systems across startups and enterprises, he leads Diffian's security hardening and infrastructure practice from Cardiff, Wales.