
5 Security Holes AI-Generated Code Loves to Create

Security · March 1, 2026 · 10 min read

I've audited dozens of AI-generated codebases over the past year. ChatGPT, Cursor, Copilot, Lovable, Bolt. The pattern is always the same: the app works, it looks good, and it has security holes you could drive a truck through.

AI tools optimize for "does it run?" not "is it safe?" That's fine for prototyping, but the moment real users are involved, these vulnerabilities become serious liabilities.

Here are the five I find most often, and how to fix each one.

1. Exposed API Keys and Secrets

This is the most common one by far. AI tools regularly hardcode API keys, database connection strings, and secret tokens directly in client-side code.

What it looks like:

// This is sitting in your React component
const supabaseKey = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...'
const stripeSecret = 'sk_live_51abc123...'

Anyone who opens the browser console can see these. If it's a Stripe secret key, they can issue refunds, create charges, or access your customer data.

How to fix it:
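Move every secret to an environment variable and only read it on the server. A minimal sketch, assuming Node (the env var name STRIPE_SECRET_KEY is hypothetical):

```javascript
// Read a secret from the environment on the server, and fail fast at
// startup if it's missing instead of shipping a broken or leaky build.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Server-side only; this line never ships to the browser:
// const stripe = require('stripe')(requireEnv('STRIPE_SECRET_KEY'));
```

The browser should only ever talk to your own API routes, which hold the secret server-side. Keys that are designed to be public (like a Supabase anon key) are a different case, but they're only safe in the client when row-level security is actually enabled.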

2. No Server-Side Validation

AI-generated apps love to validate inputs only on the client side: form checks in React, required fields in HTML, frontend regex patterns. All of it can be bypassed in about ten seconds.

The reality: Any data sent to your server can be modified. Someone can open the browser dev tools, change the request body, and send whatever they want.

How to fix it:

// Bad: trusting the price from the frontend
const { price } = req.body;
await createCharge(price);

// Good: looking up the price server-side
const product = await db.products.findById(req.body.productId);
await createCharge(product.price);
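Beyond looking up trusted values server-side, validate the shape of every request body before using it. A plain-JS sketch for a hypothetical checkout payload (in practice a schema library like zod or joi does this more cleanly):

```javascript
// Validate a checkout request body on the server. Returns a list of
// errors; an empty list means the input passed every check.
function validateCheckout(body) {
  const errors = [];
  if (typeof body.productId !== 'string' || body.productId.length === 0) {
    errors.push('productId must be a non-empty string');
  }
  if (!Number.isInteger(body.quantity) || body.quantity < 1 || body.quantity > 100) {
    errors.push('quantity must be an integer between 1 and 100');
  }
  return errors;
}
```

In the route handler, reject the request with a 400 if the list is non-empty, before touching the database.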

3. Broken Authentication and Session Handling

AI code often implements authentication in a way that technically works but is wildly insecure. Common patterns I see: passwords stored in plain text, session tokens that never expire, JWTs signed with a secret that's hardcoded in the repo, and role checks that only happen in the frontend.

How to fix it:

4. Missing Row-Level Security

This one is subtle and dangerous. The app has authentication, so users can log in. But once they're logged in, they can access any other user's data by changing an ID in the URL or API request.

What it looks like:

// API route that fetches user profile
app.get('/api/users/:id', async (req, res) => {
  const user = await db.users.findById(req.params.id);
  res.json(user);
});

There's no check to make sure the logged-in user is requesting their own data. Anyone can fetch anyone else's profile, orders, payment info, whatever.

How to fix it:
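Every route that touches user-owned data needs an ownership check against the authenticated user, never against anything the client sends. A sketch of the same route with the check extracted into a helper (requireAuth, req.user, and db are assumed names for illustration):

```javascript
// Core authorization rule: users can access their own record,
// admins can access any record.
function canAccess(requestedId, currentUser) {
  return currentUser.id === requestedId || currentUser.role === 'admin';
}

// Used inside the Express route from above (shown for context):
// app.get('/api/users/:id', requireAuth, async (req, res) => {
//   if (!canAccess(req.params.id, req.user)) {
//     return res.status(403).json({ error: 'Forbidden' });
//   }
//   res.json(await db.users.findById(req.params.id));
// });
```

If you're on Supabase or Postgres, enabling row-level security policies enforces the same rule at the database layer, so a forgotten check in one route doesn't become a breach.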

5. SQL Injection and NoSQL Injection

AI-generated code sometimes builds database queries by concatenating strings with user input. This is one of the oldest and most well-known vulnerabilities in web development, and AI tools still do it.

What it looks like:

// Never do this
const query = "SELECT * FROM users WHERE email = '" + req.body.email + "'";

An attacker can input something like ' OR '1'='1 and get access to your entire database.

How to fix it:
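Use parameterized queries: the driver sends the SQL and the values separately, so user input can never change the query's structure. A small demonstration of why concatenation fails, followed by the parameterized form (pool.query is node-postgres syntax, an assumed driver; mysql2 uses ? instead of $1):

```javascript
// The vulnerable pattern, shown only to demonstrate the attack:
// the input rewrites the query itself.
function unsafeQuery(email) {
  return "SELECT * FROM users WHERE email = '" + email + "'";
}

const injected = unsafeQuery("' OR '1'='1");
// injected is now: SELECT * FROM users WHERE email = '' OR '1'='1'
// ...a condition that matches every row in the table.

// The fix with a parameterized query (node-postgres style):
// const result = await pool.query(
//   'SELECT * FROM users WHERE email = $1',
//   [req.body.email]
// );
```

Every mainstream driver and ORM supports this; if you see a query built with string concatenation or template literals around user input, treat it as a bug regardless of whether anyone has exploited it yet.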

The Bottom Line

If real people are using your app, especially if they're entering passwords, payment info, or personal data, you need to check for these issues. AI tools won't flag them for you.

A quick security audit before launch can save you from a data breach, legal liability, and the kind of reputation damage that's hard to come back from.

Not sure if your app is secure?

I offer security audits starting at $1,000. I'll go through your codebase, identify every vulnerability, and give you a clear plan to fix them.

Request an Audit