You asked Cursor to add user authentication to your app. Five minutes later, users can sign up and log in. There is a login form, a registration page, and protected routes that redirect unauthenticated visitors. It works perfectly.
What you did not see is that the AI stored the JWT signing secret as the string supersecretjwt directly in your source code. Or that login sessions never expire. Or that the password reset flow leaks whether an email address exists in your database. Or that any user can access any other user's account by modifying a single cookie value.
These are not edge cases. A 2026 study by the Cloud Security Alliance found that 42% of AI-generated authentication code uses hardcoded secrets. In a separate assessment by GuardMint, 91.5% of 200+ vibe-coded applications contained at least one exploitable vulnerability traceable directly to AI-generated code.
Why Auth Is the Worst Place to Trust AI
Authentication is the front door to your application. Every other security measure — database access controls, API rate limiting, data encryption — assumes that the system knows who is making the request. If authentication is broken, everything behind it is exposed.
AI coding tools are particularly bad at authentication because auth is inherently context-dependent. A correct login flow depends on your specific data model, your user roles, your compliance requirements, and your threat model. AI tools do not have this context. They generate auth code that is syntactically correct and functionally complete — users can log in and out — but riddled with security gaps that only become visible under adversarial conditions.
As WorkOS put it bluntly: "Vibe code everything except your auth." Authentication is the one part of your application where AI-generated code is most likely to create catastrophic vulnerabilities.
The Six Auth Vulnerabilities AI Tools Create
After reviewing dozens of vibe-coded applications at Diffian, we see the same authentication failures appear repeatedly. Each is a direct consequence of how AI coding assistants approach auth by default.
1. Hardcoded JWT Secrets
This is the most common and most dangerous pattern. When you ask an AI to implement JWT-based authentication, it generates a signing secret inline. That secret is what prevents attackers from forging authentication tokens. If it is predictable or exposed, anyone can create a valid admin token and walk into your application.
```javascript
// AI-generated — works fine, completely compromised
const token = jwt.sign(
  { userId: user.id, role: user.role },
  'supersecretjwt',
  { expiresIn: '7d' }
);
```
The string supersecretjwt is one of the most common JWT secrets found in AI-generated code. An attacker who guesses it — or finds it in your public Git repository — can forge tokens for any user, including administrators. The fix is to use a cryptographically random secret stored in environment variables, rotated regularly.
```javascript
// Secret from environment, cryptographically random
const token = jwt.sign(
  { userId: user.id, role: user.role },
  process.env.JWT_SECRET,
  { expiresIn: '1h' }
);
```
2. Sessions That Never Expire
AI tools often generate authentication tokens with extremely long lifetimes — seven days, thirty days, or no expiration at all. A token that never expires means a stolen credential grants permanent access. If a user's laptop is compromised, their session token works forever.
Production authentication requires short-lived access tokens (typically 15 minutes to 1 hour) paired with longer-lived refresh tokens that can be revoked. AI-generated code almost never implements this pattern. It creates a single long-lived token because that is simpler, and simplicity is what AI optimises for.
3. Broken OAuth Implementation
When you ask an AI to add "Sign in with Google" or "Sign in with GitHub", it generates an OAuth flow. Research shows that 42% of AI-generated OAuth code uses the deprecated Implicit Grant flow, which exposes access tokens in the browser URL. The secure alternative — the Authorisation Code flow with PKCE — is more complex, and AI tools default to the simpler, insecure option.
Even when the AI uses the correct OAuth flow, it routinely skips the state parameter. This parameter prevents Cross-Site Request Forgery (CSRF) attacks on your login flow. Without it, an attacker can trick a user into authenticating with the attacker's account, giving the attacker access to everything the user does next. ReversingLabs found that 63% of vibe-coded apps lack proper CSRF protection.
4. Missing Authorisation After Authentication
This is the gap between "who are you?" and "what are you allowed to do?" AI tools are reasonably good at verifying that a user is logged in. They are poor at enforcing what that user can access. The result: any authenticated user can access any other user's data.
Consider an API endpoint like /api/invoices/456. AI-generated code will verify that the request includes a valid authentication token. It will not verify that the authenticated user owns invoice 456. This is an Insecure Direct Object Reference (IDOR), and 71% of vibe-coded applications have inadequate authorisation checks according to ReversingLabs' 2026 research.
We covered the database-level consequences of this in our database security article, but the root cause is almost always in the authentication middleware — it confirms identity but never checks permissions.
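The missing check is a single comparison. A minimal sketch — the invoices store and function names are illustrative, not from any framework:

```javascript
// Minimal ownership check for /api/invoices/:id. AI-generated code typically
// verifies the token and stops; the ownerId comparison is the part it omits.
const invoices = new Map([
  [456, { id: 456, ownerId: 'user-a', total: 99 }],
]);

function getInvoice(authenticatedUserId, invoiceId) {
  const invoice = invoices.get(invoiceId);
  if (!invoice) return { status: 404 };
  // Authorisation, not just authentication: does this user own the record?
  if (invoice.ownerId !== authenticatedUserId) return { status: 403 };
  return { status: 200, invoice };
}
```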
5. Password Storage Without Proper Hashing
AI-generated code sometimes stores passwords with weak hashing (MD5, SHA-1) or, in the worst cases, in plain text. Even when it uses bcrypt correctly, it often sets an insufficient cost factor. A cost factor of 4 (which we have seen AI generate) can be brute-forced orders of magnitude faster than the recommended minimum of 10.
Additionally, AI tools rarely implement account lockout or rate limiting on login attempts. Without these controls, an attacker can attempt millions of password combinations against your login endpoint with no resistance.
6. User Enumeration via Error Messages
When a login attempt fails, AI-generated code typically returns different error messages depending on whether the email exists ("No account found") versus whether the password is wrong ("Incorrect password"). This tells an attacker exactly which email addresses have accounts in your system.
The same problem appears in password reset flows. If the reset endpoint says "Email not found" for non-existent accounts and "Reset link sent" for existing ones, an attacker can enumerate your entire user base. The secure approach — returning the same message regardless — is less helpful to the user, so AI tools do not default to it.
The Moltbook Incident: What Happens When AI Auth Ships to Production
In January 2026, Moltbook launched as an AI-powered social network. The founder publicly stated he "didn't write a single line of code." Within three days, security researchers at Wiz discovered that the application had exposed its entire production database, including 1.5 million API authentication tokens, 35,000 email addresses, and private messages.
The root cause: AI-generated code exposed the Supabase API key in client-side JavaScript without enabling Row-Level Security policies. Any user with browser developer tools could see the key and access the entire database directly.
Moltbook is the canonical example of what happens when AI-generated auth ships to production without engineering review. The app worked perfectly — until someone looked at the source code.
This is not an isolated case. Georgia Tech's Vibe Security Radar tracked 35 CVEs directly attributed to AI-generated code in March 2026 alone, up from 6 in January. The acceleration matches the adoption curve of vibe coding tools.
How to Fix It: An Auth Security Checklist
You do not need to rewrite your authentication system from scratch. Most AI-generated auth can be hardened in a focused sprint. Here is what to check and fix, in priority order.
Step 1: Audit Every Secret and Credential
Search your entire codebase — including Git history — for hardcoded secrets. Look for JWT signing keys, API keys, database passwords, and OAuth client secrets. Every secret must move to environment variables. Any secret that has ever been committed to a Git repository should be considered compromised and rotated immediately.
Use a tool like gitleaks or trufflehog to scan your Git history. Secrets committed three months ago and then removed are still in the history and still exploitable.
Step 2: Implement Proper Token Lifecycle
Replace long-lived tokens with short-lived access tokens (15 minutes to 1 hour) and secure refresh tokens. Refresh tokens should be stored in HTTP-only cookies — never in localStorage, which is accessible to any JavaScript running on the page, including injected scripts from XSS vulnerabilities.
Implement token revocation so that when a user logs out or changes their password, all existing sessions are invalidated. AI-generated code almost never includes revocation.
Step 3: Fix OAuth Flows
If you are using social login (Google, GitHub, etc.), verify that your implementation uses the Authorisation Code flow with PKCE, not the Implicit Grant. Confirm that the state parameter is generated, stored in the session, and validated when the OAuth provider redirects back. This prevents CSRF attacks on your login flow.
Step 4: Add Authorisation Checks to Every Endpoint
Every API endpoint that serves user-specific data needs a check: does the authenticated user have permission to access this resource? Implement this as middleware that runs before your route handlers, not as ad-hoc checks scattered through your code. Test it by logging in as User A and requesting User B's data — if the request succeeds, your authorisation is broken.
Step 5: Harden Password Handling
Verify that passwords are hashed with bcrypt (cost factor 10+), scrypt, or Argon2. Implement account lockout after 5-10 failed attempts, with exponential backoff. Add rate limiting to your login endpoint — a maximum of 10-20 attempts per IP per minute is a reasonable starting point.
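Lockout with backoff is straightforward to sketch in memory — the thresholds and names here are illustrative, and production code would persist this state rather than hold it in a Map:

```javascript
// In-memory sketch of account lockout with exponential backoff.
// Threshold (5 failures) and doubling schedule are illustrative defaults.
const failures = new Map(); // email -> { count, lockedUntil }

function recordFailure(email, now = Date.now()) {
  const entry = failures.get(email) || { count: 0, lockedUntil: 0 };
  entry.count += 1;
  if (entry.count >= 5) {
    // From the 5th failure: lock for 2^(count - 5) minutes — 1, 2, 4, 8...
    const minutes = 2 ** (entry.count - 5);
    entry.lockedUntil = now + minutes * 60 * 1000;
  }
  failures.set(email, entry);
}

function isLocked(email, now = Date.now()) {
  const entry = failures.get(email);
  return Boolean(entry) && entry.lockedUntil > now;
}

function recordSuccess(email) {
  failures.delete(email); // reset the counter on successful login
}
```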
Step 6: Eliminate User Enumeration
Login failures should return the same message regardless of whether the email exists: "Invalid email or password." Password reset flows should always say "If an account exists with that email, a reset link has been sent." This prevents attackers from building a list of valid email addresses in your system.
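The enumeration-safe pattern boils down to one rule: the lookup result must never change what the caller sees. A minimal sketch:

```javascript
// Sketch: identical responses whether or not the account exists. The user
// lookup still happens internally, but its outcome never leaks.
const GENERIC_LOGIN_ERROR = 'Invalid email or password.';
const GENERIC_RESET_MESSAGE =
  'If an account exists with that email, a reset link has been sent.';

function loginResponse(userFound, passwordCorrect) {
  if (userFound && passwordCorrect) return { status: 200, body: 'ok' };
  // Same status and message for "no such user" and "wrong password"
  return { status: 401, body: GENERIC_LOGIN_ERROR };
}

function passwordResetResponse(userFound) {
  // Send the reset email only when userFound is true — the response
  // returned to the caller is the same either way.
  return { status: 200, body: GENERIC_RESET_MESSAGE };
}
```

One subtlety worth testing: response timing can leak the same information, so the failure paths should take roughly equal time as well.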
Should You Use an Auth Provider Instead?
For most vibe-coded applications, the answer is yes. Services like Auth0, Clerk, Supabase Auth, or Firebase Authentication handle the complexity of secure authentication — token management, OAuth flows, password hashing, session handling — so you do not have to implement it yourself.
The critical caveat: you still need to configure these services correctly. AI-generated code that integrates with Supabase Auth but exposes the service role key in the frontend — exactly what happened with Moltbook — is just as vulnerable as hand-rolled auth done badly. The auth provider handles the hard parts, but you are still responsible for not undermining it.
If your app handles sensitive data — health information, financial records, personal details — or needs to meet compliance standards like SOC 2 or ISO 27001, professional review of your auth implementation is not optional. It is a requirement.
The Bottom Line
Authentication is the one area where AI coding tools are most dangerous, because the consequences of getting it wrong are total. A SQL injection might expose one table. Broken auth exposes everything — every user's data, every admin function, every piece of sensitive information your application holds.
The good news is that auth vulnerabilities are fixable. A focused security audit of your authentication system takes hours, not weeks. The cost of not doing it is measured in breached accounts, stolen data, and the kind of incident that ends startups before they start.
If your vibe-coded app has users, authentication security is not a nice-to-have. It is the foundation everything else depends on.