5 Prompting Patterns That Get Better Code from AI Every Time
Here’s a pattern I see all the time: a developer opens their AI tool, types “build me a login form,” gets something mediocre back, and concludes that AI coding isn’t that useful yet.
The problem isn’t the AI. It’s the prompt.
The gap between a vague prompt and a specific one is the gap between “meh” code and code you’d actually ship. I’ve been refining my prompting approach for months, and these five patterns have made the biggest difference.
1. The Constraint Prompt
Instead of telling AI what to build, tell it what to build and what constraints to respect.
Weak prompt:
Build a user registration function
Strong prompt:
Build a user registration function in Python using FastAPI. It should accept email and password, hash the password with bcrypt, store the user in a PostgreSQL database using SQLAlchemy, and return the user ID. Don’t use any global database connections — accept the db session as a parameter.
The constraints do the heavy lifting. Every constraint you add eliminates a whole category of guesses the AI would otherwise make. You’re not just describing what you want — you’re describing the shape of the solution.
```python
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from bcrypt import hashpw, gensalt
from pydantic import BaseModel, EmailStr

from .database import get_db  # assumed: the app's session-per-request dependency
from .models import User      # assumed: the app's SQLAlchemy User model

router = APIRouter()

class UserCreate(BaseModel):
    email: EmailStr
    password: str

@router.post("/register")
def register_user(user: UserCreate, db: Session = Depends(get_db)):
    existing = db.query(User).filter(User.email == user.email).first()
    if existing:
        raise HTTPException(status_code=400, detail="Email already registered")
    hashed = hashpw(user.password.encode(), gensalt())
    new_user = User(email=user.email, password_hash=hashed.decode())
    db.add(new_user)
    db.commit()
    db.refresh(new_user)
    return {"user_id": new_user.id}
```
Notice how the constraints led to clean dependency injection, proper validation, and duplicate checking — without you having to ask for any of that explicitly.
2. The Example-Driven Prompt
When you want AI to match a specific style or pattern, show it an example first.
Weak prompt:
Write tests for my API endpoints
Strong prompt:
Here’s how I write API tests in this project:
```python
def test_get_users(client, seed_users):
    response = client.get("/api/users")
    assert response.status_code == 200
    data = response.json()
    assert len(data["users"]) == len(seed_users)
```

Follow this same pattern — use the `client` fixture, keep assertions focused, and test one behavior per function. Write tests for the POST /api/users and DELETE /api/users/:id endpoints.
AI is incredibly good at pattern matching. When you show it an example from your actual codebase, it picks up on naming conventions, assertion styles, fixture usage, and structure. The output will feel like you wrote it.
3. The Role Prompt
Give the AI a role that shapes how it approaches the problem.
Weak prompt:
Review this code
Strong prompt:
You are a senior backend engineer reviewing a pull request. Focus on security issues, performance problems, and error handling gaps. Be specific — point to exact lines and explain why they’re problematic. Don’t comment on style or formatting.
The role sets expectations about depth and focus. A “senior engineer” gives different feedback than a “junior developer learning the codebase.” An “API designer” structures endpoints differently than a “frontend developer.”
This pattern is especially powerful for code review, architecture discussions, and debugging.
4. The Iterative Refinement Prompt
Don’t try to get perfect code in one shot. Use a sequence of prompts that build on each other.
Round 1: “Build a basic cache class with get, set, and delete methods using a dictionary.”
Round 2: “Add TTL support — each entry should expire after a configurable number of seconds.”
Round 3: “Add a max size limit using LRU eviction. When the cache is full, remove the least recently accessed item.”
Round 4: “Make it thread-safe using a threading lock.”
Each round adds one clear requirement. The AI builds on its previous output, and you can verify each layer works before adding complexity. This is way more reliable than asking for everything at once.
Think of it like building with Lego. One brick at a time, checking stability as you go.
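Stacked together, the four rounds above converge on something like this — a sketch of the final layer, not the AI's literal output:

```python
import threading
import time
from collections import OrderedDict

class TTLCache:
    """Rounds 1-4 combined: dict-backed cache with TTL, LRU eviction, and a lock."""

    def __init__(self, max_size=128, ttl=60.0):
        self._data = OrderedDict()   # key -> (value, expiry timestamp)
        self._max_size = max_size
        self._ttl = ttl
        self._lock = threading.Lock()

    def get(self, key, default=None):
        with self._lock:
            item = self._data.get(key)
            if item is None:
                return default
            value, expires = item
            if time.monotonic() >= expires:
                del self._data[key]      # round 2: entry expired
                return default
            self._data.move_to_end(key)  # round 3: mark as most recently used
            return value

    def set(self, key, value):
        with self._lock:
            if key in self._data:
                self._data.move_to_end(key)
            elif len(self._data) >= self._max_size:
                self._data.popitem(last=False)  # round 3: evict least recently used
            self._data[key] = (value, time.monotonic() + self._ttl)

    def delete(self, key):
        with self._lock:
            self._data.pop(key, None)
```

Because each round was verified before the next, you know exactly which layer to blame if something breaks.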
5. The Anti-Pattern Prompt
Tell the AI what you don’t want. This is surprisingly effective at avoiding common pitfalls.
Weak prompt:
Build a React form component
Strong prompt:
Build a React form component for user settings. Use controlled components. Do NOT use any external form libraries. Do NOT use useEffect for form validation — validate on submit only. Do NOT add inline styles — use CSS class names. Keep the component under 80 lines.
Negative constraints are powerful because AI models have seen millions of code examples, and they tend toward common patterns — which might not be your preferred patterns. Saying “don’t do X” steers the output away from defaults you’d have to refactor later.
The Meta-Pattern
If you look at all five patterns, there’s a theme: specificity reduces rework.
Every minute you spend writing a better prompt saves you five minutes of editing, debugging, or throwing away code. The best vibe coders I know spend more time thinking about their prompts than most people expect. It’s not laziness — it’s leverage.
Start with one of these patterns in your next coding session. Pick the one that fits your current task, and notice the difference in quality. Once you see it, you’ll never go back to vague prompts.