“Build me a dashboard.”

I see prompts like this all the time. And every time, the developer ends up frustrated because the AI produced something that looks nothing like what they had in mind.

But here’s the thing: the AI did exactly what you asked. You just didn’t ask for very much.

Prompting is a skill. As with any skill, the gap between beginner and intermediate is huge, and closing that gap doesn’t take long — just a bit of intentionality. Let me show you how to go from vague prompts to ones that get exactly what you need.

The Anatomy of a Good Prompt

Every effective code prompt has four elements:

  1. Context — what already exists
  2. Task — what you want to happen
  3. Constraints — how it should be done
  4. Output format — what the result should look like

Let’s see how adding each element transforms a prompt.

Just the task (bad):

Write a function to process payments

Task + context (better):

Write a function to process payments in my Express.js app. We’re using Stripe’s API and the user’s payment method is already saved.

Task + context + constraints (good):

Write a function to process payments in my Express.js app. We’re using Stripe’s API and the user’s payment method is already saved. Handle these cases: successful payment, card declined, and network errors. Use try/catch with specific error types. Log errors but don’t expose Stripe error details to the client.

Task + context + constraints + output format (great):

Write a function to process payments in my Express.js app. We’re using Stripe’s API and the user’s payment method is already saved. Handle these cases: successful payment, card declined, and network errors. Use try/catch with specific error types. Log errors but don’t expose Stripe error details to the client. Return an object with { success: boolean, transactionId?: string, error?: string }.

Each layer of specificity eliminates guesswork. The AI goes from infinite possibilities to a narrow, well-defined solution.
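To make this concrete, here’s a sketch of the kind of function the “great” prompt might produce. This is illustrative, not the one true answer: the function and stub names are invented, and the Stripe client is injected as a parameter so the error-handling shape can be exercised without network access (a real app would pass `new Stripe(secretKey)` from the stripe package).

```javascript
// Hypothetical output of the "great" prompt above. The Stripe client is
// injected so the sketch runs stand-alone; error type names match
// Stripe's Node library ("StripeCardError", "StripeConnectionError").
async function processPayment(stripe, customerId, paymentMethodId, amountCents) {
  try {
    const intent = await stripe.paymentIntents.create({
      amount: amountCents,
      currency: "usd",
      customer: customerId,
      payment_method: paymentMethodId,
      confirm: true,
    });
    return { success: true, transactionId: intent.id };
  } catch (err) {
    // Log full details server-side, but never leak Stripe internals to the client.
    console.error("Payment failed:", err.type, err.message);
    if (err.type === "StripeCardError") {
      return { success: false, error: "Card was declined" };
    }
    if (err.type === "StripeConnectionError") {
      return { success: false, error: "Network error, please try again" };
    }
    return { success: false, error: "Payment could not be processed" };
  }
}

// A stubbed client, standing in for the real Stripe SDK in this sketch.
const fakeStripe = {
  paymentIntents: {
    create: async () => ({ id: "pi_test_123" }),
  },
};
```

Notice how every requirement in the prompt maps to a visible piece of the code: the three error cases, the logging rule, and the exact return shape.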

Common Vague Prompts (and How to Fix Them)

“Add authentication”

This could mean a hundred different things. Fix it by being specific:

Add JWT-based authentication to the Express API. Use the jsonwebtoken package. Create a middleware function that extracts the token from the Authorization header (Bearer format), verifies it, and attaches the decoded user object to req.user. If the token is missing or invalid, return 401 with { error: "Unauthorized" }. Apply the middleware to all routes under /api/ except /api/auth/login and /api/auth/register.

Notice how every decision is made upfront: JWT (not sessions), Bearer format (not cookies), specific error format, specific excluded routes. The AI doesn’t have to guess any of this.
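Here’s one plausible shape for that middleware. As a sketch, the token verifier is injected as a function so the control flow runs stand-alone; in a real app you’d pass jsonwebtoken’s `jwt.verify` bound to your secret, and the function names here are invented for illustration.

```javascript
// Sketch of the middleware described above. verifyToken stands in for
// jsonwebtoken's jwt.verify; it should return the decoded payload or
// throw on an invalid/expired token.
function makeAuthMiddleware(
  verifyToken,
  openPaths = ["/api/auth/login", "/api/auth/register"]
) {
  return function authMiddleware(req, res, next) {
    // The two auth routes from the prompt are exempt.
    if (openPaths.includes(req.path)) return next();

    // Extract the token from "Authorization: Bearer <token>".
    const header = req.headers["authorization"] || "";
    const [scheme, token] = header.split(" ");
    if (scheme !== "Bearer" || !token) {
      return res.status(401).json({ error: "Unauthorized" });
    }

    try {
      req.user = verifyToken(token); // attach the decoded user
      return next();
    } catch {
      return res.status(401).json({ error: "Unauthorized" });
    }
  };
}
```

Wiring it up is then one line: `app.use("/api", makeAuthMiddleware(verify))`. Every branch in the code traces back to a decision the prompt already made.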

“Make it faster”

Performance optimization is context-dependent. Fix it:

The GET /api/products endpoint is taking 800ms to respond. It runs a database query that joins products with categories and reviews. Add query-level caching using node-cache with a 5-minute TTL. Invalidate the cache when a product is created, updated, or deleted. Log cache hits and misses.

Now the AI knows what’s slow, what strategy you want, and the specific requirements around cache invalidation.
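The strategy itself fits in a few lines. This sketch swaps node-cache for a plain `Map` so it has no dependencies, but the hit/miss logging and invalidation logic are the same idea; the function names are invented for illustration.

```javascript
// Sketch of the caching strategy from the prompt, with a Map standing in
// for node-cache. Entries expire after a 5-minute TTL.
const cache = new Map(); // key -> { value, expiresAt }
const TTL_MS = 5 * 60 * 1000;

async function getProducts(fetchFromDb) {
  const entry = cache.get("products");
  if (entry && entry.expiresAt > Date.now()) {
    console.log("cache hit: products");
    return entry.value;
  }
  console.log("cache miss: products");
  const value = await fetchFromDb(); // the slow products/categories/reviews join
  cache.set("products", { value, expiresAt: Date.now() + TTL_MS });
  return value;
}

// Call this from the create, update, and delete handlers.
function invalidateProducts() {
  cache.delete("products");
}
```

The invalidation requirement is the part most worth spelling out in the prompt: a cache that serves stale products after an update is worse than no cache at all.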

“Write tests”

Fix it:

Write unit tests for the calculateShipping function in utils/shipping.js. Use Jest. Test these scenarios: domestic standard shipping, domestic express shipping, international shipping, free shipping threshold ($50+), invalid zip code, and zero-weight items. Each test should use the arrange-act-assert pattern. Mock the lookupZipCode dependency.

The AI now knows the testing framework, the specific scenarios, the pattern to follow, and what to mock. The output will be comprehensive and consistent.

The “What Would a Senior Dev Ask?” Technique

When you’re not sure what constraints to add, imagine handing this task to a senior developer. What questions would they ask before starting?

  • “What error handling do you want?”
  • “Should this be async?”
  • “What’s the return type?”
  • “Are there any existing patterns I should follow?”
  • “What edge cases matter?”

Answer those questions in your prompt. Every question you answer proactively is a round of back-and-forth you skip.

When to Be Brief

Not every prompt needs a paragraph. Short prompts work great when:

  • The change is mechanical: “Rename getUserData to fetchUser across the codebase”
  • The context is obvious: “Add a try/catch to this function” (when you’ve selected the function)
  • You’re iterating: “Also handle the case where the array is empty” (continuing a conversation)

The rule of thumb: be as brief as possible while being unambiguous. If there’s only one reasonable interpretation of your prompt, it’s specific enough.

The Three-Second Test

Before hitting enter on a prompt, take three seconds to ask: “Could a smart developer interpret this differently than I intend?”

If yes, add more specificity. If no, ship it.

This simple habit has saved me more time than any other prompting technique. It catches the prompts that would have produced something wrong — and a bad result that you have to throw away costs more time than the few seconds it takes to write a better prompt.

Start Today

Pick your next coding task. Before you prompt your AI tool, spend 30 seconds thinking about context, constraints, and output format. Write a prompt that’s specific enough that there’s only one reasonable interpretation.

Compare the result to what you usually get. I’m willing to bet it’s noticeably better.

The gap between “AI coding kind of works” and “AI coding is incredibly productive” often comes down to nothing more than the quality of your prompts. And that’s entirely within your control.