The AI Code Comment Anti-Pattern: Why Generated Comments Are Making Your Codebase Worse
Have you ever opened a file and found comments like // This function calculates the total right above a function called calculateTotal()? If you’re using AI coding assistants regularly, you’ve probably seen this more than you’d like to admit.
I’ve been wrestling with this problem in my own projects lately. AI tools are incredible at generating code, but their default approach to comments often creates more noise than signal. After spending way too much time cleaning up AI-generated documentation, I’ve learned some hard lessons about when to keep, modify, or completely skip the comments our AI companions want to add.
Let me share what I’ve discovered about making AI-generated documentation actually useful.
The Comment Pollution Problem
The issue isn’t that AI can’t write comments—it’s that it often writes the wrong kind of comments. Most AI coding assistants default to describing what the code does rather than why it exists or how it fits into the bigger picture.
Here’s a classic example I generated with GitHub Copilot last week:
```javascript
// Check if user is authenticated
function isAuthenticated(user) {
  // Return true if user has token and token is not expired
  return Boolean(user.token && user.token.expiresAt > Date.now());
}

// Get user profile data
async function getUserProfile(userId) {
  // Call API to fetch user data
  const response = await fetch(`/api/users/${userId}`);
  // Parse JSON response
  const userData = await response.json();
  // Return user data
  return userData;
}
```
Every single comment here is redundant. The function names and code structure already tell us what’s happening. These comments add zero value and just create more text to maintain when the code inevitably changes.
But here’s where it gets worse: because these comments feel “complete,” we’re less likely to add the documentation that would actually help. We miss opportunities to explain the business logic, edge cases, or architectural decisions that future developers (including ourselves) will desperately need to understand.
What Makes Comments Actually Valuable
Good comments answer the questions that code can’t. They explain the “why” behind decisions, document non-obvious behavior, and provide context that isn’t immediately clear from reading the implementation.
Here’s how I’d rewrite that authentication example:
```javascript
function isAuthenticated(user) {
  // We check token expiration client-side as a performance optimization,
  // but the server will always validate tokens independently
  return Boolean(user.token && user.token.expiresAt > Date.now());
}

async function getUserProfile(userId) {
  const response = await fetch(`/api/users/${userId}`);
  const userData = await response.json();
  // Note: This endpoint returns cached data that may be up to 5 minutes stale.
  // Use getUserProfileFresh() if you need real-time data for critical operations.
  return userData;
}
```
Now the comments actually tell us something useful. They explain architectural decisions, performance considerations, and important caveats that aren’t obvious from the code itself.
Training Your AI to Write Better Comments
The good news is that AI tools can generate much more useful documentation—you just need to be more specific about what you want. Instead of letting the AI add comments automatically, I’ve started using targeted prompts that focus on the type of documentation I actually need.
Here are some prompting strategies that have worked well for me:
For business logic documentation:
Add comments explaining the business rules and edge cases for this user validation function. Focus on why certain checks exist, not what the code does.
For architectural context:
Document how this module fits into the larger system architecture. Explain any important dependencies or assumptions about data flow.
For maintenance notes:
Add comments highlighting any technical debt, known limitations, or areas where future developers should be careful when making changes.
The key is being explicit about the purpose of the documentation rather than just asking for “comments.”
A Practical Workflow for AI Documentation
I’ve settled into a workflow that helps me get the benefits of AI-generated documentation without the noise. Here’s my current approach:
First, I let the AI generate the initial code without worrying about comments. Then I review the code and identify specific areas where documentation would be genuinely helpful—complex business logic, architectural decisions, or non-obvious optimizations.
For those specific areas, I use targeted prompts like:
For this caching implementation, add a comment explaining:
1. Why we chose this eviction strategy
2. The performance implications
3. Any edge cases developers should know about
This gives me focused, useful documentation instead of generic descriptions of what each line does.
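To make that concrete, here is roughly the shape of output that kind of prompt tends to produce. The ProfileCache class below, its LRU eviction strategy, and the size cap are all invented for this sketch; the point is that the comments answer the three prompted questions instead of narrating each line.

```javascript
// A small in-memory LRU cache for user profiles.
class ProfileCache {
  // Why LRU via a plain Map: Map iterates in insertion order, so deleting and
  // re-inserting a key on each read keeps the oldest entry first. That gives
  // us LRU semantics without pulling in a dependency for a cache this small.
  constructor(maxEntries = 100) {
    this.maxEntries = maxEntries;
    this.entries = new Map();
  }

  get(key) {
    if (!this.entries.has(key)) return undefined;
    // Performance note: get/set are O(1); the re-insert below is what makes
    // this key the "newest" so it survives eviction longest.
    const value = this.entries.get(key);
    this.entries.delete(key);
    this.entries.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.entries.has(key)) this.entries.delete(key);
    this.entries.set(key, value);
    // Edge case: once we exceed the cap, evict the least-recently-used entry.
    // entries.keys().next().value is the oldest key thanks to insertion order.
    if (this.entries.size > this.maxEntries) {
      this.entries.delete(this.entries.keys().next().value);
    }
  }
}
```

Nothing here describes what a line does; every comment records a decision, a cost, or a trap.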
I also make liberal use of the AI’s ability to explain existing code when I’m working with unfamiliar codebases. Instead of generating comments, I ask questions like “What business problem is this solving?” or “What would break if I changed this logic?” The answers often reveal what documentation should be added.
When to Skip Comments Entirely
Sometimes the best AI-generated comment is no comment at all. I’ve learned to be ruthless about deleting AI comments that don’t add value, even when they’re technically accurate.
If the code is self-explanatory through good naming and clear structure, additional comments often just create maintenance overhead. The calculateTotal() example from earlier doesn’t need a comment—it needs better parameter names and maybe a more descriptive return type.
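As a sketch of what "better names instead of a comment" can look like (the function and the order shape here are hypothetical, since the original calculateTotal isn't shown):

```javascript
// Before: the comment exists because the names are vague.
// // Calculate the total
// function calculateTotal(items) { ... }

// After: the signature carries the information, so no comment is needed.
function calculateOrderTotalInCents(lineItems) {
  return lineItems.reduce(
    (totalCents, item) => totalCents + item.unitPriceCents * item.quantity,
    0
  );
}
```

Renaming the function and parameters answers "total of what, in what units?" directly, which is exactly the question the deleted comment was failing to answer.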
I also skip AI-generated comments for code that’s likely to change frequently. Comments and code have a tendency to drift apart over time, and AI-generated comments are often the first casualties when we’re moving fast and making changes.
Building Better Habits
The real solution isn’t better AI comments—it’s being more intentional about when and why we add documentation. AI tools are incredibly powerful allies in this process, but they work best when we’re specific about what kind of help we need.
Start by auditing your recent AI-generated code. Look for comments that just restate what the code already makes clear, and either improve them or delete them entirely. Then experiment with more targeted prompting for the areas where documentation would genuinely help future developers understand your decisions.
The goal isn’t to eliminate AI from your documentation workflow—it’s to use AI to create the kind of comments you’d actually want to read when you’re trying to understand unfamiliar code six months from now.