<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Context Windows on No Semicolons</title><link>https://nosemicolons.com/tags/context-windows/</link><description>Recent content in Context Windows on No Semicolons</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Wed, 13 May 2026 10:14:58 +0000</lastBuildDate><atom:link href="https://nosemicolons.com/tags/context-windows/index.xml" rel="self" type="application/rss+xml"/><item><title>The AI Code Generation Memory Problem: How I Learned to Work Within Context Windows Instead of Fighting Them</title><link>https://nosemicolons.com/posts/ai-code-generation-memory-problem-context-windows/</link><pubDate>Wed, 13 May 2026 10:14:58 +0000</pubDate><guid>https://nosemicolons.com/posts/ai-code-generation-memory-problem-context-windows/</guid><description>&lt;p>Have you ever been deep in a coding session with Claude or GPT-4, everything flowing perfectly, when suddenly the AI starts suggesting code that completely ignores the architecture you spent an hour establishing together? Welcome to the context window memory problem—the invisible wall that every AI-assisted developer eventually hits.&lt;/p>
&lt;p>I used to think this was just a temporary limitation that would disappear with the next model update. Turns out, I was approaching it all wrong. Instead of waiting for infinite context windows (which may never arrive), I learned to work &lt;em>with&lt;/em> these constraints rather than against them. The result? More focused code, better documentation, and, surprisingly, higher-quality output.&lt;/p></description></item></channel></rss>