The Context Wars Are Coming

The race to capture, store, and reuse context might be the biggest opportunity in AI right now.

One area where we humans still hold an advantage over our future robot overlords is building persistent, nuanced context over time. Our brains have evolved to form connections between memories and knowledge across a lifetime, letting us draw interesting and unique insights from lived experience. AI context works more literally: it refers to how much information an LLM can hold in active memory during a conversation.

Today's top models generally operate in the 1 million token range (though some go higher and lower), which comes out to roughly 750,000 words. Holding that much information word-for-word in working memory is an advantage over us humans. But the advantage fades over a long session, as LLMs begin to forget earlier information once it gets pushed beyond the context window.

AI companies are looking to address this limitation with memory features. ChatGPT was early to add memory, which helped create a sense of continuity across conversations, and Google and Anthropic have since rolled out similar features. The combination of context and memory can lead to the best outputs from AI-powered tools. Mastering context isn't just a technical challenge; it's a massive opportunity for the future.

Article
Apple working on MCP support to enable agentic AI on Mac, iPhone, and iPad
9to5Mac 
It looks like Apple is bringing the Model Context Protocol (MCP) into apps via its App Intents framework. It's in the early stages, but the idea is to let LLMs interact directly with apps: rather than implementing their own MCP servers, developers could leverage Apple's App Intents integration. This could open up some powerful workflows when (and if) it's rolled out.

Release
Managing context on the Claude Developer Platform
Anthropic 
Anthropic introduced a new context management API alongside the release of Claude Sonnet 4.5. It helps manage what is actively held in context, clearing out the junk, and works together with memory to enable longer-running agents. For coding, that might mean clearing out stale test results while storing architectural guidance in memory, where it can be retrieved as needed to keep a project's coding standards in play.
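To make the idea concrete, here is a minimal sketch of what "clearing out the junk" looks like in practice. This is not the Anthropic API itself, just a hypothetical helper that prunes old tool results from a conversation history while keeping the regular user and assistant turns intact:

```python
# Hypothetical sketch of context pruning -- illustrates the concept
# behind context editing, not Anthropic's actual API surface.

def prune_context(messages, keep_last_tool_results=1):
    """Drop all but the most recent tool-result messages.

    Regular user/assistant turns are kept; only older entries
    with role == "tool" (e.g. stale test output) are removed.
    """
    tool_indices = [i for i, m in enumerate(messages) if m["role"] == "tool"]
    # Keep only the last N tool results; mark the rest for removal.
    drop = set(tool_indices[:len(tool_indices) - keep_last_tool_results])
    return [m for i, m in enumerate(messages) if i not in drop]


history = [
    {"role": "user", "content": "run the test suite"},
    {"role": "tool", "content": "47 passed, 2 failed"},
    {"role": "user", "content": "fix the failures and run again"},
    {"role": "tool", "content": "49 passed"},
]

pruned = prune_context(history, keep_last_tool_results=1)
# The stale "47 passed, 2 failed" result is gone; the latest remains.
```

The real API does this server-side with configurable rules, but the payoff is the same: long-running agents stop paying token rent on output they no longer need.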

Watch
Chrome Dev Tools MCP Server
Syntax.fm 
Chrome recently released a DevTools MCP server that allows AI coding assistants to interact with a real Chrome browser. It can read logs, debug layout issues, and automate performance tests using Lighthouse. Handling styling bugs has been a real gap for AI coding assistants so far, so letting them interact directly with a browser is extremely useful. The server spins up a fresh Chrome instance, which is an important note: you don't necessarily want an AI coding tool accessing your everyday browser with saved passwords or other sensitive information.
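Wiring it up is a small config change in whichever MCP client you use. The exact config file and key names vary by assistant; this assumes the common `mcpServers` layout and the published `chrome-devtools-mcp` npm package:

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    }
  }
}
```

Running it through `npx` means the server launches on demand and, as noted above, drives its own Chrome instance rather than your daily browser profile.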

Dev Tool
Context7 
LLMs are trained on a snapshot of data, not necessarily up-to-date information. If you try to code against a newer platform or updated library versions, you may get mixed results. Web search in LLMs has helped to an extent. Context7 goes further by offering both an MCP server that serves the latest docs and copy-and-pasteable versions you can drop into any chat agent.
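Setup follows the same MCP client pattern. Package name and config shape are assumptions here (Context7 is distributed by Upstash; check its README for the current invocation):

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```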

Also available as a VS Code extension.

Release
Deploy your own AI vibe coding platform — in one click!
Cloudflare
As part of its birthday week at the end of September, Cloudflare released its VibeSDK platform, which lets you stand up your own vibe coding platform with the click of a button. My head immediately jumps to building an internal vibe coding platform where people can build single-purpose applications and prototypes that follow a specific set of business rules and design elements.

Thanks for reading,
Jason