
Nano Banana Pro
Last week we looked at Cloudflare's Agents Week and the building blocks they shipped to make their platform a serious option for running agents. Just before that, Anthropic launched their managed agents (which we covered in Let the Agents Dream). Not to be outdone, OpenAI and Google joined the party this week. Every major platform now wants to be the infrastructure your agent stack runs on.
Which is great. It also means the bottleneck moves to us humans.
Spotting which of your day-to-day tasks are actually agent-shaped is the next skill to hone. The repeatable items in your week with structured outputs that can be broken down into smaller steps. Those are the ones worth handing off.
That's not as simple as it sounds, of course. We've been able to automate things for years by connecting tools and workflows. The differentiator with AI tools is that they make that setup much easier, usually through natural language. The hard part is identifying the individual items that go together to actually get something done and defining enough structure that a probabilistic AI will do it correctly.
As the agent platforms race to add memory, security, tool calling, and whatever other features agents need to run, it'll be on us to get better at defining the systems for those agents to work in.
📖 Read
Building Effective AI Agents
Anthropic
Some useful guidelines for building agents from Anthropic. Start with the simplest thing that works and only add complexity when it demonstrably improves outcomes. Most of what people call "agents" right now are actually workflows, and that's a feature not a bug.
📖 Read
Your AI agents have one job each. Most builders give them four.
Chris Lema
Chris Lema proposes that LLMs should only be used for three jobs in agentic workflows: generation in an open space, judgment over fuzzy criteria, and extraction from unstructured input. The rest should be engineered around the LLM to keep it guided and moving forward. Identify the tasks LLMs are good at and put code around the things they aren't.
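Lema's split can be sketched in a few lines of code. In this hypothetical expense-routing example, the LLM handles only the fuzzy step (extraction from unstructured input) while deterministic code validates and routes the result. The `llm_extract` function here is a stand-in for a real model call, faked with a regex so the sketch runs on its own; the names and thresholds are illustrative, not from the article.

```python
import json
import re

def llm_extract(email_body: str) -> str:
    """Stand-in for the LLM job: extraction from unstructured input.
    A real implementation would prompt a model to return JSON;
    here a regex fakes it so the sketch is self-contained."""
    amount = re.search(r"\$(\d+(?:\.\d{2})?)", email_body)
    return json.dumps({"amount": float(amount.group(1)) if amount else None})

def validate(fields: dict) -> bool:
    """Crisp criteria belong in code, not in the model."""
    return fields.get("amount") is not None and 0 < fields["amount"] < 10_000

def route(email_body: str) -> str:
    fields = json.loads(llm_extract(email_body))  # LLM job: extraction
    if not validate(fields):                      # code job: enforce the rules
        return "needs_human_review"
    return "auto_approve" if fields["amount"] < 500 else "manager_approval"

print(route("Please reimburse $42.50 for the team lunch."))  # auto_approve
print(route("Expense request, receipt attached."))           # needs_human_review
```

The point of the shape is that the probabilistic component touches exactly one step, and everything before and after it is ordinary, testable code.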
📰 News
Introducing workspace agents in ChatGPT
OpenAI
OpenAI dropped their agent infrastructure for workspaces. Workspace agents are very easy to get started with and can be shared across your Business or Enterprise plan members. That opens up some really cool use cases. You can build an agent from a single prompt and watch as ChatGPT takes over and handles the setup, or start with one of their many examples.
📰 News
Introducing Gemini Enterprise Agent Platform
Google Cloud
Google dropped their own agent platform with build, scale, govern, and optimize layers. Interestingly, Google's platform isn't locked down to Gemini models — Claude Opus, Sonnet, and Haiku are all first-class options in the model garden.
🔧 Tool
Conductor
It has been a minute since I've come across a nice, new app to feature. Conductor is a Mac app that lets you run parallel Codex and Claude Code agents in their own isolated workspaces. It looks like a nice option for getting some of the built-in sandboxing of the Codex or Claude Code desktop apps without the vendor lock-in. I'll be downloading it and testing it out this week.
Thanks for reading,
Jason

