
One of the key things that OpenClaw (aka Moltbot aka Clawdbot) brought to the table was making it easy for everyday users to get into agent orchestration. Give it access to your system, point it at some markdown files, set a cron job to check in, and suddenly you've got an assistant that does things while you sleep. That's an oversimplification of the project as a whole, but it captures the basics of how the "orchestration" happens.
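The "check in on a schedule" piece is just a crontab entry. As a sketch (the `my-agent` CLI and its flags are placeholders, not OpenClaw's actual interface):

```shell
# Crontab entry: at 6am every day, point a (hypothetical) agent CLI at a
# markdown task list and append its output to a log you can review later.
0 6 * * * my-agent --tasks ~/notes/TASKS.md >> ~/agent-runs.log 2>&1
```

The markdown file is the agent's standing instructions; the log is how you audit what it did while you slept.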
When we say orchestrating agents we're talking about getting agents to take on tasks autonomously or even spawn their own sub-agents to tackle pieces of a larger problem.
As agents become more capable, agentic orchestration is a skill worth adding to your repertoire. We've come a long way from using AI as super-powered autocomplete, and honestly, if that's all you're doing with AI as a developer, you're already lagging behind. You should at least be using agents within your IDE (not just auto-completions), if not firing multiple agents off via the CLI in YOLO mode. Basically, if you're not at least dabbling around Stage 5 of Steve Yegge's Gas Town, you should make a plan to get there.
I'm by no means saying that's the only way you should be developing. But the practices are changing and the best way to stay on top of the tools is to use them.
We've gone from prompt engineering to context engineering — making sure your agents have the right information at the right time. AI can be a force multiplier, but it multiplies both the good and the bad. Give an agent great context and clear guardrails and it'll surprise you. Give it a vague prompt, too much context, and root access and, well, you may be racing back to a terminal to stop OpenClaw from deleting your inbox.
There's a good chance a lot of our value going forward will be derived from orchestrating agents rather than writing lines of code ourselves. Let's dig into this week's links.
Article
The A.I. Disruption We've Been Waiting for Has Arrived (The New York Times)
Paul Ford makes the point that we've already hit an inflection point with AI tools in the software industry. We've seen the impact first-hand because it turns out AI tools are good at writing code. You can fire off a quick task from your phone and have a working prototype the next time you check in. It's made writing code cheap, which opens the door for applications across industries that were previously priced out by the cost of code.
Article
Writing code is cheap now
If the bit about code being cheap wasn't clear from the last article, let's really hammer that point home. You can easily ask ChatGPT or Claude to dump thousands of lines of code for you in a few minutes. Simon Willison posits that delivering good code still has a cost. Good code is still going to be important and valuable, especially for enterprise and public-facing applications. AI tools may get better at it, but there are still opportunities for engineers to orchestrate and steer what gets written.
Article
Harness Engineering: Leveraging Codex in an Agent-First World (OpenAI)
A good read (or listen) on how OpenAI used its Codex agents to build software with zero lines of manually written code. They approached the experiment with the goal of letting humans steer and agents execute. The article gets into the tooling they needed in the repo to make sure agents had proper context and could execute autonomously over long periods with no human interaction. After all, the scarcest resource is human time and judgment.
Article
AI Doesn't Reduce Work — It Intensifies It (Harvard Business Review)
Speaking of human time and attention, HBR studied how AI tools changed work habits at a tech company. They found that AI tools let workers expand their task horizon into responsibilities that typically belonged to others. They also identified more multitasking and a blurring of work and non-work time. The researchers describe a self-reinforcing cycle: AI speeds things up, speed raises expectations, expectations widen scope, and suddenly you're doing more work than before. When building a new feature is just a few prompts away, why not send it off after hours or over lunch? Ask me how I know.
Drama
The OpenClaw Saga Continues
Remember OpenClaw? The AI agent that went from weekend project to 196,000 GitHub stars in 90 days? OpenAI acqui-hired its creator, Peter Steinberger, to lead its personal-agents work. The project itself moves to an independent foundation with OpenAI as a financial sponsor.
Meanwhile, the AI companies are drawing lines around their subscription plans. Anthropic cracked down on third-party tools piggybacking on Claude subscriptions. Now Google is doing the same, restricting AI Ultra subscribers who connected via OpenClaw's OAuth. It seems like your safest bet if you want to use OpenClaw on a subscription plan is going with OpenAI.
Watch
Dev Containers for AI Coding Agents
I love a good YOLO prompt giving an agent the keys to run whatever commands it deems necessary to get a task done. However, it's probably not best practice if you aren't OK with a rogue agent wiping your entire machine. Dev containers give your agents a sandbox to work in so they can't reach further than you'd like. This walkthrough covers the setup and even demos a container blocking a malicious script. If you're running agents with any kind of autonomy this is worth 20 minutes of your time.
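If you want a starting point, a minimal `devcontainer.json` is most of the setup. This is a sketch, not the config from the video; the image and post-create command are placeholders for whatever toolchain your agent needs:

```jsonc
{
  // The agent only sees this container's filesystem, not your machine.
  "name": "agent-sandbox",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  // Placeholder: install your agent CLI and project dependencies here.
  "postCreateCommand": "echo 'install your agent CLI here'"
}
```

The workspace mount is the agent's only window into your real machine, so keep it scoped to the project directory rather than your home folder.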
Thanks for reading,
Jason

