Nano Banana Pro, with a Surge can added to give that '90s fan cave feel.

We may have just had a skills issue, but it's a concept that's been top of mind recently. Only slightly edged out by the amount of basketball I've been watching. And yes, I did create a skill to help analyze my March Madness games. No, it did not improve my personal skill at picking games. Even with the improved data.

I've been thinking a lot lately about getting people on our team to adopt skills and how useful they can be. The idea I've latched onto is that skills build on a model's general knowledge by giving it specific expertise to pull out when it's needed. Just like people do in any given situation: if you're playing basketball, you're thinking about dribbling, running plays, making a pass. You're not drawing on your baseball skills.

It's a good concept for helping people understand how skills can impact the results they get from AI tools. And it matters more than it might sound. Context windows are big but not infinite, and stuffing everything an agent might need into a single prompt is how you end up in the dumb zone where the agent starts forgetting earlier decisions. Skills keep things modular: load what you need, when you need it, and get better results in the end.

🔧 Tool
Autoresearch
Andrej Karpathy
This is a small Python script that lets an AI agent run research experiments autonomously overnight. Give it a GPU, a training setup, and a program.md with instructions and constraints, and it'll run roughly 100 experiments while you sleep. This pattern scales beyond research labs, though: anywhere you can have a model grade its own work against specific outcomes and loop continuously is worth exploring. It's the Ralph Loop applied to knowledge work.
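The core of the pattern is simpler than it sounds: propose an experiment, grade the result, keep the best, repeat. Here's a minimal Python sketch of that loop. The details are invented for illustration (a toy scoring function stands in for a real training run, and a random perturbation stands in for the model proposing the next experiment); it's not Karpathy's actual script.

```python
import random

def run_experiment(params):
    """Stand-in for a real training run. This toy objective peaks
    at lr=0.1 and width=64; higher is better."""
    return -((params["lr"] - 0.1) ** 2) - ((params["width"] - 64) / 64) ** 2

def propose(best_params):
    """Stand-in for the model proposing the next experiment:
    perturb the best configuration found so far."""
    return {
        "lr": max(1e-4, best_params["lr"] + random.uniform(-0.05, 0.05)),
        "width": max(8, best_params["width"] + random.choice([-16, 0, 16])),
    }

def autonomous_loop(n_experiments=100, seed=0):
    """Run n_experiments overnight-style iterations, grading each
    candidate and keeping only improvements."""
    random.seed(seed)
    best_params = {"lr": 0.5, "width": 16}
    best_score = run_experiment(best_params)
    for _ in range(n_experiments):
        candidate = propose(best_params)
        score = run_experiment(candidate)
        if score > best_score:  # the self-grading step
            best_params, best_score = candidate, score
    return best_params, best_score
```

Swap the toy objective for a real training run and the perturbation for a model call, and you have the skeleton of the grade-your-own-work loop described above.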

🎥 Watch
A Primer on Using Agent Skills
If you haven't started using skills in your agentic workflows, this is a good place to start. Skills have become a universal format — the same SKILL.md files work across Claude Code, Cursor, Gemini CLI, Codex CLI, and others. They can trigger automatically when the agent recognizes a relevant task or be called explicitly. The real power is keeping your agent's context clean and focused rather than front-loading every instruction into one massive prompt.
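Concretely, a skill is just a folder containing a SKILL.md: YAML frontmatter that tells the agent when to load it, followed by the instructions themselves. A minimal sketch (the skill name and contents here are invented for illustration, riffing on the March Madness example above):

```markdown
---
name: march-madness-analysis
description: Analyze NCAA tournament matchups. Use when the user asks about bracket picks, seeds, or head-to-head comparisons.
---

# March Madness Analysis

1. Pull both teams' season stats before comparing them.
2. Weight recent form (last 10 games) over full-season averages.
3. Flag any pick with a seed differential of 8+ as high-risk.
```

The frontmatter description is the only part the agent scans up front; the body enters context only once the skill triggers, which is exactly the keep-context-clean behavior described above.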

📰 Article
The Question That Changes How You Adopt AI on Your Team
Chris Lema
Lema makes a point I keep coming back to: instead of asking "what tasks take time" you should ask "what decisions do we make repeatedly, and what information do those decisions require?" The first question gets you a list of tools. The second gets you leverage. He extends it further — what decisions are you not making because the information cost is too high? That's where AI creates real value.

Pair it with his piece on how orchestration separates amateurs from experimenters. His argument that "the sophistication isn't in the output, it's in the orchestration" ties back to the skills conversation nicely. Building a 7-stage pipeline with voice profiles and quality frameworks isn't lazy — it's the new version of showing your work.

📔 Standard
AIUC-1
As agents get more autonomy, we're going to need standards for how they behave. AIUC-1 is the first security, safety, and reliability standard specifically for AI agents, developed with folks from Anthropic, Stanford, MIT, and MITRE. It's still early, but if you're building agents for anything beyond personal projects, it's worth keeping on your radar.

✏️ Tool
Google Stitch
Google Labs
Google's AI-powered UI design tool has gone from interesting experiment to something I actually want to use. Describe a screen — text, sketch, or just talk to it — and it generates high-fidelity web and mobile interfaces with usable HTML and CSS.

Thanks for reading,
Jason

Keep Reading