You got human in your AI 📺 Previously on Tech

New AI copyright rules just dropped! Plus iterative vs generative AI.

Applying a little human touch

I was listening to Reid Hoffman’s Possible podcast on my commute this week, and one thing that was said jumped out at me. It was something along the lines of “focus on the positive outcomes of AI adoption,” to poorly paraphrase.

That struck me as a nice way to approach thinking about how AI can be a useful tool in work and life.

I’m personally not ready to turn the keys to my laptop over to an LLM at this point, but I do see the benefits in terms of efficiency when it’s used thoughtfully.

A subsequent conversation with one of my more AI-skeptical colleagues, about the benefits of AI in detecting cancers as opposed to, say, generating a video of George Washington doing a sick ollie heelflip, helped me clarify my stance on AI tools in workflows.

That is, use AI as an iterative tool as opposed to a purely generative one. (Let’s see an LLM make that connection over its lived journey and experiences.)

That’s how I approach using GitHub Copilot as a developer. I don’t see it as a developer replacement, but as a tool that lets me experiment more quickly and iterate on the output to create something unique and useful.

Iteration with generative AI is also how I approach image creation. Take the logo for this newsletter, for example. I used Ideogram to generate some options before finally taking it into Photoshop and putting my own spin on it.

🤿 Dive Deeper

Oh sick! New models just dropped!

OpenAI released a new model, an agent, and a rebrand this last week.

On the model front OpenAI released o3-mini, their latest cost-efficient reasoning model. o3-mini is available for Plus, Team, and Pro members and coming to Enterprise later. I always forget if I’m Plus or Pro because these naming conventions are not great and I’m easily distracted.

The bigger drop, however, was the announcement of their deep research agent, which sounds intriguing, but I haven’t tested it because I’m not paying $200 a month for the (checks notes) Pro plan. Take it away, OpenAI:

Today we’re launching deep research in ChatGPT, a new agentic capability that conducts multi-step research on the internet for complex tasks. It accomplishes in tens of minutes what would take a human many hours.

Its key feature is that it iterates (sensing a theme here) over its own work as it browses the web and completes its research.

One of the demos OpenAI featured in the release pitted deep research (not DeepSeek, totally coincidental name 🙄) against GPT-4o in tracking down a TV show based on the user’s recollection of events that happened in the episode. Now get me some AR glasses that tell me what random show / movie I’ve seen this side character in before so I can focus back on the story and we’re golden. (I spent way too long trying to pinpoint the voice of Dr. Dillamond in Wicked. “It’s Peter Dinklage!” my brain finally yelled a day later.)

Then there’s the aforementioned OpenAI rebrand. It gives me good “fake commercial for a SkyNet-type company” vibes. So pretty on point, I guess.

💿 More Hot Drops

In a bit of unsurprising news, DeepSeek may not be up to snuff from a safety perspective. Researchers said it went 0 for 50 in stopping attacks designed to generate harmful output.

Google made 2.0 Flash available to all Gemini app users and via their API, while also releasing an experimental version of Gemini 2.0 Pro, as well as their own most cost-efficient model, 2.0 Flash Lite.

Hugging Face is already making progress on an open source version of deep research.

📚 Read - OpenAI did an AMA on Reddit
My favorite bit is Sam Altman’s continued crusade against capital letters in the AMA description. And the drop that 4o image generation may be coming in a “couple months.”

🎧 Listen - Possible podcast episode on Super Agency
I mentioned this earlier, but this episode formed the theme behind this issue, as well as my skeet (whatever we’re calling Bluesky posts) to PocketCasts asking for a feature to add voice notes while listening to an episode. 🤞

Term of the Day

Mixture of Experts (MoE) - Not your friendly Springfieldian bar owner, but a technique used in training large language models (LLMs) where multiple specialized sub-models (experts) are trained on different aspects of the data. Only a subset of experts is activated for each input, reducing computational costs and improving efficiency.
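To make that concrete, here’s a minimal toy sketch of the routing idea in plain Python. Everything here (the sizes, the linear “experts,” the random weights) is made up for illustration; real MoE layers live inside transformer blocks and use learned weights, but the core moves are the same: score every expert, keep only the top-k, and blend their outputs.

```python
import math
import random

random.seed(0)

N_EXPERTS, TOP_K, DIM = 4, 2, 8  # toy sizes; real models use far more

# Hypothetical setup: each "expert" is a small linear map (a DIM x DIM
# weight matrix), and the router holds one score column per expert.
experts = [
    [[random.gauss(0, 0.1) for _ in range(DIM)] for _ in range(DIM)]
    for _ in range(N_EXPERTS)
]
router = [[random.gauss(0, 0.1) for _ in range(N_EXPERTS)] for _ in range(DIM)]

def moe_forward(x):
    """Route one token x through its top-k experts."""
    # 1. The router scores every expert for this input.
    scores = [sum(x[d] * router[d][e] for d in range(DIM))
              for e in range(N_EXPERTS)]
    # 2. Sparse activation: keep only the TOP_K highest-scoring experts.
    top = sorted(range(N_EXPERTS), key=lambda e: scores[e])[-TOP_K:]
    # 3. Softmax over just the chosen experts' scores to get mix weights.
    exp_s = [math.exp(scores[e]) for e in top]
    total = sum(exp_s)
    weights = [s / total for s in exp_s]
    # 4. Output is the weighted sum of the selected experts' outputs;
    #    the other experts are never evaluated, which is the cost savings.
    out = [0.0] * DIM
    for w, e in zip(weights, top):
        y = [sum(experts[e][i][j] * x[j] for j in range(DIM))
             for i in range(DIM)]
        out = [o + w * yi for o, yi in zip(out, y)]
    return out

token = [random.gauss(0, 1) for _ in range(DIM)]
print(len(moe_forward(token)))  # prints 8
```

The efficiency win falls out of step 2: with 4 experts and top-2 routing, only half the expert compute runs per token, and that ratio gets much more favorable at real model scale.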

Next Time On

You may have noticed I switched from Previously on AI to Previously on Tech. Or you may not have. The point is I decided I don’t want to focus only on AI because, as a developer, I’m interested in many forms of tech. So expect things like coding, AR goggles, and more in coming episodes.

Hit reply and let me know your thoughts!

Until next time, take it away General Washington.