Is Keyboard Coding Cooked?
Claude, Copilot, and the reality of agentic “vibe coding.”

We're doing things a little differently this week. Instead of diving into five specific links, I’m sharing some thoughts on my experiences with Claude Code and GitHub Copilot’s recent agent-focused releases.
My initial goal was simple: see how much I could vibe-code my way through the $250 in credits Claude handed out to test Claude Code Web.
I’m happy to report I didn’t even come close. After a week of what felt like pretty heavy agentic vibe coding, I still have $211 left.
I also have:
- A Next.js app for managing lists with Better Auth integration
- A proof-of-concept baseball player guessing game in Svelte
- Another Svelte app for generating depth maps via Transformers.js and Depth Anything
- A Swift-based trivia app
- A PR to replace an old library in my WordPress plugin
- A beta premium version of that same plugin
- Another plugin adding book-club features to a WordPress site for my mom
- A Hacker News reader and comment analyzer built with GitHub Spark
The main bottleneck now is my time to test and code-review it all.
The List app has been my main focus. I’m using it as a proving ground to test AI coding tools inside a platform that’s still relatively new to me. I know the core concepts of Next.js, but I don’t have the best practices fully internalized the way I do with WordPress.
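For the curious, the Better Auth half of that stack is pleasantly small. Here’s a minimal sketch of a server-side setup, assuming Supabase’s Postgres is reached through a standard pg Pool; the file path and env var name are illustrative, not prescriptions:

```ts
// lib/auth.ts (illustrative path): a minimal Better Auth server instance.
import { betterAuth } from "better-auth";
import { Pool } from "pg";

export const auth = betterAuth({
  // Better Auth can talk to Postgres directly, so a Supabase
  // connection string through a pg Pool is enough to start.
  database: new Pool({ connectionString: process.env.DATABASE_URL }),
  // Simple email/password auth; social providers can bolt on later.
  emailAndPassword: { enabled: true },
});
```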

A screenshot of my work-in-progress list app, built with Next.js, shadcn/ui, Better Auth, and a heavy helping of Claude Code and GitHub Copilot.
Getting started can be a roadblock. I didn’t have great success getting agents to follow instructions for setting up a Next.js app with shadcn/ui, Better Auth, and Supabase. Even with step-by-step instructions, jumping between multiple install processes made the AI lose the plot and spin out, or at least reach a point where I wasn’t confident it was making the best choices on a platform where I wouldn’t consider myself an expert.
I had much better success scaffolding the bones of the application myself. The more specific and granular my tasks were, the better the output, and the clearer my sense of the codebase and how everything connected.
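The manual scaffold itself was nothing exotic. Roughly these commands, with the caveat that exact package versions and the shadcn prompts will vary:

```sh
# Create the app, then layer the pieces on one at a time
npx create-next-app@latest list-app
cd list-app

# shadcn/ui walks through its own init prompts
npx shadcn@latest init

# Auth and data layers: Better Auth, the Supabase client,
# and pg so Better Auth can reach Supabase's Postgres
npm install better-auth @supabase/supabase-js pg
```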
My process settled into something like this:
When I’m actively coding, I’ll use the Claude Code CLI when I want the AI to implement a specific feature, or VS Code when I want more control or need to debug something myself.
When I’m wrapping up at night, I’ll kick off a few tasks in Claude Code Web, brush my teeth, then hand the newly generated PR to GitHub Copilot for a code review.
Copilot’s reviews have impressed me. It regularly catches issues with Next.js conventions, like using <Link> instead of <a> for internal navigation, as well as potential security or performance problems.
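To make that concrete, here’s the shape of the change Copilot keeps suggesting; the component and route are made up for illustration:

```tsx
import Link from "next/link";

// Hypothetical nav component. Copilot flags plain <a> tags on
// internal routes: they force a full page load, while next/link
// keeps navigation client-side and prefetches the target route.
export function ListsNav() {
  return (
    <nav>
      {/* Before: <a href="/lists">My lists</a> */}
      <Link href="/lists">My lists</Link>
    </nav>
  );
}
```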
Most of the time, I’ll parse the review comments, make inline commits, or add notes and ask Copilot to handle the implementation.
The GitHub Agent HQ interface is slick and makes switching between desktop, mobile, and VS Code easy. I’m really interested in the workflow of discussing a feature in a meeting, kicking off a task to Copilot, and then picking it back up in VS Code at my desk.
I’m not entirely sure how I got access to GitHub Spark on my personal account, but it’s another area with a ton of potential — especially for building single-purpose LLM-integrated apps that can be shared internally in an organization. The Hacker News feed with comment analysis is a tiny example and took about three prompts. One of those prompts was just for a dark-mode toggle.
Overall, the integrations inside GitHub felt pretty tight. Copilot’s code reviews were better than I expected, especially when catching places where Claude hadn’t used the proper Next.js components or drifted from best practices.
I did find Claude Code Web to be very powerful, but it wandered off path at times. That might be on me for trying to shortcut things by pointing my CLAUDE.md at a folder of GitHub instructions.
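For anyone tempted by the same shortcut: Claude Code reads a CLAUDE.md file at the repo root and supports @path imports, so the setup amounted to something like this sketch (the folder and file names are illustrative):

```markdown
# CLAUDE.md

## Project notes
- Next.js App Router, shadcn/ui, Better Auth, Supabase

## Shared agent instructions, pulled in via Claude Code's import syntax
@.github/instructions/nextjs.instructions.md
@.github/instructions/typescript.instructions.md
```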
The bigger takeaway is that AI can generate an impressive amount of code quickly, but it still needs real review to make sure it holds up. The more refined the task, the better the result. The challenge is that refining work requires a real understanding of the framework and codebase, and if AI wrote most of it, you don’t automatically get that understanding for free.
I’m starting to see AI coding tools the way I’ve always viewed WordPress. WordPress lowered the barrier for getting a site online, which has led to plenty of sloppily made, plugin-heavy sites, but it also created a deep, healthy ecosystem of experts who can do excellent work with it. AI is shaping up the same way. It lets more people onto the field, but the people who train, stay curious, and understand the tools at a deeper level will still be the ones who excel.
Thanks for reading,
Jason
And because I can’t help myself, here’s more reading on the tools I used throughout this process:
Claude Code Web
Anthropic’s agentic web-based platform for coding with Claude.
Claude Code Design Plugin
Anthropic’s attempt to make Claude Code better at frontend design and avoid the purple slop problem.
GitHub Agent HQ
GitHub’s interface for working with agentic tools from Anthropic, OpenAI, and others across the platform.
GitHub Spark
GitHub’s “prompt to app” answer to v0, Lovable, and Replit.
Awesome Copilot
A great repo full of instructions to keep agents aligned with best practices across different platforms.