Three Things That Changed How I Work with AI
A year ago, I was burning API credits watching Claude loop endlessly on the same bug. By December, I was shipping features in days that would have taken weeks. The models got dramatically better over that stretch, but that alone didn't account for the change. Three workflow shifts made just as much difference.
Write the Spec First
I spent the first half of 2025 prompting AI agents the way most people do — describe a feature, watch it get built, fix the bugs. The agents would add functionality I never asked for. I'd get something working, then break it adding something else. The code accumulated without a plan, and every new feature fought the last one.
Spec-driven development fixed this. Instead of jumping into code, I describe the full app upfront: user flows, edge cases, data models, architecture, naming conventions. The agent builds toward a coherent architecture from the start instead of bolting features onto a growing mess.
The difference is stark. Without a spec, I'd build a feature, discover edge cases, refactor, find conflicts with other features, refactor again. With a spec, most of that thrashing disappears. The upfront investment in clarity pays back immediately.
I started with a heavy process — a third-party tool called AgentOS that generated dozens of planning documents. It worked, but it was cumbersome and burned through tokens. Eventually I distilled the essentials into a single Claude Code slash command I can run in any project. The tooling matters less than the discipline: define what you're building before you start building.
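For concreteness: a Claude Code slash command is just a markdown prompt file in the project's .claude/commands directory, invoked by its filename. The sketch below shows the shape of such a command, not the author's actual one; the filename and wording are illustrative, though the $ARGUMENTS placeholder is a real Claude Code feature.

```markdown
<!-- .claude/commands/spec.md — invoked in a session as /spec <feature idea> -->
Before writing any code, draft a spec for: $ARGUMENTS

Cover, in order:
- user flows and edge cases
- data models
- architecture and module boundaries
- naming conventions

Stop after the spec and wait for my approval before implementing anything.
```

The value is less in the file itself than in making the spec step impossible to skip: the agent sees the same checklist every time, in every project.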
Speak Instead of Type
A course I took introduced me to smart dictation with Monologue. I didn't think I needed it. I type fast enough. But typing and thinking are different activities, and typing forces you to edit as you go. That editing interrupts your train of thought.
With dictation, I think out loud, explore ideas as I express them, and let AI clean up the stream into precise instructions. The nuance of what I actually mean comes through better when I'm not filtering it through my fingers.
I now dictate specs, feature descriptions, bug reports, even commit messages. The time I used to spend composing and editing gets spent thinking instead. Of all the workflow changes I made this year, this one surprised me the most.
Calibrate Your Code Review
My relationship with reviewing AI-generated code shifted over the year.
At the start, I wanted pure agentic flow — describe a feature, walk away, come back to working code. But early models would loop endlessly on bugs, forcing me deep into the code just to steer them back on track.
Then I swung the other way. A course recommended following along closely with every change. My comprehension improved, but speed collapsed. Rapid prototyping ground to a halt.
Now I match review depth to the situation. During prototyping, I barely look at the code. The agent handles implementation, and when bugs surface, I work with it to fix them. But for critical business logic heading to production, I go line by line.
I also stopped relying on my own review alone. I'll ask multiple models to audit the same code — each catches different things. Combined with automated checks (linting, type checking, credential scanning, tests), problems get caught systematically instead of depending on me reading every line.
The models and the tooling have continued to improve since I established these habits. Better task management, persistent memory, reliable sub-agent delegation — the tools themselves now enforce some of the discipline that used to require a rigorous process. But the underlying principle holds: match your attention to the stakes.
What Changed
The models improved significantly over 2025, and that matters. But better models with a bad workflow still produce bad results. Spec first, dictate instead of type, review at the right depth — these are the changes that turned AI-assisted development from a frustrating experiment into something I build with every day.