Deep Dive
Everything That Broke
139 things went wrong across 170 sessions. Here's every category of failure, what caused it, and what I learned. Transparency is the point.
139 Friction Incidents Across 3 Categories
Despite all of this, 94% of session goals were still achieved.
Buggy Generated Code
68 incidents. The #1 friction point. Claude would write code that looked right but had subtle bugs — undefined nested properties, incorrect method signatures, unsupported syntax for specific framework versions, and API methods that don't actually exist. Each bug required an iteration cycle to catch and fix.
Real examples:
- Used template syntax that worked in one framework version but was unsupported in the version we were actually running
- Called API methods that sounded right but didn't exist — confidently writing code against a hallucinated interface
- Nested property chains like data.data.id that were undefined at runtime because the response shape was different from what was assumed
- Type mismatches that passed the AI's internal reasoning but failed CI — boolean fields where datetime was expected
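The nested-property failures above follow a pattern worth defending against: don't trust the response shape the model assumed. A minimal sketch of that defensive style in Python — the payload shape and the `extract_id` helper are hypothetical, not from any actual session:

```python
from typing import Any, Optional

def extract_id(payload: dict[str, Any]) -> Optional[str]:
    """Defensively pull an id out of an API response whose nested
    shape (e.g. data.data.id) may differ from what was assumed."""
    inner = payload.get("data")
    # Validate each level instead of chaining attribute/key access blindly.
    if not isinstance(inner, dict):
        return None
    value = inner.get("id")
    return str(value) if value is not None else None
```

The point isn't this particular helper; it's that every boundary where generated code meets a real API deserves an explicit shape check, because that's exactly where the hallucinated-interface bugs hide.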
Wrong Initial Approach
49 incidents. Claude would confidently start down the wrong path — targeting the wrong deployment environment, building features nobody asked for, committing directly to the main branch instead of using the PR workflow, or jumping ahead before confirming the approach.
Real examples:
- Targeted the wrong deployment environment because the alias names were similar — wasted 20 minutes before I noticed
- Built an entire integration with a service I don't use because it seemed like a logical next step
- Jumped ahead and started coding when I wanted it to wait for my review of the plan first
- Committed directly to main instead of creating a feature branch, breaking the PR workflow
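The direct-to-main commits are the easiest of these to block mechanically rather than by instruction. A minimal sketch of a pre-push guard in Python — the protected branch names are an assumption, and you'd wire this into `.git/hooks/pre-push` yourself:

```python
import subprocess

# Assumption: these branches should only change via pull request.
PROTECTED = {"main", "master"}

def current_branch() -> str:
    """Ask git which branch we're about to push from."""
    out = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def push_allowed(branch: str) -> bool:
    """Direct pushes are allowed only from non-protected branches."""
    return branch not in PROTECTED
```

Dropped into a pre-push hook that exits non-zero when `push_allowed` is false, this turns a workflow convention the agent keeps forgetting into a hard stop it can't ignore.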
Parallelization & Deploy Conflicts
22 incidents. Running multiple Claude sessions simultaneously is powerful but dangerous. Sessions would fight over git locks, create competing commits, and trigger cascading deployment failures. One session would push a commit while another was mid-build, causing the deploy to fail and requiring manual intervention.
Real examples:
- Two Claude sessions both tried to git push at the same time — git lock file blocked both, requiring manual cleanup
- Four consecutive deploy failures because parallel agents were committing to the same branch simultaneously
- The bash command queue got clogged from too many parallel agents, causing timeouts and lost work
- One session's deploy was cancelled mid-build because another session pushed a new commit
Satisfaction Despite the Friction
Claude's analysis estimated satisfaction levels per session based on my responses, tone, and whether goals were achieved. Even with 139 friction incidents, the overall picture was overwhelmingly positive.
Lessons Learned
Iteration is the strategy, not a failure mode
Getting it wrong the first time isn't a problem when iteration takes 3 minutes. The 139 friction incidents sound bad until you realize they were resolved in an average of 2-3 cycles each. A human fixing 139 bugs would take weeks. Claude fixed them in the same sessions they occurred.
Constraints must be stated, not implied
Claude doesn't know your deployment workflow, your branch strategy, or which environment you're targeting unless you say so. The 49 'wrong approach' incidents almost all came from unstated assumptions. A 30-second briefing at the start of each session would have prevented most of them.
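What does that 30-second briefing look like in practice? A hypothetical example — the environment names, branch convention, and review rule below are illustrative, not a prescribed template:

```
Context for this session:
- Deploy target: staging only (the alias "prod-2" is NOT staging)
- Branch strategy: feature branch + PR; never commit to main
- Framework version: pin to what's in the lockfile; don't upgrade
- Before writing code: show me the plan and wait for my approval
```

Four lines, pasted at the top of each session, covering exactly the four failure modes in the list above.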
Parallelism needs traffic control
Running 6+ sessions per day is incredible for throughput, but only if they don't step on each other. The fix is simple: dedicate branches, batch commits, and never let two agents push to the same repo at the same time. This one change cut my deploy failures by 80%.
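"Never let two agents push at the same time" can be enforced with a single shared lock rather than discipline. A minimal sketch in Python using `fcntl.flock` (Unix-only; the lock-file path and `serialized_push` wrapper are my own naming, not from the post):

```python
import fcntl
import subprocess

# Assumption: all agent sessions on this machine agree on one lock path.
LOCK_PATH = "/tmp/repo-push.lock"

def serialized_push(args, lock_path=LOCK_PATH):
    """Run a command (e.g. a git push) under an exclusive file lock,
    so parallel agent sessions queue up instead of colliding."""
    with open(lock_path, "w") as lock:
        # Blocks until no other session holds the lock.
        fcntl.flock(lock, fcntl.LOCK_EX)
        try:
            return subprocess.run(args, check=True)
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)
```

Each session calls something like `serialized_push(["git", "push", "origin", "agent-1-branch"])`; the second pusher simply waits instead of tripping a git lock error or cancelling the other session's build.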
The 94% number tells the real story
139 things went wrong, and 94% of goals were still achieved. The friction is real but manageable. The question isn't 'does AI make mistakes?' — it's 'does AI make mistakes fast enough that the net output still crushes manual development?' The answer is yes, overwhelmingly.
The Honest Take
AI-assisted development isn't magic. It's a faster version of the same messy, iterative process all software development has always been. The bugs are real. The wrong approaches are real. The deploy chaos is real. But the speed at which you recover from all of it — that's what makes it transformative.