
Deep Dive

Everything That Broke

139 things went wrong across 170 sessions. Here's every category of failure, what caused it, and what I learned. Transparency is the point.

139 Friction Incidents Across 3 Categories

  • Buggy Generated Code: 68 incidents (49% of total)
  • Wrong Initial Approach: 49 incidents (35% of total)
  • Parallelization & Deploy Conflicts: 22 incidents (16% of total)

Despite all of this, 94% of session goals were still achieved.

01. Buggy Generated Code (68 incidents)

The #1 friction point. Claude would write code that looked right but had subtle bugs — undefined nested properties, incorrect method signatures, unsupported syntax for specific framework versions, and API methods that don't actually exist. Each bug required an iteration cycle to catch and fix.

Real examples:

  • Used template syntax that worked in one framework version but was unsupported in the version we were actually running
  • Called API methods that sounded right but didn't exist — confidently writing code against a hallucinated interface
  • Nested property chains like data.data.id that were undefined at runtime because the response shape differed from what was assumed
  • Type mismatches that passed the AI's internal reasoning but failed CI — boolean fields where datetime was expected

What I Changed: Ask Claude to validate its assumptions about APIs and frameworks BEFORE writing code. Smaller, incremental changes catch bugs earlier than large sweeping changes.
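That validation habit can be sketched as a small guard that checks an assumed response shape before touching nested properties. A minimal sketch, assuming an id is expected at data.data.id; the function name and shapes are illustrative, not from the actual sessions:

```python
# Hypothetical sketch: validate an assumed response shape up front
# instead of reaching for data["data"]["id"] and failing deep in the code.
from typing import Any


def extract_id(response: dict[str, Any]) -> str:
    """Return the nested id, failing loudly if the shape differs from the assumption."""
    payload = response.get("data")
    if not isinstance(payload, dict) or "id" not in payload:
        raise ValueError(f"unexpected response shape: keys={list(response)}")
    return str(payload["id"])


# The assumed shape works...
print(extract_id({"data": {"id": "abc123"}}))  # abc123

# ...and a different shape fails immediately with a clear message,
# instead of surfacing as a confusing error several calls later.
try:
    extract_id({"result": {"id": "abc123"}})
except ValueError as err:
    print(err)
```

Failing at the boundary where the assumption lives is what turns a multi-cycle debugging session into a one-line fix.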

02. Wrong Initial Approach (49 incidents)

Claude would confidently start down the wrong path — targeting the wrong deployment environment, building features nobody asked for, committing directly to the main branch instead of using the PR workflow, or jumping ahead before confirming the approach.

Real examples:

  • Targeted the wrong deployment environment because the alias names were similar — wasted 20 minutes before I noticed
  • Built an entire integration with a service I don't use because it seemed like a logical next step
  • Jumped ahead and started coding when I wanted it to wait for my review of the plan first
  • Committed directly to main instead of creating a feature branch, breaking the PR workflow

What I Changed: Be explicit about constraints upfront. State the target environment, the branch strategy, and whether you want a plan before execution. Don't assume Claude will infer your workflow preferences.
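One way to make those constraints executable is a pre-flight check at the top of a session that fails fast on the violations above. A minimal sketch; the environment aliases, branch names, and plan-approval flag are illustrative assumptions, not the author's actual setup:

```python
# Hypothetical pre-flight check encoding the "state constraints upfront" rule.
ALLOWED_ENVS = {"staging", "production"}  # assumed deploy aliases


def preflight(branch: str, target_env: str, plan_approved: bool) -> None:
    """Fail fast on the three workflow violations described above."""
    if branch in {"main", "master"}:
        raise RuntimeError("commit to a feature branch, never directly to main")
    if target_env not in ALLOWED_ENVS:
        raise RuntimeError(f"unknown deploy target {target_env!r}; check the alias")
    if not plan_approved:
        raise RuntimeError("wait for plan review before writing code")


# Passes silently when the constraints hold:
preflight("feature/login-fix", "staging", plan_approved=True)

# Each violation is caught before any work happens:
try:
    preflight("main", "staging", plan_approved=True)
except RuntimeError as err:
    print(err)  # commit to a feature branch, never directly to main
```

The point is not the specific checks but the shape: constraints written down once, enforced mechanically, instead of re-stated (or forgotten) every session.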

03. Parallelization & Deploy Conflicts (22 incidents)

Running multiple Claude sessions simultaneously is powerful but dangerous. Sessions would fight over git locks, create competing commits, and trigger cascading deployment failures. One session would push a commit while another was mid-build, causing the deploy to fail and requiring manual intervention.

Real examples:

  • Two Claude sessions both tried to git push at the same time — git lock file blocked both, requiring manual cleanup
  • Four consecutive deploy failures because parallel agents were committing to the same branch simultaneously
  • The bash command queue got clogged from too many parallel agents, causing timeouts and lost work
  • One session's deploy was cancelled mid-build because another session pushed a new commit

What I Changed: Cap parallel sessions at three or four. Never let two agents write to the same branch. Batch commits instead of pushing after every change. The throughput gains of parallelization are real, but only if you manage the concurrency.
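The traffic-control rules above can be sketched as an advisory file lock that serializes pushes across sessions, combined with batching. The lock path and the stubbed push function are assumptions for illustration; a real session would run a single git push inside the lock:

```python
# Hypothetical sketch: serialize pushes across parallel sessions with an
# advisory file lock (POSIX), so two agents never push at the same moment.
import fcntl
from contextlib import contextmanager
from pathlib import Path

LOCK_FILE = Path("/tmp/agent-push.lock")  # assumed shared lock path


@contextmanager
def push_lock():
    """Block until no other session holds the push lock."""
    with open(LOCK_FILE, "w") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)  # waits if another session is pushing
        try:
            yield
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)


def push_batched(commits: list[str]) -> str:
    # Stub: a real implementation would run `git push` once for the whole
    # batch here, instead of pushing after every individual change.
    with push_lock():
        return f"pushed {len(commits)} commits in one batch"


print(push_batched(["fix: alias check", "feat: preflight"]))  # pushed 2 commits in one batch
```

An advisory lock only coordinates processes that agree to use it, which is exactly the situation here: every session is an agent you configure, so opt-in coordination is enough.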

Satisfaction Despite the Friction

Claude's analysis estimated satisfaction levels per session based on my responses, tone, and whether goals were achieved. Even with 139 friction incidents, the overall picture was overwhelmingly positive.

  • Satisfied or better: 84% (262 sessions)
  • Likely Satisfied: 54% (168 sessions)
  • Dissatisfied: 13% (40 sessions)
  • Frustrated: 3% (9 sessions)

Lessons Learned

Iteration is the strategy, not a failure mode

Getting it wrong the first time isn't a problem when iteration takes 3 minutes. The 139 friction incidents sound bad until you realize they were resolved in an average of 2-3 cycles each. A human fixing 139 bugs would take weeks. Claude fixed them in the same sessions they occurred.

Constraints must be stated, not implied

Claude doesn't know your deployment workflow, your branch strategy, or which environment you're targeting unless you say so. The 49 'wrong approach' incidents almost all came from unstated assumptions. A 30-second briefing at the start of each session would have prevented most of them.

Parallelism needs traffic control

Running 6+ sessions per day is incredible for throughput, but only if they don't step on each other. The fix is simple: dedicate branches, batch commits, and never let two agents push to the same repo at the same time. This one change cut my deploy failures by 80%.

The 94% number tells the real story

139 things went wrong, and 94% of goals were still achieved. The friction is real but manageable. The question isn't 'does AI make mistakes?' — it's 'does AI make mistakes fast enough that the net output still crushes manual development?' The answer is yes, overwhelmingly.

The Honest Take

AI-assisted development isn't magic. It's a faster version of the same messy, iterative process all software development has always been. The bugs are real. The wrong approaches are real. The deploy chaos is real. But the speed at which you recover from all of it — that's what makes it transformative.
