What's Next

825 commits taught me where AI-assisted development works, where it breaks, and where it's heading. From interactive pair programming to orchestrated autonomous pipelines — the shift is already underway.

The Evolution of AI-Assisted Development

Where We Were · 2023–2024

Autocomplete on Steroids

Copilot, inline suggestions, single-file edits. AI helped you write code faster, but you still did all the thinking. It was a typing accelerator, not a development partner.

Single file, single action, human-driven

Where We Are · 2025–2026

AI as Development Partner

Claude Code, Cursor, Windsurf. Multi-file edits, sub-agents, parallel sessions. The AI understands your codebase, makes architectural decisions, and handles iteration loops. You steer, it executes.

Multi-file, multi-agent, human-steered

Where We're Going · 2026–2027

Orchestrated Autonomous Pipelines

Coordinator agents that decompose tasks, spawn specialized sub-agents, manage branch isolation, handle CI failures, and merge clean code — with human review only at merge time. The developer becomes an architect and reviewer, not an executor.

Multi-repo, autonomous, human-reviewed

What I'm Building Toward

Based on the friction patterns from 170 sessions, these are the four capabilities that would have the highest impact on the workflow I've developed.

01

Parallel Agent Swarms

The Problem

Today, running multiple AI agents simultaneously causes git lock-ups, competing commits, and deploy chaos. 22 of my 139 friction incidents came from parallelization conflicts.

The Solution

A structured orchestration pattern where a coordinator agent decomposes work into independent units, assigns each to a sub-agent on its own branch, and a merge agent handles integration. No two agents ever touch the same branch.
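The branch-per-agent rule can be sketched as a planning step. This is a minimal illustration, not any tool's actual API: `AgentTask` and `plan_swarm` are hypothetical names, and "no two agents touch the same branch" is enforced here by rejecting any two tasks whose file sets overlap.

```python
from dataclasses import dataclass

@dataclass
class AgentTask:
    name: str          # unit of work the coordinator carved out
    files: set[str]    # files this unit is allowed to touch

def plan_swarm(tasks: list[AgentTask]) -> dict[str, AgentTask]:
    """Assign each independent task its own branch; reject overlapping work.

    Hypothetical sketch of the branch-per-agent rule: every sub-agent
    gets a dedicated branch, and no file may be claimed twice.
    """
    claimed: dict[str, str] = {}     # file -> branch that owns it
    plan: dict[str, AgentTask] = {}  # branch -> task assigned to it
    for i, task in enumerate(tasks):
        overlap = task.files & claimed.keys()
        if overlap:
            raise ValueError(f"{task.name} conflicts on {sorted(overlap)}")
        branch = f"agent/{i:02d}-{task.name}"
        for f in task.files:
            claimed[f] = branch
        plan[branch] = task
    return plan
```

A merge agent would then integrate the branches in sequence; because the planner guaranteed disjoint file sets, every merge is conflict-free by construction.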

Expected Impact

10x content generation speed without the deploy failures. Instead of 175 pages with occasional chaos, it's 175 pages with zero conflicts.

Current Status

Possible today with careful prompting. Will be built-in within a year.

02

Autonomous CI/CD Loops

The Problem

My enterprise platform sessions burned hours on CI retry cycles — sometimes 7+ rounds of run tests, read error, fix code, re-run. The AI can do this loop, but today I still babysit it.

The Solution

An autonomous agent that runs the full CI suite, parses every failure into structured data (file, line, error type, message), applies fixes, and re-runs. If it hits the same error twice, it changes strategy rather than retrying the same fix.
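The control flow of that loop fits in a few lines. This is a sketch under stated assumptions: `run_suite` and `apply_fix` are hypothetical callables standing in for the real test runner and the AI fixer, and the "same error twice" escalation is modeled as switching from a targeted patch to a broader rewrite.

```python
from typing import Callable

def ci_loop(run_suite: Callable[[], list[str]],
            apply_fix: Callable[[str, str], None],
            max_rounds: int = 10) -> int:
    """Drive a fix-and-retest loop; escalate strategy on a repeated error.

    run_suite returns structured error keys (e.g. "file:line:type");
    apply_fix(error, strategy) attempts a repair. Both are assumptions
    here -- in practice they wrap your CI runner and a coding agent.
    """
    seen: set[str] = set()
    for round_no in range(1, max_rounds + 1):
        failures = run_suite()
        if not failures:
            return round_no              # green suite: rounds it took
        for err in failures:
            # Seeing the same error again means the last fix didn't
            # take: change strategy instead of retrying the same patch.
            strategy = "rewrite" if err in seen else "patch"
            seen.add(err)
            apply_fix(err, strategy)
    raise RuntimeError("CI still red after max_rounds")
```

The key design choice is the `seen` set: it is what turns blind retries into the "change strategy rather than retrying the same fix" behavior described above.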

Expected Impact

10-hour debug sessions compressed to 2 hours. The agent treats CI failures as a to-do list, not a crisis.

Current Status

Partially working today. Needs better error classification to be fully autonomous.

03

Pre-Flight Validation Agents

The Problem

68 of my 139 friction incidents were buggy generated code. Many were catchable before commit — missing framework-specific patterns, incorrect API usage, type mismatches. The AI writes the bugs and then the AI fixes the bugs.

The Solution

A dedicated validation agent that runs after every set of changes, checking against a library of known failure patterns specific to the project. Think of it as a linter that understands your deployment pipeline, your framework quirks, and your past failures.
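In its simplest form, that library of known failure patterns is just a registry checked against every diff. A minimal sketch, with entirely hypothetical example patterns (the real library would be grown from a project's own incident history):

```python
import re
from dataclasses import dataclass

@dataclass
class FailurePattern:
    name: str
    regex: str    # what past failures looked like in the diff
    advice: str

# Hypothetical project-specific patterns, accumulated from past failures.
PATTERNS = [
    FailurePattern("raw-window-access", r"\bwindow\.",
                   "guard browser globals for server-side builds"),
    FailurePattern("hardcoded-localhost", r"localhost:\d+",
                   "use the configured base URL, not localhost"),
]

def preflight(diff_text: str, patterns=PATTERNS) -> list[str]:
    """Return one warning per known failure pattern found in the diff."""
    return [f"{p.name}: {p.advice}"
            for p in patterns
            if re.search(p.regex, diff_text)]
```

Running `preflight` as a pre-commit hook is what makes the agent "learn": every shipped bug becomes one more `FailurePattern`, so the same mistake gets flagged before it reaches the pipeline again.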

Expected Impact

Eliminates entire categories of recurring bugs. The validation agent learns from every failure, so the same mistake never ships twice.

Current Status

The building blocks exist today (hooks, custom skills). Full implementation coming.

04

Cross-Session Memory & Context

The Problem

Each Claude Code session starts fresh. I re-explain my branch strategy, my deployment workflow, and my framework constraints every time. The 49 'wrong approach' incidents came mostly from Claude not having context it should have already learned.
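One way to picture the missing piece: a small on-disk memory that survives sessions. Everything here is an assumption for illustration only, including the `.ai/project_memory.json` location and both function names; it is not how any current tool stores context.

```python
import json
from pathlib import Path

MEMORY_FILE = Path(".ai/project_memory.json")   # hypothetical location

def record_lesson(category: str, lesson: str) -> None:
    """Append a learned pattern so future sessions start with it."""
    memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    memory.setdefault(category, [])
    if lesson not in memory[category]:          # skip duplicates
        memory[category].append(lesson)
    MEMORY_FILE.parent.mkdir(parents=True, exist_ok=True)
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def session_preamble() -> str:
    """Render accumulated memory as context for a new session."""
    if not MEMORY_FILE.exists():
        return ""
    memory = json.loads(MEMORY_FILE.read_text())
    return "\n".join(f"[{cat}] {lesson}"
                     for cat, lessons in memory.items()
                     for lesson in lessons)
```

A session would call `record_lesson` whenever a fix lands and prepend `session_preamble()` to the next session's context, which is what closes the "context it should have already learned" gap.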

The Solution

Persistent project memory that accumulates across sessions — not just instructions in a config file, but learned patterns, past failures, and workflow preferences that automatically inform every new session.

Expected Impact

The AI gets smarter about your specific project over time. Session 200 is dramatically better than session 1, not because the model improved, but because the context did.

Current Status

Partially available via CLAUDE.md and memory systems. Getting better fast.

Predictions

6 Months

Agent orchestration built into tools

Branch-per-agent, automatic merge conflict resolution, and deploy sequencing will be first-class features, not workarounds.

1 Year

Autonomous CI loops are standard

AI agents will handle the full test-fix-retest cycle without human intervention. You'll only get pinged when something genuinely novel goes wrong.

2 Years

Solo developers ship at team scale

One developer with orchestrated AI agents will consistently output what a 5-person team does today. The bottleneck shifts from coding to product decisions and architecture.

5 Years

Software development is unrecognizable

Most code is written, tested, deployed, and maintained by AI systems. Developers describe what they want in natural language, review the output, and focus on the parts that require genuine human judgment — user empathy, business strategy, creative direction.

The Takeaway

We're at the “email replaced fax machines” stage of AI-assisted development. The people who figure out the workflows now — who learn to steer AI agents effectively, who build the muscle memory for iteration-over-perfection — will have an enormous advantage as these tools mature. The gap between AI-native developers and everyone else is going to get very wide, very fast.
