Four Claudes and a Gantt
Open DevTools on cloudnimbusllc.com/mf/delivery-timeline-v10 and you'll see this line print:
[cn-edit v0.1.0] editable timeline ready.
Drag bars or call window.__cnEdit.help() for the programmatic API.
Type window.__cnEdit.help(). Eleven verbs print with one-line docs. Type window.__cnEdit.moveTask('wi-001', '2026-04-20', '2026-04-22'). The bar moves. The dates persist through a reload.
Now open DevTools inside a Salesforce org on Delivery Hub. Different domain. Different DOM. Shadow roots everywhere. Same banner. Same eleven verbs. Same one-line call. The bar moves there too, through a different data path, against a real Salesforce custom object you can query with SOQL ten seconds later to confirm the dates changed.
Same API. Different DOM. That's the whole trick.
This is a post about how we got there — me and four Claudes, across three repos, over eight days. It's also a post about two bugs that taught me something. One returned silently from three files deep. The other built a Möbius strip in a React reducer and ran forever until the browser got tired and gave up without throwing. Both taught me something about debugging when the "engineer" on the other end of the console is an LLM agent that can't see the screen.
The cast
Four Claude Code instances, running concurrently:
- HQ — a chat window on my desktop. No direct filesystem action. Writes specs, drafts paste-ready messages for the other three, synthesizes status across repos. Think of it as the tech lead running standup.
- NG CC — rooted in C:\Projects\nimbus-gantt. Owns the Gantt library source. Builds the IIFE bundle. Ships dist/*.
- CN CC — rooted in C:\Projects\cloudnimbusllc.com. Owns the Next.js 16 / React 19 marketing-engineering site and its vendored copy of the template framework.
- DH CC — rooted in C:\Projects\Delivery-Hub. Owns the Salesforce managed package — LWC, Apex, static resources, deploy pipeline.
The product: a JavaScript Gantt chart that runs identically in three drastically different hosts. A Salesforce Lightning Web Component inside a managed package. A Next.js server-rendered page on a public marketing site. A Visualforce + Lightning Out mount that bypasses the FlexiPage chrome and renders the Gantt alone. Same IIFE bundle. Same config. Three mount paths.
Enterprise Gantt libraries — Bryntum, DHTMLX, Gantt.js — generally assume one DOM, one host, one rendering stack. Nimbus-gantt assumes the opposite: the host is hostile and different every time. That assumption paid off when the third surface (the chromeless VF+LO mount) became necessary to hide Salesforce's FlexiPage header without forking the shared codebase. Same engine, different gate.
The Gantt ships inside Delivery Hub, the Salesforce managed package, as the headline feature. WorkItem__c records — the custom object at the heart of every delivery team's planning — render as draggable bars with dates, priority lanes, parent-child hierarchy, and hour/budget rollups. Salesforce's native record pages are built for data integrity; they're not built for planning. Most teams solve the gap by shipping tickets into a second system (Jira, Linear, Asana) just to get a timeline view, and then spend the rest of the year reconciling the two. Delivery Hub puts the timeline on top of the Salesforce data other tools are re-copying.
The twist: same Gantt runs on cloudnimbusllc.com's /mf/* pages too, as a public-facing planning view for customers and partners who don't have Salesforce seats. Drag a bar on the web, the patch rides back to Salesforce. That's the long arc. As of 0.184 we're most of the way there.
The four-window orchestra
The coordination pattern matters more than the architecture here, so let me slow down for a paragraph.
This isn't a multi-agent "swarm." The four Claudes don't talk to each other. They can't. Each one is a Claude Code instance with its own filesystem root, its own context window, its own memory file. Nothing they do is visible to the others until I paste it.
It's also not a single-agent-with-sub-tasks setup. The three CCs don't share memory. NG CC has no idea DH CC exists. DH CC doesn't know what file path CN CC's template vendor directory lives at. Each one is a specialist with deep focus on one repo.
The pattern is more like a small engineering team where the tech lead runs a standup between specialists by relaying written updates. I'm the network switch. HQ's job is to produce paste-ready output for each specialist — no human edit pass required. When HQ hands NG CC a packet, the format is: one-sentence scope, a link to a spec file, key decisions inline, expected build artifacts, suggested commit message. NG CC executes, reports back. I relay the report to HQ. HQ synthesizes.
A few rules emerged the hard way:
- Every message between windows is paste-ready. If HQ drafts a dispatch and I have to edit it before sending, HQ just burned my turn. That discipline turned out to be the single largest productivity unlock.
- Single-line JS only for console probes. Multi-line paste into some DevTools variants trips "Unexpected identifier" errors on seemingly-fine code. Wrap everything in an IIFE.
- Shadow DOM is the default on Salesforce, not the exception. document.querySelectorAll('.my-thing') doesn't pierce LWC shadow roots. Any probe that needs to find something in DH starts with a recursive shadow-root walker.
- Scratch-first iteration, then cut. Never cut a package without the human saying "cut it." The managed-package install flow is slow enough that a mis-cut costs fifteen minutes of real time.
- A bedtime signal. When I tell HQ it's bed time, HQ gives a terse status and stops dispatching new asks. Without that explicit flag, the orchestrator would queue work indefinitely.
- Memory as an index, not a dump. Each CC's MEMORY.md is a list of one-line hooks pointing at topic files. The detail lives in the topic files, not the index. Every durable fact has a home; the home is discoverable in one scan.
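The shadow-DOM rule above can be sketched as a short recursive walker. This is an illustrative version, not the project's actual probe; `deepQueryAll` is a hypothetical name, but the shape — query the current root, then recurse into every open shadow root — is the standard workaround for LWC's shadow boundaries:

```javascript
// Recursive shadow-root walker: collects every element matching `selector`,
// descending into open shadow roots that a plain querySelectorAll can't pierce.
// (Sketch only — name and usage are illustrative, not the project's probe.)
function deepQueryAll(selector, root = document) {
  const found = [...root.querySelectorAll(selector)];
  for (const el of root.querySelectorAll('*')) {
    // Only open shadow roots are reachable; closed ones stay invisible.
    if (el.shadowRoot) found.push(...deepQueryAll(selector, el.shadowRoot));
  }
  return found;
}

// Paste form, per the single-line IIFE rule above:
// (() => console.log(deepQueryAll('.my-thing').length))();
```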
I don't think this pattern works for every kind of project. It works for this one because the three repos have real boundaries — different languages, different deploy cadences, different test surfaces. A single CC trying to hold all three in context would thrash. Three specialists hitting their narrow domain hard, with a fourth one holding the map, is faster. It plays to current-generation agents' strengths (focused context, deep single-domain work) instead of fighting them.
The silent swallow
On April 18 evening, a regression shows up on the Delivery Hub embedded surface. User drags a bar. The bar moves. No Apex save fires. No console log. No error.
(I'd written about a related bug on this same surface five days earlier — the one where drag-save worked visually but the task snapped back on rerender. We thought that was the last of it. It was not.)
This is the kind of bug where the first ten minutes are wasted on the wrong question. Is Apex throwing and we're swallowing the error? Is the network request being blocked? Is there a CSP rule rejecting the fetch? No, no, and no.
HQ and NG CC paired on the diagnosis. The detective work went like this.
First probe: wrap window.fetch and XMLHttpRequest.prototype.open to log every outgoing request with a [DH *] prefix. Drag a bar. Zero network activity. So the call isn't happening at all — it's not that Apex is rejecting it.
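A probe in that shape might look like the sketch below. `installRequestLogger` is an illustrative name — the real probe was a one-line IIFE pasted straight into DevTools — but the technique is the same: wrap `fetch` and `XMLHttpRequest.prototype.open` so any outgoing request announces itself before it leaves:

```javascript
// Probe sketch: wrap fetch and XMLHttpRequest.open on a host object so every
// outgoing request logs with a [DH *] prefix before the original call runs.
// (Hypothetical helper name; in DevTools you'd inline this against `window`.)
function installRequestLogger(host) {
  const origFetch = host.fetch;
  if (typeof origFetch === 'function') {
    host.fetch = function (...args) {
      console.log('[DH fetch]', String(args[0]));
      return origFetch.apply(this, args); // forward untouched
    };
  }
  const XHR = host.XMLHttpRequest;
  if (XHR && XHR.prototype) {
    const origOpen = XHR.prototype.open;
    XHR.prototype.open = function (method, url, ...rest) {
      console.log('[DH xhr]', method, String(url));
      return origOpen.call(this, method, url, ...rest);
    };
  }
}

// Paste form: (() => installRequestLogger(window))();
```

If a drag produces zero `[DH *]` lines after this is installed, the request is never being made — which is exactly what the first probe established.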
Second probe: monkey-patch the reducer dispatch in the LWC to log every action. Drag a bar. Zero dispatches. So the reducer isn't even being invoked. Something earlier in the chain is eating the event.
Third probe: add a raw console.log('[NG onTaskEditAsync] called', taskId) inside the IIFE source, rebuild, redeploy. Drag a bar. The log fires. Good — the engine's callback is reaching the app layer. Now track what happens next.
The onTaskEditAsync handler in IIFEApp.ts looks at the task id the engine sent, tries to look it up in the host's allTasks array, and early-returns if it can't find a match:
const idx = allTasks.findIndex(t => t.id === taskId);
if (idx === -1) return;
The intent was defensive. "If the engine hands us a task we don't know about, bail." The reality was a silent swallow. The engine had just drag-completed on a task whose internal state advanced before the host's _tasks array was refreshed. The guard matched. The function returned. The dispatch never fired. No error. No log.
Legacy onTaskPatch — the non-async predecessor — had always called the host regardless, even on divergence. The new async path broke that contract in pursuit of defensiveness we didn't need.
The fix was ten lines. When divergence is detected, still call the host's onItemEdit(taskId, changes) (or fall through to rawOnPatch if no async handler is wired). You lose the ability to revert cleanly — no originals captured, the optimistic state is already gone — but the host's error toast still fires, and the next mount-state poll pulls fresh data. Which is the same outcome Legacy had. The async path had given itself permission to be too clever.
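The fix described above, sketched with simplified signatures — the real handler lives in IIFEApp.ts, and the `host` wiring here is illustrative, not the shipped interface:

```javascript
// Sketch of the 0.183.2 fix (names follow the post; shapes are simplified).
// On id divergence we no longer early-return silently — we still hand the
// edit to the host so its error toast and next refresh can do their jobs.
function onTaskEditAsync(taskId, changes, host, allTasks) {
  const idx = allTasks.findIndex((t) => t.id === taskId);
  if (idx === -1) {
    // Divergence: the engine knows a task the host snapshot doesn't (yet).
    // Old behavior was a bare `return;` — the silent swallow.
    if (typeof host.onItemEdit === 'function') {
      return host.onItemEdit(taskId, changes);
    }
    // Fall through to the raw patch path if no async handler is wired.
    return host.rawOnPatch ? host.rawOnPatch({ taskId, changes }) : undefined;
  }
  // Normal path: known task, dispatch exactly as before.
  return host.onItemEdit(taskId, changes);
}
```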
Shipped as 0.183.2. The writeup for the CC that received the fix spec ran longer than the diff.
The lesson isn't "don't early-return." The lesson is: when you add an early-return to a function that used to always call a collaborator, you have silently changed the collaborator's contract. That's a breaking change disguised as a safety check. If the early-return path is silent, you have a silent breaking change, which is the worst kind.
The pull quote I keep coming back to: the bug was a silent return statement three files deep. The fix was a console.log.
The Möbius reducer
0.183.2 landed. A day later, the next regression on the CN side.
The symptom was similar — drag-save broken — but the shape was different. On CN's v12 mount, drag-save worked once. Reload the page, the edit was gone. Drag a second bar, the edit was gone too. Every drag after the first appeared to succeed visually and then vanish.
NG CC and I traced it together. The patch was reaching onPatch; a [CN onPatch] received log confirmed it. But localStorage wasn't updating after the second drag. proForma.updateDates — the thing that writes the persisted state — wasn't being called.
Strip down to a minimal repro: drag one bar. Watch the console. The [CN onPatch] received log fires. Then fires again. Then forty-seven more times. Then React prints Maximum update depth exceeded. This can happen when a component calls setState inside useEffect... and the patch silently fails.
Trace the chain:
1. The drag fires onPatch(patch) on the slot.
2. onPatch dispatches { type: 'PATCH', patch } to the reducer.
3. The reducer's wrapper — my wrapper in NimbusGanttAppReact.tsx — was designed to forward PATCH actions back to the host's onPatch ref.
4. So it called onPatchRef.current(ev.patch).
5. Which dispatched another PATCH.
6. Go to step 3.
The wrapper was forwarding every PATCH event to the host, including PATCH events that originated from the host calling onPatch in the first place. A Möbius strip.
Eventually the recursion depth exceeded React's threshold, React silently bailed out of the render, and the last patch never made it to the persistence layer. No thrown error. React's overflow protection is a warning, not an exception. The browser kept running. The user saw a drag that seemed to complete and then didn't persist. Every time.
The fix was a sentinel flag — _isForwardingPatch — that suppresses the recursive call for one frame. A cleaner design would have been: don't dispatch PATCH from inside onPatch at all; only dispatch when the event originated from a slot. Either works. We shipped the sentinel because it was the smaller diff to an already-shipping module, and 0.184 had a deadline.
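A minimal sketch of the sentinel pattern, assuming a dispatch shape like the one in the trace above — names follow the post, but the real wrapper in NimbusGanttAppReact.tsx is more involved than this:

```javascript
// Sentinel-flag sketch: forward PATCH events to the host's onPatch ref, but
// suppress the re-entrant call that fires when the host's onPatch dispatches
// PATCH right back — the Möbius loop from the trace above.
function makePatchForwarder(onPatchRef) {
  let _isForwardingPatch = false;
  return function forward(ev) {
    if (_isForwardingPatch) return; // re-entrant: this PATCH came from the host
    _isForwardingPatch = true;
    try {
      onPatchRef.current(ev.patch); // may synchronously re-dispatch PATCH
    } finally {
      _isForwardingPatch = false;   // release once the forwarding unwinds
    }
  };
}
```

The cleaner alternative named above — only dispatch when the event originated from a slot — removes the need for the flag entirely, at the cost of threading origin information through the event.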
Shipped as 0.183.4, rolled into the 0.184 bundle.
React doesn't throw on infinite recursion. It gives up silently. That's worth internalizing. In 2026, the answer to "why is my drag-save broken?" can turn out to be "you wrote a Möbius strip three weeks ago and it finally triggered under fast drags." The symptom was drag-save regression. The cause was a loop the reducer ate.
If I had one habit to evangelize from these two bugs, it's this: when a UI interaction silently no-ops, assume something is eating events. Look for early returns, silent catches, and recursion-capped dispatchers before you look anywhere else. The runtime no longer raises exceptions for these cases. You have to set your own traps.
The __cnEdit bridge
The best story of the release is this one, and it started as a debug shortcut.
Most of my diagnostic work on this codebase runs through DevTools console. I paste a one-liner, watch what happens, iterate. Over months of this, I noticed two things. First, the probes I kept writing mapped to a small set of verbs: "move this task," "change its group," "reorder," "submit the batch." Second, every one of those verbs already existed inside the app as state mutations that a drag or click triggers — I was writing bespoke JavaScript to reach them through back doors.
What if we just exposed the core verbs on window?
CN CC built the prototype. A React hook, useCnEditBridge.ts, publishes window.__cnEdit on mount with a fixed verb set: help, whoami, getState, getOverrides, moveTask, moveToGroup, reorder, setParent, editItem, submit, reset. A welcome banner prints to console on mount. The API is thin by design — every verb routes to the same state-mutation path a drag/click already uses. No new persistence surface. moveTask ends up in the same localStorage-update path a drag ends in. submit ends up in the same POST body a submit button triggers.
The result is that a console user can drive the app end-to-end without touching the UI. Paste window.__cnEdit.moveTask('wi-001', '2026-04-20', '2026-04-22') and the bar moves, the patch fires, the localStorage persists. Same code path as a drag. Same outcome.
DH CC then ported the bridge to the LWC as _installCnEditBridge() and _uninstallCnEditBridge() methods, published into window.__cnEdit from the LWC's connectedCallback. Same verb set. Every mutation routes through DH's existing _handlePatch → existing Apex. No new DML surface. If you trust _handlePatch — and we do, because a drag uses it — you trust __cnEdit.
The test we didn't know we wanted: window.__cnEdit.help() now works identically on both hosts. The output is byte-for-byte the same. A developer who learned the API on cloudnimbusllc.com can sit down in front of a Salesforce org and drive Delivery Hub from the Salesforce-domain DevTools with zero context switch.
Here's the part I didn't see coming. That same API is also how I drive the app from HQ. When HQ wants to verify a fix, it doesn't ask me to "drag a bar and check if it persists." It writes a one-line console probe:
window.__cnEdit.moveTask('wi-001','2026-04-20','2026-04-22');
Which I paste and run. HQ reads the result in my next relay. The debug affordance became a verification affordance became an automation surface. Future automation — scheduled reports, integration tests, an agent that triages incoming tickets and re-sequences them — has a single stable verb set to call.
The recipe, if you want to steal it:
- Look at what mutations a drag or click already triggers in your state store.
- Name each one with a verb.
- Publish them on window.__yourApp from a React hook or LWC connectedCallback.
- console.log a banner so discovery is obvious to anyone who opens DevTools.
- Ship help() that prints the API itself. Stay discoverable.
Cost: a thin hook. Yield: scripting, automation, LLM-driven UX, human debugging, and instrumented testing all share one surface. You don't have to know up front which of those uses will matter. You just expose the verbs.
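The whole recipe fits in one small function. This is a generic sketch, not nimbus-gantt's bridge — `__yourApp`, the verb names, and `dispatch` are illustrative, while the real useCnEditBridge.ts wires into its own store the same way:

```javascript
// Recipe sketch: name the mutations your UI already triggers, publish them as
// verbs on the host object, and print a banner so the bridge is discoverable.
// (All names here are illustrative, not the shipped __cnEdit API.)
function installEditBridge(host, dispatch) {
  const api = {
    // Each verb routes to the same state mutation a drag/click already uses.
    moveTask: (id, start, end) =>
      dispatch({ type: 'PATCH', patch: { id, start, end } }),
    help: () => {
      // help() prints the API itself, so the bridge stays self-documenting.
      Object.keys(api).forEach((verb) => console.log(verb + '()'));
      return Object.keys(api);
    },
  };
  host.__yourApp = api;
  console.log('[your-app] edit bridge ready. Try window.__yourApp.help() for the verb list.');
  return api;
}
```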
What's next
0.184 shipped this week. 0.185 is mid-implementation; 0.186 is scoped but not started.
0.185 is batch-mode. Today on DH every drag is a per-patch write — one edit, one Apex DML. Batch-mode collects a series of drags into a pending set, renders the audit preview modal we shipped in 0.184 (currently dormant on DH because pendingChanges is empty), lets the user review the diff, then commits the batch as a single unit. Same modal code as CN; same verb set; zero new DOM.
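Since 0.185 is still mid-implementation, here is only a hedged sketch of the batch-mode shape described above — every name is illustrative, not the shipped API. Drags accumulate into a pending set, the preview feeds the audit modal, and commit flushes everything as one unit:

```javascript
// Hedged sketch of batch-mode (0.185 is mid-implementation; names invented).
// stage() collects per-task edits with last-write-wins merging, preview()
// yields the diff for the audit modal, commit() flushes one batch write
// instead of one DML per drag.
function makeBatch(commitFn) {
  const pendingChanges = new Map(); // one entry per task id
  return {
    stage(taskId, changes) {
      // Merge successive drags on the same task into a single pending entry.
      pendingChanges.set(taskId, { ...pendingChanges.get(taskId), ...changes });
    },
    preview() {
      // Shape consumed by a review UI: one row per touched task.
      return [...pendingChanges.entries()].map(([id, ch]) => ({ id, ...ch }));
    },
    commit() {
      const batch = [...pendingChanges.entries()];
      pendingChanges.clear();
      return commitFn(batch); // single write for the whole set
    },
  };
}
```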
0.186 is the audit tab — a first-class view on every WorkItem__c that reads Salesforce's native Field History Tracking and surfaces every committed batch with a human-readable note. "Who moved this?" gets an answer that doesn't involve spelunking through Jira or a weekly status email.
0.190 is where the two-homes story closes: patches on cloudnimbusllc.com's public planning view write back through to Salesforce as the system of record. Same IIFE. Same verb set. Different round-trip. That's the cut where "same Gantt in two homes" stops being a design aspiration and starts being a production primitive.
I'll write those arcs as they ship.
Reflection
If someone asked me what I'd change about the four-window model, I'd say two things.
I should have written the briefs sooner. brief-nimbus-gantt.md, brief-delivery-hub.md, brief-mobilization-funding.md — these are the durable "how this CC operates" documents. What files it owns, what deliverables it ships, what never to modify. I wrote them on April 17, seven days into the arc, and every session since has opened with the CC reading its own brief before touching anything. Sessions before that were slower, messier, and prone to re-asking the same "what's in this repo?" question. Memory as an index pointing to detail files, not memory as a dump, is the pattern that scales. The index lives at the top of every session; the detail files live alongside the code they describe.
I should have built __cnEdit earlier. It would have saved hours of bespoke console scripting in the preceding weeks. The reason I didn't is the reason no one builds abstractions in advance — I didn't know the shape of the verb set until I'd written the bespoke scripts enough times to see it. That's probably fine. The lesson is to watch for the moment the ad-hoc work starts repeating, and then lift it into a stable contract. Not before; not much after.
The coordination pattern worked because the three repos had real boundaries. If they were the same language, same stack, same deploy cadence, a single CC could probably have held it all in context and the orchestration overhead would be net-negative. It's the heterogeneity that makes the specialists worth the switching cost. Your mileage will vary depending on how heterogeneous your terrain is.
Four Claudes, one human, three repos, eight days. One bug eaten by a silent return. One bug eaten by an infinite recursion React doesn't raise on. One debug bridge that turned into a product bridge. Three surfaces of the same Gantt converging on one verb set.
Next dispatches will cover batch-mode and the audit tab as they ship.
Business framing of the same release — what drag-to-reschedule unlocks for Salesforce delivery teams — is on the cloudnimbusllc.com blog.