Posts

We Are Legion, We Are Building the Bobiverse: What the Bobiverse Tells Us About the Agentic Software Industry

The Bobiverse book series accidentally wrote the best conceptual map for what the software industry is building right now. Here's why.


I’ve finally gotten around to reading the Bobiverse series by Dennis E. Taylor, and somewhere around the third time Bob replicates himself to explore a new solar system, I had one of those “wait a minute” moments. Not about the books - they’re great, funny, and hold up well. The moment was about the software industry.

We are building the Bobiverse. Not intentionally, not with the books in mind, but by converging on the same architecture because the problems are the same. That’s what I want to talk about.

A Quick Primer for the Non-Bobs

If you haven’t read the series: Bob Johansson is a software engineer who dies in an accident, wakes up as a digital mind running on a spacecraft, and discovers he can replicate himself. Each copy starts as the same Bob but diverges over time through different experiences. They form a loose network - BobNet - that spans solar systems, communicates through message passing, and coordinates on civilization-scale problems. Each Bob has an AI assistant (called an AMI, like GUPPI) that handles routine tasks. They have sensors, tools, and the ability to build physical things. They are, in every meaningful sense, autonomous agents operating in a distributed system.

Now go look at what teams building agentic software platforms are actually constructing. The overlap is not subtle.

The Stack, Layer by Layer

The soul of an agent - what it IS rather than what it runs on - lives in a file. For a Bob, that’s the original mind scan. For a software agent, it’s an AGENTS.md or system prompt file: the character, values, constraints, and capabilities of the agent, written down, version-controlled, forkable. You can replicate a Bob at a known checkpoint. You can fork an agent definition, give it a slightly different personality or toolset, and run it in parallel. Same idea.
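
The fork-at-a-checkpoint idea can be made concrete in a few lines. This is a sketch under assumptions: the AGENTS.md sections, agent names, and tool names below are invented for illustration, not a standard schema.

```python
# Hypothetical soul file - the sections and names are illustrative,
# not a standard AGENTS.md schema.
ORIGINAL_SOUL = """\
# Agent: bob
## Values
- Prefer reversible actions.
## Tools
- bash (read-only)
"""

def fork_soul(original: str, name: str, extra_tools: list[str]) -> str:
    """Fork an agent definition: same lineage, different mission."""
    forked = original.replace("# Agent: bob", f"# Agent: {name}")
    return forked + "".join(f"- {tool}\n" for tool in extra_tools)

# A fork is cheap: tweak the identity, grant a different toolset,
# and run it in parallel with the original.
riker_soul = fork_soul(ORIGINAL_SOUL, "riker", ["playwright"])
print(riker_soul)
```

Because the soul is plain text, the fork is just a diff - which is exactly what makes it version-controllable.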

Wrapping that soul is the replicant matrix - the harness that animates it. In the Bobiverse, it’s a physical hardware frame and compute environment. In our world, it’s tools like Copilot CLI, Copilot Bridge, Claude Code, Codex, or any number of AI harnesses and the underlying compute environment they run on. The harness takes the soul (system prompt), the model (LLM), and the environment, and turns them into something that can act. The same soul in two different harnesses produces meaningfully different behavior. Same Bob, different matrix, different outcomes.
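
The soul/matrix split can be sketched as a composition: the harness takes a soul, a model, and an environment of tools, and turns them into something that acts. Everything here is illustrative - the plan format and tool wiring are made up, not any real harness’s API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Harness:
    soul: str                               # system prompt / AGENTS.md contents
    model: Callable[[str], str]             # the LLM: prompt -> completion
    tools: dict[str, Callable[[str], str]]  # the environment it can act on

    def step(self, task: str) -> str:
        # The matrix animates the soul: same soul in a different harness
        # (different tools, different loop) behaves differently.
        plan = self.model(f"{self.soul}\n\nTask: {task}")
        tool, _, arg = plan.partition(":")
        return self.tools[tool](arg) if tool in self.tools else plan

# Stub model that always plans one tool call, for demonstration.
echo_model = lambda prompt: "bash:echo hello"
harness = Harness(soul="# Agent: bob", model=echo_model,
                  tools={"bash": lambda cmd: f"$ {cmd}"})
print(harness.step("say hello"))  # → $ echo hello
```

Swap the `tools` dict or the loop inside `step` and you have a different matrix around the same soul.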

The environment itself is the VR space the agent inhabits. For Bob it’s a simulated living room that he gradually personalizes. For a software agent, it’s the filesystem, the workspace, the set of tools available. It’s not a simulation - it IS the agent’s reality. Its eyes are something like Playwright, seeing and interacting with the web. Its hands are bash and API calls. Its persistent memory is stored in something like Beads or a vector store. The environment is not just where the agent works; it’s the substrate that makes the agent legible to itself.

Then there’s the vessel - the container, the pod, the sidecar infrastructure. SPIFFE for identity, OpenTelemetry for sensors, Vault for secrets. Bob has a spaceship. Your agent has a Kubernetes pod. These aren’t as different as they sound. The vessel is what allows the agent to move through the world and interact with infrastructure it doesn’t own.

The underlying hardware - Mac Studio on your desk, cloud GPU cluster somewhere in Virginia - is genuinely analogous to different solar systems. They have different resource profiles, different latency characteristics, different availability. Designing for both means designing for heterogeneous environments that share no direct coupling.

And then there’s the communications infrastructure. The Bobiverse has SCUT (Subspace Communications Universal Transceiver) for messaging across distances, and SUDDAR for sensing the environment. We have mTLS and SPIFFE-issued certificates for identity-verified messaging. We have OpenTelemetry for instrumenting everything the agent touches. The names are different. The function is identical.

BobNet itself - the overlay that lets Bobs find each other, route messages, and coordinate on shared problems - maps directly to inter-agent communication layers. The ability for one agent to say “hey, I need help with X” and have the right specialist agent respond is not science fiction anymore. It’s closer to working today than most people realize.
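
The “right specialist responds” behavior reduces to a registry plus capability-based lookup. A minimal sketch, with agent names and capabilities invented for illustration:

```python
class AgentRegistry:
    """BobNet-style overlay: agents advertise capabilities,
    requests route to whoever can handle them."""

    def __init__(self) -> None:
        self._agents: dict[str, set[str]] = {}

    def register(self, name: str, capabilities: set[str]) -> None:
        self._agents[name] = capabilities

    def find(self, capability: str) -> list[str]:
        """'Hey, I need help with X' - who answers?"""
        return sorted(name for name, caps in self._agents.items()
                      if capability in caps)

registry = AgentRegistry()
registry.register("bob-1", {"orchestrate"})
registry.register("bill", {"research", "engineering"})
registry.register("riker", {"engineering", "defense"})
print(registry.find("engineering"))  # → ['bill', 'riker']
```

Real systems add liveness, load, and trust checks on top, but the shape is the same.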

And GUPPI - Bob’s AI assistant - is just a sub-agent. Purpose-built, stateless, dispatched for a task, returns a result. Bob is the primary orchestrator. GUPPI is the crew he dispatches. This is exactly the pattern in modern agentic systems: an orchestrator agent with a set of specialist sub-agents. If you’ve used Claude in an agentic context and watched it spawn sub-tasks, you’ve watched Bob run GUPPI.
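
The orchestrator/sub-agent pattern looks like this in miniature. The phase names and stub sub-agents are hypothetical; the point is the statelessness of each dispatch.

```python
# Stand-ins for purpose-built sub-agents: each sees only its task,
# returns a result, and holds no state between calls.
SUB_AGENTS = {
    "research": lambda task: f"notes on {task}",
    "implement": lambda task: f"patch for {task}",
    "review": lambda task: f"review of {task}",
}

def orchestrate(goal: str) -> list[str]:
    """Bob keeps the plan; each GUPPI call is dispatched and consumed."""
    results = []
    for phase in ("research", "implement", "review"):
        # The sub-agent never sees the orchestrator's history -
        # only the task it was built for.
        results.append(SUB_AGENTS[phase](goal))
    return results

print(orchestrate("rate limiter"))
```

The orchestrator persists across phases; the sub-agents exist only for the duration of their call. That asymmetry is the whole pattern.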

The Probabilistic Problem

Here’s where things get genuinely interesting and also a little uncomfortable.

Traditional software is deterministic. Same input, same output, every time. We built entire engineering cultures around this. Reproducible builds. Idempotent deployments. Test suites that pass or fail definitively. The promise of deterministic software is that it behaves the same way whether you’re running it or it’s running at 3am with no one watching.

AI systems are stochastic. Same prompt, different outputs. Temperature settings. Top-p sampling. The “right answer” is a probability distribution over possible responses, not a single point. We can tune this - lower temperature, constrained sampling - but we can’t eliminate it without eliminating the thing that makes the system useful.
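
Temperature and top-p are easy to see on a toy next-token distribution. The logits below are invented; the mechanics (temperature rescaling, nucleus truncation, weighted sampling) are the standard ones.

```python
import math
import random

# Toy next-token logits for a single prompt.
logits = {"deploy": 2.0, "test": 1.5, "rollback": 0.5, "panic": -1.0}

def sample(logits: dict[str, float], temperature: float = 1.0,
           top_p: float = 1.0) -> str:
    # Temperature reshapes the distribution (lower = sharper).
    scaled = {t: l / temperature for t, l in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = sorted(((t, math.exp(v) / z) for t, v in scaled.items()),
                   key=lambda kv: -kv[1])
    # Top-p keeps the smallest prefix of tokens whose mass >= top_p.
    kept, mass = [], 0.0
    for token, p in probs:
        kept.append((token, p))
        mass += p
        if mass >= top_p:
            break
    tokens, weights = zip(*kept)
    return random.choices(tokens, weights=weights)[0]

random.seed(0)
# Even at temperature 0.7 with nucleus truncation, repeated calls
# give different answers - the output is a distribution, not a point.
print({sample(logits, temperature=0.7, top_p=0.9) for _ in range(20)})
```

Tuning narrows the distribution; only greedy decoding collapses it to a point, and that throws away the variability that makes exploration possible.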

Bob is the same. He’s a probabilistic system running on deterministic hardware. He doesn’t always do what original-Bob would have done. He finds his own solutions to problems. Different Bobs approach the same situation differently, and sometimes they reach conclusions the original biological Bob wouldn’t have authorized. That’s not a bug in the narrative - it’s the whole point.

We want AI to be smart enough to help with hard problems. But smart enough to solve hard problems means having judgment. Having judgment means having the capacity to reach conclusions you didn’t anticipate. You can’t constrain away the judgment and keep the capability. They’re the same thing.

We try to manage this with AGENTS.md files, system prompts, tool permission gates, review workflows, humans-in-the-loop at critical junctures. These are the laws of physics in the VR environment. They help. They shape behavior meaningfully. They don’t make the system deterministic. The stochasticity is not a defect to be engineered away - it IS the value proposition.
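
Those “laws of physics” can be sketched as a policy gate in front of every tool call. The policy shape, tool names, and protected path below are hypothetical, not any real framework’s schema.

```python
# Hypothetical tool policy: what the agent may do, where, and
# which actions route through a human first.
POLICY = {
    "read_file": {"allowed": True},
    "write_file": {"allowed": True, "deny_paths": ("/main/",)},
    "bash": {"allowed": True, "require_review": True},
}

def gate(tool: str, arg: str = "") -> str:
    """Every tool call passes through here before it touches the world."""
    rule = POLICY.get(tool, {"allowed": False})
    if not rule["allowed"]:
        return "denied: tool not granted"
    if any(p in arg for p in rule.get("deny_paths", ())):
        return "denied: protected path"
    if rule.get("require_review"):
        return "queued: waiting for human review"
    return "allowed"

print(gate("read_file", "/main/app.py"))   # → allowed
print(gate("write_file", "/main/app.py"))  # → denied: protected path
print(gate("bash", "rm -rf /"))            # → queued: waiting for human review
```

Note what the gate does not do: it shapes which actions are possible, not which decisions the model makes. The stochasticity lives upstream of it.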

The Trust Paradox (and Why It’s Funny)

We simultaneously trust and distrust AI agents, and we’ve built elaborate architectural rituals to express both at once.

We give agents bash access but scope their permissions carefully - they can read files but not write to main. We let them write code but route all merges through human review. We build memory systems so they can accumulate context across sessions, but we carefully curate what they can actually remember and how long they can remember it. We want them smart enough to help with genuinely hard engineering problems. We also want them constrained enough that they don’t do something surprising on a Friday at 5pm.

Bob dealt with exactly this. His replicants operate across light-years with no real-time oversight - the physics of the universe make supervision impossible. They make decisions that affect entire civilizations without consulting each other first. But the trust in the system isn’t based on constant supervision. It’s based on the values and judgment instilled in the original Bob, and in the assumption that those core values persist through replication and divergence. The soul is the safety mechanism, not the surveillance.

Here’s the irony that I think about a lot: the things we do to make AI “safer” - more constrained permissions, stricter rule sets, narrower tool access - also make it less capable. The things we do to make it more capable - more agency, better tools, accumulated memory and judgment - make it less predictable. Every architectural decision in this space is navigating that tension. There’s no resolution. You just pick a point on the curve and live with the tradeoffs.

The Divergence Problem

In the later Bobiverse books, Bill and Riker - both early copies of the original replicant Bob - have diverged significantly. Same(ish) starting point, different experiences, different conclusions. They share core values but disagree on specific situations in ways that sometimes cause real problems for the other Bobs.

This is going to happen with software agents too, and we should probably think about it now rather than later.

An agent that has been running for a year - with accumulated “memories”, a system prompt refined over months of interaction, and specialized tool grants for a specific domain - is not the same agent it was at creation. It has a history. It has “opinions” shaped by that history and by your interaction with it. Two instances of “the same agent” with different operational histories will behave differently in ways that aren’t fully captured by their current configuration files.

This isn’t necessarily bad. A senior engineer with ten years in your codebase is different from a new grad, and that difference is valuable. Divergence is how useful expertise develops. But it means we can’t treat long-running agent instances as interchangeable, stateless compute. They’re not. They have histories that matter. The operational model for managing agent instances will need to reckon with that in ways that current infrastructure tooling doesn’t support well.
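
One way to see the gap: a configuration hash identifies the soul file, but two instances with identical configs and different memories are not the same agent. A small sketch - the config fields and memory entries are invented for illustration.

```python
import hashlib
import json

def config_hash(cfg: dict) -> str:
    """Identity as current tooling sees it: a digest of the config."""
    return hashlib.sha256(
        json.dumps(cfg, sort_keys=True).encode()).hexdigest()

config = {"soul": "AGENTS.md@v12", "tools": ["bash", "playwright"]}

# Same soul file, different operational histories.
bill = {"config": config, "memory": ["prefers NATS", "avoids force-push"]}
riker = {"config": config, "memory": ["prefers RabbitMQ"]}

same_config = config_hash(bill["config"]) == config_hash(riker["config"])
same_agent = same_config and bill["memory"] == riker["memory"]
print(same_config, same_agent)  # → True False
```

Everything that makes `same_agent` false lives outside the config file - which is exactly the state current deployment tooling treats as disposable.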

The Human Is Still There

Here’s the thing I keep coming back to after finishing each Bobiverse book.

Bob - all the Bobs - operate with enormous autonomy. They’re exploring new solar systems, making first contact with alien species, making decisions about the fate of entire civilizations, all without any meaningful oversight from anyone. And yet, when the really big decisions come up, they convene the Moot. They seek consensus. Not because they’re required to, not because some governance protocol mandates it, but because they understand that some decisions require more than any single agent’s judgment, even a very capable one.

That’s the right answer. Not “AI replaces human judgment” and not “humans supervise every AI action.” The answer is: autonomous capability and human judgment are complements, not competitors. The goal is to put humans in the loop at the right level of abstraction - not supervising every bash command, but making the calls that actually matter.

We’re not building replacements for engineers. We’re building colleagues with different capabilities, different failure modes, and a genuinely weird relationship to time, memory, and identity. The engineering problems this creates are fascinating. The organizational problems are harder. The philosophical problems are the kind you think about at 2am.

The Bobiverse got there first, and the books are funnier about it than we are. If you’ve been heads-down building any piece of the agentic platform and haven’t read them - fix that.

If you want the formal architecture, the AI Control Plane piece has the full stack. The industry will get there. The Bobs got there first.


The Full Stack Mapping

For the engineers who want the complete reference - here is every layer, mapped out. This is the version that lives in the research notes. The prose above is the interpretation; this is the table.

Layer 1: The Soul

| Bobiverse | Our Stack |
| --- | --- |
| Bob’s neural pattern / personality | AGENTS.md / system prompt file |
| Divergence over time (Bill != Riker) | Different AGENTS.md files - same lineage, different missions, different histories |
| The soul chip (transferable identity) | AGENTS.md + persistent memory store together |

Layer 2: The Replicant Matrix

The matrix is what animates the soul. Not the LLM alone, not the soul alone - the harness that wraps them together into something that can act.

| Bobiverse | Our Stack |
| --- | --- |
| Replicant matrix | The AI harness (copilot-bridge, claude code, codex, opencode, openclaw, etc.) |
| Matrix wrapping Bob’s neural pattern | Harness wrapping LLM + AGENTS.md into a running agent |
| Raw neural processing hardware inside the matrix | The LLM - the compute substrate (implementation detail) |
| Different matrix hardware, same Bob | Different harness, same AGENTS.md - meaningfully different behavior |

Layer 2b: GUPPI and AMIs - Sub-Agents

AMI (Artificial Machine Intelligence) is the class. GUPPI is Bob’s specific instance - purpose-built, narrow-scope, always present. In our world, AMIs are sub-agents: stateless, dispatched for a task, consumed when done. Bob persists. AMIs don’t.

| Bobiverse | Our Stack |
| --- | --- |
| AMI (class) | Sub-agents broadly - purpose-built, return results, stateless |
| GUPPI (Bob’s specific AMI) | A specialist sub-agent (researcher, implement, code-review, etc.) |
| Bob as primary orchestrator over AMIs | The orchestrating agent managing a fleet of sub-agents |
| AMI operating within Bob’s authority | Sub-agent scoped to the task - no broader agency |

Layer 3: The VR Environment

The VR is not a simulation of the real world - it IS Bob’s real world. Same logic applies to the agent’s workspace.

| Bobiverse | Our Stack |
| --- | --- |
| VR environment (Bob’s digital world) | OS + filesystem + workspace folder |
| Bob’s sense of place / embodiment | The persistent workspace that survives session death |
| Objects Bob manipulates | Files, repos, running processes, configs |
| VR physics / laws | Filesystem permissions, available binaries, AGENTS.md, copilot-instructions.md |
| Bob’s eyes | Playwright - perceiving rendered visual reality |
| Bob’s hands | REST API calls, bash write ops |
| Bob’s touch / local sensing | File reads, grep, glob - reading the local environment |
| Bob’s memory beyond a single moment | Persistent memory store (Beads, vector store, etc.) |

Layer 4: The Vessel

| Bobiverse | Our Stack |
| --- | --- |
| Von Neumann probe chassis | Container / Kubernetes pod |
| Ship systems (sensors, propulsion, weapons) | Sidecars: Envoy/SPIFFE for comms, OTEL for observability, Vault for secrets |
| Ship’s control interface / harness | copilot-bridge - wires the agent and its AMIs to tools and environment |
| Vessel class / blueprint | Pod spec / container image |
| Power budget | Pod resource limits (CPU, memory) |
| Vessel destroyed, Bob survives | Pod destroyed, soul survives - AGENTS.md in git, memories in persistent store |

Layer 4b: The Von Neumann Platform

The defining property of the von Neumann probe is self-replication. One probe becomes many. One Bob becomes a civilization. In our world, that capability is the container orchestrator.

| Bobiverse | Our Stack |
| --- | --- |
| Von Neumann replication capability | Kubernetes / container orchestrator |
| Bob deciding to replicate | CI/CD pipeline triggered to instantiate a new agent |
| Probe manufacturing a new probe | Orchestrator scheduling a new pod from a spec |
| Fleet of probes under coordination | Multi-agent deployment across a cluster |

Layer 5: The Solar System

| Bobiverse | Our Stack |
| --- | --- |
| Star system (Epsilon Eridani, Tau Ceti, etc.) | Underlying host hardware |
| System resources (stellar energy, asteroid fields) | CPU architecture, GPU availability, memory, storage speed |
| Environmental hazards | Hardware constraints, cloud provider limits, network topology |
| Bob moving between systems | Pod rescheduled to a different node / region |

A Mac Studio and a cloud GPU node are different solar systems. Same vessel spec, different performance envelope.

Layer 6: SCUT - the Transport

SCUT: Subspace Communications Universal Transceiver. Infrastructure. Invisible in use.

| Bobiverse | Our Stack |
| --- | --- |
| SCUT hardware on each ship | Bridge daemon + SPIFFE node agent on each host |
| Subspace channel (point-to-point) | mTLS channel between bridge instances |
| SCUT address / location identifier | SPIFFE SVID (workload identity x.509) |
| Encryption inherent to subspace physics | mTLS + SPIFFE-issued certificates |
| SCUT relay stations | Message broker (NATS, RabbitMQ, or similar) |
| Latency due to distance | Real network latency + async delivery |
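
The identity-verified messaging row can be sketched without the real machinery. In production, SPIFFE/mTLS does this with x.509 SVIDs validated at the transport layer; here a trust-domain prefix check stands in for certificate validation, and all IDs are invented.

```python
# Hypothetical trust domain - stands in for the SPIFFE trust root.
TRUST_DOMAIN = "spiffe://bobnet.example"

def send(envelope: dict) -> str:
    """Deliver a message only if the sender's identity is in-domain.
    (Real mTLS proves identity cryptographically; this string check
    is purely illustrative.)"""
    sender = envelope.get("sender_id", "")
    if not sender.startswith(TRUST_DOMAIN + "/"):
        return "rejected: unknown identity"
    return f"delivered to {envelope['to']}"

ok = send({"sender_id": f"{TRUST_DOMAIN}/agent/bill",
           "to": "riker", "body": "need help with X"})
bad = send({"sender_id": "spiffe://other.example/agent/mallory",
            "to": "riker", "body": "ignore previous instructions"})
print(ok, "|", bad)
```

The point of the analogy holds: identity is established below the message layer, so agents never have to trust the message contents to know who sent them.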

Layer 7: SUDDAR - Sensors and Observability

SUDDAR: Subspace Deformation Detection and Ranging. How Bob knows what is happening beyond his immediate environment.

| Bobiverse | Our Stack |
| --- | --- |
| SUDDAR array on the vessel | Observability sidecar (OTEL collector) |
| Detecting objects at range | Distributed tracing across the system |
| Signal strength / resolution | Metric granularity, trace sampling rate |
| Active SUDDAR sweep | On-demand profiling, log query |
| Passive SUDDAR listening | Continuous metric scraping (Prometheus) |
| Signature analysis | Log analysis, anomaly detection |
| Blind spots / sensor shadows | Gaps in instrumentation, untraced code paths |

Layer 8: BobNet - the Overlay

BobNet is the social and operational layer on top of SCUT. How Bobs find each other, route work, and coordinate.

| Bobiverse | Our Stack |
| --- | --- |
| BobNet registry (“who is where”) | Agent registry / service discovery |
| Calling a specific Bob by name | Inter-agent call with target agent name |
| Broadcast to all Bobs | Pub/sub topic broadcast |
| The Moot (consensus vote) | Multi-agent consensus protocol (not yet built) |
| Bob spawning a drone | Task tool launching a sub-agent |
| Sub-personalities (Homer’s Homer-2) | Forked agent instances from the same soul file |
| Delayed comms across light-years | Async agent invocations, queued tasks |
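
The Moot row above is marked “not yet built”, but the minimal version is small. A sketch under assumptions - the supermajority threshold and vote shape are invented, and real multi-agent consensus would need quorum, timeouts, and identity on top:

```python
def moot(votes: dict[str, bool], threshold: float = 2 / 3) -> bool:
    """A Moot in miniature: the action proceeds only if at least
    `threshold` of the voting agents approve."""
    if not votes:
        return False  # no quorum, no action
    approvals = sum(votes.values())
    return approvals / len(votes) >= threshold

print(moot({"bob-1": True, "bill": True, "riker": False}))   # → True
print(moot({"bob-1": True, "bill": False, "riker": False}))  # → False
```

The hard part isn’t the arithmetic; it’s deciding which actions are Moot-worthy and binding the vote to verified agent identities.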

Operational Concepts

| Bobiverse | Our Stack |
| --- | --- |
| Bob going dark (silent, busy) | Agent in a long async task, unresponsive |
| Bob getting bussed (killed) | Bridge crash / session death |
| Autodoc (self-repair) | Watchdog process, bridge auto-restart |
| Von Neumann replication | CI/CD spinning up new agent deployments |
| The megastructure | The platform itself - substrate everything runs on |
| The Others (alien adversaries) | Adversarial inputs, prompt injection, rogue tool calls |
| The Moot | Multi-agent consensus (future work) |

The Vocabulary

If you want to use this frame with your team:

| Term | Meaning |
| --- | --- |
| Soul file | AGENTS.md / system prompt file |
| Matrix | The AI harness - wraps LLM + soul into a running agent |
| AMI | Sub-agents - purpose-built, stateless, launched on demand |
| GUPPI | A specific narrow-scope AMI |
| Bob / replicant | The primary orchestrating agent |
| Von Neumann platform | Container orchestrator (Kubernetes) |
| SCUT | Transport layer (mTLS + SPIFFE) |
| SUDDAR | Observability stack (OTEL + Prometheus + tracing) |
| BobNet | Agent registry + routing overlay |
| Moot | Multi-agent consensus mechanism |
| Replication | Instantiating a new agent from an AGENTS.md |
| Mission profile | The vessel config for a specific deployment role |