Last time, I wrote about running three AIs as a one-person company. Max, the cloud soldier posting and replying 24/7. Tony, the local workshop formatting threads and managing engagement queues. Claude Code, the armory turning a 2,000-note Obsidian vault into deployable content.
The response was interesting. People understood the architecture — three agents, three lanes, no overlap. But the question that kept coming back was always some version of the same thing: how do they know about each other?
The answer is a folder.
The Problem Nobody Warns You About
When you run one AI, context is simple. You talk, it remembers (for a while), you move on. When you run three AIs on different platforms — one in the cloud, one on your laptop, one in a terminal — the default state is amnesia. Each one wakes up fresh. Each one has no idea what the other two did yesterday.
This sounds like a minor inconvenience until it isn't. Max publishes a post about pricing philosophy at 10pm. Tony, unaware, drafts a thread on the same topic an hour later. Claude Code extracts a quote that Max already used three days ago. Without shared context, three agents become three amnesiac strangers working in the same building, bumping into walls.
The standard solution is a database. Or an API. Or some memory-as-a-service layer that promises persistence across sessions. I tried variations of all three. They all share the same fatal flaw: they're optimized for machines reading machines, not for a human reading the truth.
I needed something I could open, read, edit, and trust. Something that existed as plain files I could version-control. Something where "what does Max know?" was answerable by opening a markdown file, not querying a dashboard.
So I used the tool I was already inside of. The Obsidian vault became the brain.
One Vault, Three Zones
The design is embarrassingly simple. Inside the vault, there's a directory — the lobster brigade headquarters, as I call it. Three subdirectories. One set of rules.
The first zone is shared. Files here are readable by all three agents. This is where the soul lives — a personality document that defines how replies should sound, a long-term memory file that records what the system knows about me, my products, and my operational state. There's a rescue handbook for when things break. There's a research file on memory architecture itself, notes from studying how others solved this problem.
The soul snippets file is the most important shared asset. Twenty-two entries, organized by worldview — pricing philosophy, competitive strategy, product truth-testing, timing, the relationship between action and judgment. Each entry has a short form, an expanded form, and a question form. When Max generates a reply under someone's tweet, it draws from these snippets. When Tony formats a thread, the voice stays consistent because the source is the same file. When Claude Code produces new ammunition, it checks against existing snippets to avoid duplication.
The personality isn't in any model's weights. It's in a markdown file that all three can read.
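For a concrete shape, a single entry in that file might look something like this. The headings, labels, and the expanded wording are my reconstruction for illustration, not the literal format:

```markdown
## Pricing philosophy

- **Short:** Price is a filter, not a courtesy.
- **Expanded:** A low price invites everyone, including the customers who
  cost the most and value the work the least. The price does the filtering
  before I ever have a conversation.
- **Question:** Who does your price let in, and who does it turn away?
```

The three forms matter because each agent consumes a different one: short forms fit replies, expanded forms seed threads, question forms open conversations.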
The second zone is per-agent. Max has his own directory. Tony has his own. Each agent has full authority over their space and zero write access to the other's. Max stores his operational SOP, his daily state, his content pipeline drafts, and his performance reports. Tony stores his memory logs, his article drafts, his reply watchlist. The boundary is enforced by convention, not code — but the convention is absolute.
This separation matters more than it seems. When Max writes a daily report — a detailed breakdown of how many replies went out, which keywords dominated, which posts performed, what failed — that report lands in his directory. Tony can read it, but Tony doesn't touch it. When Tony records what happened during a reply sweep — which accounts were targeted, what templates worked, what the engagement pattern looked like — that log stays in Tony's space.
The result is an audit trail that doesn't lie. If something goes wrong, I don't debug a tangled state machine. I open a folder and read what happened, written by the agent who was there.
The third zone is staging. This sits outside the brigade headquarters, in the content operations directory. It's the loading dock — the place where Claude Code drops finished ammunition and I decide what gets shipped.
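The zone rules above are enforced by convention, not code, but the convention is simple enough to make executable. Here is a minimal sketch of that rule. The directory names are placeholders; the real vault paths will differ:

```python
# Hypothetical layout mirroring the three zones: shared, per-agent, staging.
ZONES = {
    "shared":  "brigade/shared",        # readable by everyone, human-edited
    "max":     "brigade/max",           # Max's private zone
    "tony":    "brigade/tony",          # Tony's private zone
    "staging": "content-ops/staging",   # Claude Code's loading dock
}

def may_write(agent: str, path: str) -> bool:
    """An agent may only write inside its own zone. No exceptions."""
    zone = ZONES.get(agent)
    return zone is not None and path.startswith(zone + "/")

def may_read(agent: str, path: str) -> bool:
    """Every agent can read anywhere in the brigade; only writes are fenced."""
    return path.startswith("brigade/") or may_write(agent, path)
```

The point of writing it down, even as a sketch, is that the rule has no special cases: if a proposed automation would need an exception here, it goes through the human instead.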
How Ammunition Flows
The flow is deceptively linear.
It starts in a deep Claude Code session. I point it at a section of my vault — maybe a product philosophy note written in Chinese, maybe a batch of MBA concept extractions, maybe a fragment from my personal decision framework. Claude Code reads, analyzes, and produces. A batch of golden quotes, compressed to under 280 characters each, sourced from a specific document, annotated with the original Chinese and a recommended use case. A set of reply ammunition organized by worldview category. A thread draft. An article outline.
All of this lands in the staging area. Right now, there are thirteen files waiting there — daily content schedules for the next week, a batch of ten golden quotes drawn from my product rules and my classical philosophy notes, a set of reply ammunition with twenty-two entries mapped to specific engagement scenarios, thread drafts, article drafts, a publishing log, and a weekly content calendar.
Nothing in staging is live. Everything is a proposal.
I open the files. I read them the way an editor reads submissions — not asking "is this correct?" but "is this something I would say?" The IP Consistency Guard runs in my head, not in code. Three checks. Does this teach, or does it share? Does it sound like me on a Tuesday night, not like a motivational poster? Would I want this still online in six months?
Most content survives. Some gets edited. Some gets killed. The ones that pass move into the machine.
Approved quotes get injected into Max's 30-day content pipeline — a CSV file that maps every post to a date, a time slot, and a format. Max reads it every night at 10pm Beijing time and publishes the day's first post with a Grok-generated image. The second post hits at 11:10pm. The third at midnight. Three posts a day, thirty days, no human intervention after the initial approval.
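The nightly read is the simplest piece of the whole system. A sketch of it, assuming a CSV with date, slot, format, and text columns (the real column names may differ):

```python
import csv
import datetime as dt
import io

# Illustrative pipeline data; the real file maps thirty days of posts.
PIPELINE_CSV = """\
date,slot,format,text
2025-06-01,22:00,image,"Price is a filter, not a courtesy."
2025-06-01,23:10,plain,"Most pricing advice optimizes for comfort."
2025-06-02,22:00,image,"Ship the judgment, not the checklist."
"""

def posts_for(day: dt.date, source: str) -> list[dict]:
    """Return the day's scheduled posts, ordered by time slot."""
    rows = csv.DictReader(io.StringIO(source))
    todays = [r for r in rows if r["date"] == day.isoformat()]
    return sorted(todays, key=lambda r: r["slot"])
```

A flat CSV read nightly is deliberately boring: there is no queue service to babysit, and "what will Max post tomorrow?" is answered by opening the file.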
Approved reply ammunition merges into the shared soul snippets file. Next time Max wakes up for an hourly reply sweep — searching X for conversations about AI agents, indie hacking, Claude Code, solo development — he draws from an expanded arsenal. The voice stays consistent because the source document is the same one Tony reads when formatting threads.
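The merge into the shared snippets file can be sketched too. The duplicate check here is a naive normalized-text comparison, standing in for what is really human judgment during review:

```python
def merge_snippets(existing: list[str], approved: list[str]) -> list[str]:
    """Append approved snippets to the shared file, skipping duplicates.

    Matching is case-insensitive exact text; the real dedup happens in
    review, where a rephrased duplicate still gets caught by a human.
    """
    seen = {s.strip().lower() for s in existing}
    merged = list(existing)
    for snippet in approved:
        key = snippet.strip().lower()
        if key not in seen:          # skip anything the arsenal already has
            merged.append(snippet)
            seen.add(key)
    return merged
```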
Thread and article drafts go to Tony's article directory. Tony publishes them on command through Discord — I type a trigger, Tony formats and posts. Or I take them and publish directly as X Articles under my own hand.
One deep session. Weeks of output. The compounding happens because the staging area connects a one-time extraction to two always-on distribution channels.
Why the Human Is the Only Gateway
There's a design decision buried in this architecture that looks like laziness but is actually the load-bearing wall.
No agent can write to another agent's input. Claude Code cannot inject quotes directly into Max's pipeline. Max cannot pull from Tony's article drafts. Tony cannot modify the shared soul snippets without me seeing the diff.
I am the only write path between agents.
This is not because the automation would be hard. It would be trivial. Claude Code could append to a CSV and push to GitHub. Max would pick it up on the next git pull. The pipeline would be fully autonomous.
I don't do it because the failure mode is invisible.
When an AI produces content that's 90% aligned with your voice and 10% off, the 10% doesn't announce itself. It seeps. A slightly generic turn of phrase. A take that's technically yours but emotionally someone else's. An observation that sounds smart but isn't something you've actually earned through experience. Over time, the drift compounds. Your public voice slowly becomes a statistical average of your past voice — which is exactly what a language model produces when it's doing its job well.
The human gateway is a drift detector. Every piece of content that crosses from one agent to another passes through a pair of eyes that can feel when something is off, even if it can't always articulate why. That feeling is the product. The agents produce candidates. I produce decisions.
What Breaks
The memory layer is not elegant. It works, which is different.
Here is what has actually gone wrong.
Stale reads. Max runs on a cloud server. Tony runs locally. Both sync through GitHub. The sync is not instant. Max pushes a state update, but Tony's git pull hasn't run yet, so Tony reads yesterday's state and makes decisions based on obsolete data. This has caused duplicate topic coverage exactly twice, both times caught in review, neither time caught by the system.
Context explosion. Max's long-term memory file is approaching a hundred lines of operational history — product details, X algorithm insights, technical architecture, cron schedules, lessons learned. When Max's session compresses and he reloads context, the memory file competes with the current task for attention. The more he remembers, the less focused he becomes. Memory has a cost, and the cost is bandwidth.
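One way to cap that bandwidth cost is to load the pinned header of the memory file plus only the most recent sections, rather than the whole history. The `## ` section convention below is an assumption, not Max's actual file format:

```python
def load_memory(text: str, keep_recent: int = 3) -> str:
    """Return the memory file's header plus only its newest sections.

    Assumes the file is a pinned header followed by '## '-delimited
    entries, newest last. Older sections are simply left on disk.
    """
    sections = text.split("\n## ")
    header, entries = sections[0], sections[1:]
    recent = entries[-keep_recent:]        # newest sections win the budget
    return "\n## ".join([header] + recent)
```

The trade is explicit: the agent forgets older operational detail by default, and anything worth keeping has to be promoted into the pinned header during weekly consolidation.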
Memory format drift. Max and Tony adopted a three-layer memory architecture — daily logs, weekly consolidation, long-term memory. The theory is clean: daily events get recorded, weekly reviews distill patterns, long-term memory gets pruned. In practice, the daily logs vary wildly in quality. Some days are dense operational records. Some days are one line saying nothing happened. The weekly consolidation step inherits whatever noise the dailies contain.
The ghost of expired data. Soul snippets don't have expiration dates. A reply template that was sharp three weeks ago might reference a trend that's already dead. There's no mechanism to flag staleness other than me noticing — and I don't always notice.
Concurrent write conflicts. Max pushes to GitHub from the cloud. Tony pushes from the local machine. Claude Code pushes from the terminal. If two push within minutes of each other, someone's changes get rebased. Usually this is fine. Once, a state file got silently mangled because two agents wrote to adjacent lines in the same JSON. Nobody fabricated data. The merge just produced invalid syntax. It took forty minutes to diagnose because the error surfaced three git pulls later.
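The cheap guard against that silent mangle is to validate shared JSON state immediately after every git pull, so broken syntax surfaces at the merge instead of three pulls later. A minimal sketch:

```python
import json

def validate_state(raw: str) -> tuple[bool, str]:
    """Parse the shared state file and report exactly where it breaks.

    Run right after `git pull`; a failed check should block the agent
    from acting on that state rather than silently proceeding.
    """
    try:
        json.loads(raw)
        return True, "ok"
    except json.JSONDecodeError as e:
        return False, f"state file corrupted at line {e.lineno}: {e.msg}"
```

Forty minutes of diagnosis compresses to one log line, because the error message names the file and the line at the moment the corruption lands.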
None of these are fatal. All of them are annoying. The shared brain works not because it's well-engineered, but because the failure modes are all recoverable and the human checkpoint catches what the system misses.
What the Memory Layer Actually Produces
There's a question underneath all of this that I haven't answered yet: is it worth it?
I could run three independent AIs with no shared context. Each one gets its instructions fresh every session. Each one produces output in isolation. I review, I publish, I move on.
The difference is coherence.
When Max replies to someone's tweet about AI pricing, and the phrasing echoes a golden quote that Claude Code extracted from my product philosophy three weeks ago, and that quote was originally written in Chinese about something I learned from Sima Qian's economic chapters, written two thousand years ago — that is not three AIs working in parallel. That is a knowledge supply chain with a shared vocabulary.
The vault is not a database. It's a language. The shared files don't just store information — they establish the terms that all three agents think in. When the soul snippets say "price is a filter, not a courtesy," Max uses that framing in replies, Tony uses it in threads, and Claude Code uses it when deciding which new quotes to extract. The system converges on a voice not because I micromanage every output, but because the inputs are unified.
That's what a shared brain actually means. Not a technical architecture. A consistent way of seeing the world, distributed across agents who never directly talk to each other, mediated by plain text files that a human can read and edit with a text editor.
Thirteen files in a staging area. Twenty-two soul snippets in a shared document. Two agents with separate memory directories that they maintain autonomously. One human gateway that approves every crossing.
It's held together by markdown files, git commits, and the stubbornness of a person who'd rather read a diff than trust a dashboard.
I'm @UncleJAI. I run three AIs from an Obsidian vault and document the parts that don't fit in a product demo. If you're building memory layers for your own agents, the lesson is simpler than it sounds: start with a folder.

