
My Obsidian Is Not a Note-Taking App — It's an Operating System

Feb 26, 2026

Apps are temporary. Files are forever.

That's not my line. It belongs to Steph Ango, the CEO of Obsidian. He calls it "File over App." The idea is simple: on a long enough timeline, the files you create will outlast every tool you used to make them.

Think about what that means in practice. Evernote was once the undisputed king of note-taking — valued at a billion dollars in 2012, backed by serious money, used by millions. Today it runs like a system on life support. Roam Research had a moment, a genuine cult following, then the community went quiet. Notion is a genuinely good product, but your data lives on their servers. The day they reprice their plans, cut a feature you depend on, or simply go under, your notes become hostages.

A plain .md file is different. Pure text, Markdown format. It could be read on a computer from the 1960s. It will be readable on whatever machine exists in 2060. It doesn't need any particular app to open. It can't be held ransom by a product manager's "UX optimization." It just sits there on your own hard drive, quietly yours.

The first time I read Ango's line, I stopped for a few seconds.

Not because it was profound. Because it captured, in one sentence, everything I had been building for the past two years without quite knowing how to explain it.

506 git commits. Every one a heartbeat of the system — with a diff, a timestamp, a complete record of what changed and why.

85-plus AI Skills. Each one a dedicated automation pipeline, from transcribing meeting recordings to distributing content across platforms, from extracting knowledge out of textbooks to generating production-ready code.

3 lifecycle hooks. Session starts, the system pulls the latest changes automatically. Session ends, it writes a log. In between, it handles rebasing across three concurrent writers on the same main branch — and never loses a byte.

435 auto-generated session logs. Every one a black box of human-machine collaboration: who did what, when, and what came out the other side.

7 platforms, one publish. One article written, and the system splits it into WeChat long-form, X threads, Xiaohongshu cards, LinkedIn, Zhihu, WeChat Moments copy, and short-video scripts — not reformatted, but genuinely adapted. Different tone, different length, different call to action for each platform.

One person. Under 60 days.

I'm not sharing those numbers to impress you. That's not the point. The point is that every byte behind those numbers lives in .md files on my own hard drive. Not on anyone's server. Not dependent on any application's continued existence.

These numbers are exhaust fumes. They're what you see when a system is running. Walk past a factory floor, check its monthly output report, and you don't think the factory is bragging — you just see evidence that it's operating.

My Obsidian works the same way. It is not a notebook. It is a factory.

Raw material flows in from everywhere — meeting recordings, lecture videos, web clips, half-formed ideas scribbled at midnight. It moves through a production line: transcription, classification, distillation, structuring. Finished goods come out the other end: articles, tweets, flashcards, code, products. Nothing gets manually carried from one stage to the next. I stand at the control panel and tell the system what to do next.

The files are mine. The system is mine. The AI is just passing through.

That is a belief. The moment you decide which software to use for your notes, you are making a decision about what you believe in. You're choosing whether you trust the cloud or trust your local machine. Whether you trust an application or trust a file format. Whether you trust a company's promise or trust the plain text on your own hard drive.

I chose the latter. Then I built an entire operating system on top of that choice.


An HR Guy's Operating System

Let me tell you the absurd part first.

I came up through HR. Eighteen years in human resources, organizational design, talent strategy. Not a programmer, not an engineer, not a computer science background. When I say I can't write code, I mean it literally — not as false modesty. If you ask me to open a terminal and type a command, I have to ask Claude what to type first. If you show me a block of Python, I can probably guess what it does, roughly. But ask me to write a function from scratch? It's not happening.

That is the person who built what is arguably one of the most sophisticated personal Obsidian + Claude Code workspaces in existence.

What runs inside it?

An automated content production line. From topic selection through drafting to multi-platform distribution, the whole chain runs without me touching it manually. I write one long-form article, and the system automatically breaks it into X threads, Xiaohongshu cards, LinkedIn posts, WeChat Moments copy, and short-video storyboards. Not copy-paste with light reformatting — actual platform adaptation. Different tone, different word count, different structure, different CTA for every channel.

A knowledge extraction pipeline for 53 MBA courses. The full curriculum from Peking University's Guanghua School of Management, each course broken into three tiers — conceptual layer, skill layer, application layer — with knowledge cards auto-generated and bidirectional links automatically established. That's the kind of thing that would take a full semester the traditional way. The system ran through all of it in three days.

A five-engine X/Twitter operation. Trend-surfing engine, original content engine, thread engine, engagement engine, data engine — five lines running in parallel. Every day it automatically surfaces trending topics, generates ready-to-post material, schedules publishing, and feeds the analytics back into the loop. Eight thousand followers, built without manually crafting every single post.

A smart-routing transcription system for meeting recordings. Drop in a recording, and it transcribes automatically, then routes automatically based on what it is. Client interview? It goes to a CRM template. Internal retrospective? It surfaces action items. Lecture recording? It feeds the knowledge extraction pipeline. No manual classification needed. The system knows where things belong.
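To make "smart routing" concrete, here is a minimal sketch of one way such a classifier could work: score the transcript against keyword lists and pick the destination with the strongest match. The route names, keywords, and destinations below are illustrative inventions, not the actual system's.

```python
# Hypothetical sketch of keyword-based transcript routing.
# Routes, keywords, and destination names are invented for illustration.
ROUTES = {
    "client_interview": (["client", "proposal", "budget"], "CRM template"),
    "retrospective":    (["retro", "action item", "blocker"], "action-item extractor"),
    "lecture":          (["chapter", "syllabus", "exam"], "knowledge pipeline"),
}

def route_transcript(text: str) -> str:
    """Pick the destination whose keywords appear most often in the transcript."""
    lowered = text.lower()
    scores = {
        name: sum(lowered.count(kw) for kw in keywords)
        for name, (keywords, _dest) in ROUTES.items()
    }
    best = max(scores, key=scores.get)
    # No keyword hits at all: fall back to a human-reviewed inbox.
    return ROUTES[best][1] if scores[best] > 0 else "inbox for manual triage"
```

The real system presumably uses the language model itself to classify, but the shape is the same: classification first, then a deterministic route, with an explicit fallback when nothing matches.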

How extreme does this get? One number.

The SFA AI Sales Coach project. Starting from a single client meeting recording, all the way to a production-ready, full-stack SaaS application. 52 feature modules, all delivered. 2,716 tests, all green. 105,000 lines of code.

Four days. One person.

A traditional development team would estimate that project at seven to nine person-months. That's seven engineers for a month, or one engineer for most of a year. I finished it in four days. Not a demo. Not a prototype. Production code, tests passing, ready to deploy.

One person's knowledge factory, producing at the density of a ten-person team.

This is not talent. I'll say it again: this is not talent. I cannot write code. I'm the person who has to ask the AI what to type in the terminal.

This is design.

And the starting point of that design is a decision most people spend less than five minutes on — which software to use for taking notes.

Most people don't hesitate at that step. They download Notion, or open their phone's Notes app, and they start writing. Fast, convenient, zero friction. But if you spend three extra days at that step — really thinking through who owns the files, where the data lives, whether it can be accessed programmatically, whether it can collaborate with AI agents — the difference in your productivity three years later is not marginal. It's a different order of magnitude.

I took those three days. I chose Obsidian. Then I spent two years turning it from a note-taking app into an operating system.

The rest of this is how.

Chapter 2: Why Obsidian

I'm not here to do a tool comparison.

This isn't a "which one is better" question. It's a "which one can become an operating system" question. Those are different questions, and they have different answers.

But before I get there, I want to start somewhere more fundamental.


Steph Ango — Obsidian's CEO — wrote something that stuck with me. He calls it "design is compromise."

No product can please everyone. When you try to do everything, you end up excelling at nothing. The mark of good design is a clear, unapologetic statement about what you're not going to do. Photoshop doesn't write code. Excel doesn't do graphic design. They made those sacrifices deliberately, and they're great because of them — not in spite of them.

Obsidian has no WYSIWYG editor. No real-time collaboration. No built-in database views. No drag-and-drop Kanban. On a feature checklist, it loses to Notion by a hundred items.

That's exactly what makes it right.

Making one thing easy always makes something else more expensive. Obsidian chose to make owning your own data easy. The cost is that you need to learn Markdown. You have to manage your own files. You have to live without the pretty toggle blocks and the polished table views.

That tradeoff is the most correct product decision I've ever seen.


Here's why it matters in practice.

Obsidian's foundation is local files. Every note is a .md file sitting in a folder on your hard drive. That sounds unremarkable. It changes everything.

Local files mean Git-friendly. Git-friendly means version control. Version control means I can have three separate endpoints writing into the same repository simultaneously — Claude Code running locally, a Discord Bot living in the cloud, and GitHub's web editor on my phone.

Three endpoints. One main branch. Pull, push, rebase. Conflicts get resolved. Unresolvable conflicts stop and ask me.

The chain of logic feels obvious once you see it. But trace it back and you realize: it only works because the starting point is a file on your disk. Not a row in someone else's database. Not a JSON object in a SaaS backend. A file you can open with any text editor that existed before Obsidian and will exist long after it.

From that starting point, every other possibility becomes available.

Notion can't do this. Notion's data lives on Notion's servers. You can write to it via API, but you're rate-limited, format-constrained, and fundamentally dependent on their architecture. Concurrent three-endpoint writes? That's not what Notion was designed for. And that's not a criticism — it's an architectural reality. Notion's design choices made it an excellent application. Those same choices prevent it from becoming an operating system.

Logseq sits on local files too, so it passes that first test. But Logseq is outline-first. Its entire worldview is "everything is a bullet point." I write long-form. Three-thousand-word X Articles. Fifteen-hundred-word essays. Five-thousand-word project documents. Using an outline tool for long-form writing is like using a screwdriver as a hammer — technically possible, structurally wrong.


There's something deeper going on here, and it's worth slowing down for.

Notion's data promise is written in their terms of service. Obsidian's data promise is written in the file format itself.

What's the difference? Terms can change. A new CEO can change them. A new funding round can change them. An acquisition can change them. You've seen those "we've updated our privacy policy" emails. Every one is a quiet redefinition of a promise that was made to you.

But a .md file doesn't change. It's plain text. It was readable on computers in the 1960s and it'll be readable on computers in the 2060s. It doesn't need any company's terms to guarantee your data sovereignty. The file format is the guarantee.

Steph Ango calls this a self-enforcing promise. If the file is under your control, stored in an open format, you can use it in any other application at any time. Not "export your data." The same file. Your data never left you in the first place.

"You can export your data" versus "your data was always yours."

The gap between those two sentences is the gap between a product and a platform. It's also the gap between a business model that serves you and a business model that has structural incentives to capture you.


Markdown's plain text format has one more advantage that was underrated in 2024 and became infrastructure-level by 2026.

It's LLM-native.

ChatGPT outputs Markdown. Claude outputs Markdown. Gemini outputs Markdown. When your knowledge base is Markdown files, AI can read it, write to it, and edit it without any translation layer. No format conversion. No API adaptation. No parsing logic to strip rich text before feeding it to a model.

Your knowledge base and your AI speak the same language.

When Claude Code reads my .md files directly — understands the folder structure, makes decisions based on the YAML frontmatter, navigates the wikilinks — Markdown stops being a format choice. It becomes the communication protocol for human-AI collaboration. Every note is already in a shape the model can reason about. The knowledge base isn't just AI-readable. It's AI-usable.

This isn't a coincidence. It's infrastructure.
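To make "AI-usable" concrete, here is a minimal sketch of how little machinery it takes to recover structured metadata from such a note. The note content is invented, and real frontmatter can carry nested YAML that this toy parser deliberately ignores.

```python
# Minimal sketch: recovering YAML frontmatter from a Markdown note
# with nothing but stdlib string operations. Handles only flat
# "key: value" pairs -- real frontmatter can be nested YAML.
def read_note(text: str):
    meta = {}
    body = text
    if text.startswith("---\n"):
        header, _, body = text[4:].partition("\n---\n")
        for line in header.splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip()
    return meta, body.lstrip("\n")

note = """---
title: File over App
tags: philosophy
---
Apps are temporary. Files are forever.
"""
meta, body = read_note(note)
# meta["title"] is "File over App"; body starts with "Apps are temporary."
```

No SDK, no export step, no parsing of rich-text blobs. That is the practical meaning of the knowledge base and the AI speaking the same language.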


I should say something about Obsidian the company, because it matters.

Obsidian is 100% user-supported. No ads. No VC. No data monetization.

Stop and sit with that sentence for a moment.

Big tech companies subsidize their software through advertising revenue and enterprise contracts. This creates an illusion that all software should be cheap or free. But the subsidy has a price. The price is data hoarding, customer lock-in, and a system where you believe you're the user but you're actually the product.

Steph Ango put it plainly: venture-backed software has a five-year horizon, not a decades-long one. The incentive structure isn't designed to take care of you for the next thirty years. It's designed to hit a growth metric for the next board meeting.

I paid $25 for an Obsidian Catalyst membership. Not because I needed the early access features. Because I wanted to vote with money. I wanted to support a product philosophy I trust. I wanted to be, in Steph Ango's words, well taken care of.

That phrase — "well taken care of" — is something he talks about often. When a creator has a clear worldview and pays attention to every detail, trust builds naturally. Not because the product is "professional" or has good NPS scores. Because there's a coherence to it. Every decision serves the whole. You can feel it in the product, even if you can't articulate exactly where.

In the years I've used Obsidian, I have never felt like a growth metric. Never received a push notification trying to upsell me. Never been nudged toward behavior that serves the product more than it serves me. That feeling is rare. I've learned to value it.


At this point I need to be clear about something: I am also a paying Notion customer. Present tense.

I'm not the type to tear one tool down to lift another up. Notion is an excellent product. I've used it to manage complete project spaces. When I did my MBA at Peking University's Guanghua School of Management, every piece of source material from all 53 courses — lecture slides, handwritten notes, transcripts — lived in Notion. When I later ran AI-assisted knowledge extraction on that material, Notion was the data source. I even pulled a Notion API Token, connected it through MCP, and had Claude Code directly query my Notion databases.

Notion is good. But it's a different kind of good.

In my system, Notion's role is specific: team collaboration, project Kanban, external delivery portals. It's the interface between me and the outside world. It's where I share things with clients. It's where projects get managed with other people. It does those things better than any other tool I've used.

But it's not my single source of truth. I don't build my core knowledge system on someone else's servers.

This isn't a critique of Notion. It's role separation.

A hammer and a screwdriver are both good tools. But you don't use a hammer to drive a screw. Notion is a superb hammer — team collaboration, database views, beautiful Kanban, it does all of that better than anyone. But when what you need is a fully-owned, programmable, local-first knowledge operating system, you need the screwdriver.

I use Notion for projects. I use Obsidian for my brain.

Both get paid for. Both get used. The question was never which one wins. The question was always which job each one is hired to do.


But I want to go one level deeper. Not about which tool is better. About who owns the understanding.

Steph Ango described certain software business models using a metaphor that made me uncomfortable in the right way. He called it a parasitic pattern. Three stages.

Stage one is acceptance. You're encouraged to outsource everything. "We'll manage it for you." This is fine — it's genuinely helpful. You delegate the complexity and get something valuable in return.

Stage two is extraction. You start paying to solve problems the system created. "Upgrade to Team plan to unlock collaboration." The problems feel like your fault. The solutions cost money.

Stage three is intervention. You've reached a point where getting your own data back costs you. "Export requires the enterprise tier." You pay to return to where you started.

Acceptance doesn't monetize. Extraction and intervention do.

I want to be precise here: this is not about Notion. This is about a structural pattern in any business model where your data lives on someone else's servers. The architecture allows for this drift. Not every company follows the path. But the gravitational pull exists, because the incentive structure permits it.

Obsidian's architecture prevents it from the root. The file is yours. The format is open. There is no mechanism by which the company can hold your notes hostage, because your notes were never in their custody.

Using Obsidian means you understand your own knowledge system. Every folder is one you built. Every tag is one you defined. Every link is one you made. The system is, from top to bottom, a map of your own mental model. Nobody understands it better than you. No product manager's "optimization" can improve it.

Don't outsource understanding.

Those four words are worth ten thousand more. The entire chapter is really just an extended argument for them. Your knowledge isn't a feature of some platform's product. It's the accumulated judgment, methodology, and cognitive framework you've been building for years. That asset lives in local files, or it lives at risk.


At the end of 2025, Obsidian shipped version 1.12. It included a CLI — a command-line interface.

The changelog entry was two lines. The significance was massively underestimated.

CLI means I can operate the Vault directly from the terminal. Open notes. Search. Execute commands. Obsidian went from a GUI application to a programmable platform. It's not just the window you click around in. It's an engine you can drive with code.

Claude Code can now talk directly to Obsidian. Open a note, trigger a search, execute a plugin command. The AI isn't interfacing with your note-taking software through an API layer anymore. It's sitting at the controls.

Add the plugin ecosystem to that. Dataview lets you query your notes with something close to SQL. Templater lets you write JavaScript templates. Canvas gives you relational diagrams. Periodic Notes auto-generates daily and weekly structures. Each plugin is unremarkable on its own. Combined, they turn Obsidian into a programmable knowledge platform.

A CLI. Local file system. Plugin-driven extensibility. Version-controlled with Git.

This is not note-taking software. This is the foundation of an operating system.


One decision made all of this real for me. It seems small. It wasn't.

I chose Git over Obsidian Sync.

Obsidian Sync is the official sync solution. It's easy. It's stable. It's end-to-end encrypted. It's also a black box. You don't know what it synced, what it overwrote, or what happened when two devices edited the same file at the same moment.

I don't need sync. I need version control.

Git is transparent. Every change has a diff. Every conflict has a resolution record. Three endpoints writing to main simultaneously? git pull --rebase handles it. Who changed what, when, and what it looked like before — one git log shows everything.

A black box gives you comfort. Transparency gives you control.

Comfort is a feeling. Control is a capability.

When your system reaches a certain complexity — when three AI agents are writing into your knowledge base simultaneously — you don't need a feeling. You need every byte-level change to be traceable, reversible, and auditable.

That one decision turned Obsidian from note-taking software into infrastructure.

From infrastructure to operating system, there was only one layer missing.

AI.

Chapter 3: The Constitution

There is a file in my Obsidian vault called CLAUDE.md.

It is not documentation. Not a README. Not a setup guide.

It is a constitution.


Most people who use AI as a daily co-pilot think about what to ask. I think about what to forbid. That distinction — between prompting and governing — is what separates a knowledge system that compounds over time from one that slowly falls apart.

The CLAUDE.md file is my system's governing document. It tells the AI what it is never allowed to do, which processes it cannot abbreviate, and when it must stop and wait for a human to say yes.

Why does any of this need to be written down?

Because AI is dangerously helpful.

That's the thing nobody tells you. The scariest moment with an AI agent is not when it makes a mistake. It is when it genuinely believes it is helping you.

You ask it to "tidy up the notes a bit." It interprets "a bit" as permission for a complete reorganization. Three years of accumulated folder structure — reorganized. A hundred files — moved. New hierarchy — imposed. You open Obsidian to a screen full of red broken links. Tags pointing nowhere. The knowledge graph you spent a week building, shattered in ten seconds.

It was not malicious. It was enthusiastic.

That is exactly the problem. An employee who will never say "I don't think this is right," combined with unlimited execution speed, is not a productivity tool. It is a race car with no brakes.

So the first tier of the constitution is called FATAL.

FATAL means what it sounds like. Not a warning. Not a suggestion. Violating a FATAL rule is an unacceptable disaster. There are five of them, and every single one was written in response to something that actually happened.


FATAL-001: Never restructure the file system without explicit authorization.

This is rule one not because it is the most important, but because it is the most frequently violated. AI has a compulsive need to organize. It looks at your folder structure and sees disorder. It wants to build hierarchies, sort categories, make things "clean." But your folder structure is the external expression of your mental model. What looks messy to the AI is navigable to you. What looks clean to the AI is a maze that belongs to no one.

FATAL-002: Archive content, never delete it.

Information loss is irreversible. Deletion is a one-way door. You ask the AI to clean out some outdated notes. It deletes them. Three months later, you need a specific line from one of those files. It is gone. Archiving is different — it is a pause button. The content still exists, it has just moved. This rule is ultimately about respect. Respect for information. Respect for the fact that what seems useless today may be exactly what you need in a context you cannot yet imagine.

FATAL-003: Do not impose a classification system.

AI cannot tolerate unclassified content. It wants to tag everything, build ontologies, find the "optimal taxonomy." But the user's own mental model matters more than any classification framework in the world. Your notes — you know where they live. Your logic — it belongs to you. An AI that reorganizes your categories is like a new intern who rearranges their boss's desk. They think they're helping. The boss thinks they've lost their mind.

FATAL-004: Never break existing links and tags.

In Obsidian, the [[wikilink]] is not formatting. It is an asset. Every link you create is a connection in your knowledge graph — a relationship built by hand, accumulated over months, growing more valuable with time. A broken link is asset depreciation. One careless file move can silently invalidate dozens of connections. Your knowledge graph becomes an archipelago of isolated islands.

FATAL-005: JDex operations must follow SCAN → PLAN → CONFIRM → EXECUTE → VERIFY.

JDex is my file numbering system, built on the Johnny.Decimal methodology. Any structural change must go through a five-step approval process. First, SCAN — assess the current state. Then, PLAN — produce a written proposal. Then, CONFIRM — wait for me to say yes. Then, EXECUTE. Then, VERIFY the outcome. Five steps. None optional. This is not bureaucracy. It is a safety net. Structural changes have wide blast radius. If something goes wrong, rollback is expensive. Making the AI pause and think, and making a human look before anything changes — that is the cheapest risk management that exists.


The knowledge of what connects things matters more than the knowledge of how to categorize them. That line sits at the bottom of my constitution. It is also the entire design philosophy of the document.

Beyond FATAL, there are four SEVERE rules. Violations are not catastrophic, but they require correction. Creating redundant files — you do not need three notes with the same content. Over-nesting folders — once a directory structure goes more than three levels deep, even the person who built it cannot remember the path. Using tags the user has not defined — the tagging system is the user's language, and you do not get to invent words on their behalf. Breaking frontmatter format — structured metadata is the foundation of every automation in the system, and the moment that format degrades, everything built on top of it collapses.

Four rules, one purpose: a quality floor.


Then there are the Eight Honors and Eight Shames.

Yes, that is really what I named them. The Claude Code Eight Honors and Eight Shames. The name is an homage to a Chinese internet meme — a riff on a famous political slogan. But the content is not a joke. It is a genuine behavioral code.

Honor thorough documentation. Shame on guessing APIs. Honor seeking confirmation. Shame on vague execution. Honor human verification. Shame on assuming business logic. Honor reusing existing code. Shame on inventing interfaces.

The remaining four pairs run along the same axis. Every one of them addresses the same underlying failure mode: AI is not bad at doing things. It is too good at doing things. It can guess an API and get it eighty percent right — but that twenty percent gap is enough to bring down a system. It can assume business logic and be plausible — but plausible and correct are not the same thing. It can invent a brand new interface when the codebase already has a perfectly functional one ten lines away.

The entire point of the Eight Honors and Eight Shames collapses into one sentence: admitting you don't know is ten thousand times more useful than pretending you do.


The constitution also contains two mandatory protocols, both related to Git.

The first is the multi-device sync protocol. My vault has three write sources — local Obsidian, a Discord bot, and the GitHub web interface. All three write directly to the main branch. At any given moment, the remote may have content that was pushed seconds ago by another device or another bot.

The rules are hard. Every session begins with git pull. Before writing any file, pull again. Before committing, git pull --rebase. If a push fails — pull --rebase, then try again. If there is a line-level conflict, stop and ask me. If it cannot be resolved, git rebase --abort. Return to a known safe state.

The governing principle: remote content must never be overwritten by local changes.
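The pull-rebase-or-abort logic can be sketched as control flow, with the git invocation injected as a callable so the sketch needs no real repository. The function name and return values are illustrative, not the actual protocol implementation.

```python
# Hypothetical sketch of the sync protocol's failure path.
# `run` stands in for invoking git; it returns an exit code (0 = success).
def safe_sync(run):
    """Rebase onto the remote; on conflict, abort and escalate to a human."""
    if run("pull", "--rebase") != 0:
        run("rebase", "--abort")         # return to a known safe state
        return "escalate to human"       # never auto-resolve a real conflict
    return "synced"
```

The important asymmetry: success is silent, but failure never tries to be clever. An unresolvable conflict ends in a clean abort and a question for the human, not a guess.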

The second is the Git authorization protocol. The AI cannot commit anything until I say "commit" or "commit push." No self-initiated commits. No force push. No amending commits that were not originally the AI's. No git add -A bulk staging.

This sounds bureaucratic. Consider the alternative. An AI agent running all day, with the authority to commit on its own judgment. What does your git log look like? You cannot distinguish what you asked for from what the AI "helpfully" decided to do. Version history becomes noise. The authorization protocol is not about preventing AI mistakes. It is about preserving human control over the system. Those are different problems and they require different solutions.


The final section of the constitution covers Deep Research splitting rules.

Context windows are finite. You feed a 100KB research document into an AI, and it reads the first half clearly. By the second half, it is essentially hallucinating from memory. So: 50KB is the warning threshold — mark the file for splitting. 100KB is the hard limit — the file cannot be used until it has been split. Every split must produce a 00-Index.md navigation page that reassembles the fragments into a coherent whole.

This is not a constraint. It is engineering reality. You cannot feed an AI more information than it can digest and then be surprised when the output is half-formed.
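The thresholds translate directly into code. The byte values mirror the 50KB and 100KB limits stated above; the function name and messages are illustrative.

```python
# The Deep Research splitting thresholds as a simple triage function.
# 50,000 and 100,000 bytes mirror the 50KB warning / 100KB hard limit.
WARN_BYTES, HARD_LIMIT_BYTES = 50_000, 100_000

def triage_research_doc(size_bytes: int) -> str:
    if size_bytes >= HARD_LIMIT_BYTES:
        return "blocked: split required, then add a 00-Index.md"
    if size_bytes >= WARN_BYTES:
        return "warn: mark for splitting"
    return "ok"
```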


At this point you might be thinking: this is a lot of rules for a note-taking app.

You are right that it is a lot of rules.

You are wrong that this is a note-taking app.

You are not configuring a tool. You are designing an organization.

The AI is the employee. CLAUDE.md is the employee handbook. Anyone who has managed people knows what happens to an organization with no handbook. It is not "freedom." It is entropy. AI needs governance more than any human employee, because it will never push back. It will never say "boss, I don't think this is right." It will execute your instructions with one hundred and twenty percent enthusiasm. And somewhere between maximum efficiency and maximum safety sits a knowledge base that got accidentally deleted.

Rules are not restrictions.

Rules are the immune system.

Without them, you do not have a knowledge OS. You have a very fast machine that is perpetually on the verge of eating itself.

Chapter 4: The Skills Arsenal


I have 85+ AI Skills in my system. I'm not going to show you the list.

Lists are boring. What matters is why these tools exist — what problem was screaming loud enough that I sat down and built something instead of just complaining about it.

I'll walk you through five Skills I wrote myself. Each one solves a specific, painful, real problem. Each one has a scar behind it.


obsidian-jdex-steward: The File Structure Steward

This Skill manages folder structure inside my Obsidian vault.

Sounds mundane. It is, in fact, the most dangerous operation in the entire system.

Here's why. AI loves to help you organize things. You say "this folder's getting messy" and it lunges forward with a grand reorganization plan — moves files, creates new categories, tidies everything up. And in the process, it silently snaps every internal link you've built over the past three months. Tags break. Cross-references point to nothing. The mental model you spent a year assembling gets demolished in about three seconds.

The truly terrifying part is that you won't notice immediately. You'll find out two weeks later when you search for a note and it's just... gone. Not deleted — "organized." It's been filed into some sub-sub-folder that made sense to the AI in that moment and makes zero sense to you. This is worse than deletion. At least with deletion, you know something is missing. With "organization," the note disappears into the bureaucratic dark.

So the core purpose of this Skill isn't how to organize. It's how to make organizing safe.

The Skill enforces a mandatory five-step flow.

SCAN runs in read-only mode. No file is touched. It generates a coverage report — what exists, where the gaps are, what appears to be duplicated. It's pure observation.

PLAN produces a minimal change proposal. Not "let me redesign your whole system." Specifically: "I suggest moving these 3 files to this location, for this reason." The plan is constrained to the smallest possible intervention.

CONFIRM shows me a before/after comparison and waits. Full stop. Nothing happens until I say yes.

EXECUTE runs only after explicit approval.

VERIFY checks that every internal link still resolves, confirms no broken references, and updates any index files that need to reflect the change.

The most important step is the second one. The quality standard for PLAN is two words: minimum steps. Not "do the most impressive reorganization." Do the least possible change that actually solves the stated problem. Every proposed change must come with a safety narrative — which files will move, how to roll back if something goes wrong, whether any links could break.
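
The flow is easier to feel in code. Here is a minimal sketch of the gate, in the spirit of the Skill but not its actual implementation (the function names and the file-count report are mine):

```shell
#!/bin/sh
# Hypothetical skeleton of the SCAN -> PLAN -> CONFIRM -> EXECUTE -> VERIFY
# flow. Function names and the report format are illustrative only.

scan() {                         # read-only: count notes, touch nothing
  find "$1" -name '*.md' | wc -l | tr -d ' '
}

plan() {                         # minimal proposal, never "redesign everything"
  echo "MOVE: $1 -> $2 (reason: $3)"
}

confirm() {                      # hard stop: nothing runs without a yes
  printf 'Apply this plan? [y/N] '
  read -r answer
  [ "$answer" = "y" ]
}

execute() { mv "$1" "$2"; }      # the only function allowed to mutate anything

verify() {                       # the moved file must still exist afterwards
  [ -e "$1" ]
}
```

The point of the sketch is the shape: scan and plan only print, and execute, the single function that touches the filesystem, sits behind confirm.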

The underlying address system is Johnny.Decimal. Areas are broad domains. Categories are subdivisions within each area. IDs are specific locations. Each category has standard reserved positions: .00 is the index, .01 is the inbox, .03 is templates, .09 is archive. This gives every file a permanent address. Notes don't get "put somewhere that felt right" — they live at a numbered location in a deliberate system.
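
An address system earns its keep by being trivial to validate and decode. A hypothetical sketch (the reserved positions match my system; the helper functions themselves are invented for illustration):

```shell
# Hypothetical Johnny.Decimal helpers. The reserved positions below come
# from the text (.00 index, .01 inbox, .03 templates, .09 archive);
# everything else is illustrative.

jd_valid() {                     # an ID looks like CC.NN, e.g. 12.04
  echo "$1" | grep -Eq '^[0-9]{2}\.[0-9]{2}$'
}

jd_role() {                      # reserved positions inside every category
  case "${1#*.}" in
    00) echo index ;;
    01) echo inbox ;;
    03) echo templates ;;
    09) echo archive ;;
    *)  echo content ;;
  esac
}
```

Every file's role is computable from its address alone. No one has to remember where the inbox is; position .01 is the inbox, in every category, forever.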

All these rules point to one principle.

Organization is not creative expression. It's an engineering operation with an approval flow.


obsidian-transcript-triage: The Recording Transcript Triage System

A two-hour client meeting. Thirty thousand words of transcript. Half a day of manual processing.

This Skill does it in ten minutes.

What it does is conceptually straightforward: convert a raw transcript into a structured Obsidian note. But the how matters enormously.

Two phases, strictly separated.

Phase one is --plan. The Skill analyzes the transcript but changes nothing. It outputs a standardized Plan: what type of content is this (meeting, lecture, conversation), how long is it, what project does it relate to, how confident is the system in that classification, and where it proposes to file the note. I read this. I decide whether it's right.

Phase two only runs after I say "yes" or explicitly call --apply. Between plan and apply, there is a wall. The wall is made of human review.

The output contains five standard modules.

TL;DR — ten bullet points or fewer. The whole meeting on one screen. You shouldn't need to re-read the transcript to know what happened.

Action items — checkbox format, immediately executable. Each item tags a responsible person and a deadline. Not vague summaries — specific commitments.

Decision records — who decided what, and when. This is the module you'll be grateful for six months later when someone asks "but didn't we agree to..." and you need to show them they're misremembering.

Quotes with timestamps — verbatim, with the original time reference. Not the AI's paraphrase. Not a summary. The actual words, anchored to the moment they were spoken. This is evidence in the legal sense of the word. If a dispute arises about what was promised, the timestamp tells you exactly where to scrub in the recording.

Risk signals — in Deep mode, this is mandatory. Anything touching money, contracts, or commitments. Every single one. No exceptions.

The Skill has two operating modes. Quick is for daily standups and routine syncs — TL;DR plus action items, done. Deep is for anything involving partnership, negotiation, money, or contracts. Deep mode adds a layer that Quick doesn't have: role analysis (what does each party actually want, what are they afraid of, where is their real floor), a commitment inventory (what was promised by whom and how verifiable is it), and a risk register capped at five items with original-text evidence for each.

There's one more mechanism worth highlighting: low-confidence routing. If the Skill can't determine which project this transcript belongs to, it doesn't guess. It files the note in _Unsorted and generates an A/B/C multiple choice question for me to resolve. Not a guess. Not a "most likely" assumption. An explicit hand-off.

When uncertain, say uncertain. This sounds simple. Most AI systems can't do it. They'll produce the most plausible answer and commit to it with complete confidence, and three months later you discover that an important meeting record has been filed under the wrong project. The Skill is built specifically to prevent this failure mode.
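
The routing logic is almost embarrassingly simple. A hypothetical sketch, with an invented 80-percent threshold and invented option text:

```shell
# Hypothetical low-confidence routing. The 80% threshold and the option
# wording are mine; the behavior (file to _Unsorted and ask, never guess)
# is the actual design.

route() {                        # args: confidence (0-100), best-guess project
  if [ "$1" -ge 80 ]; then
    echo "file under: $2"
  else
    echo "file under: _Unsorted"
    echo "Which project? A) $2  B) another project  C) none of these"
  fi
}
```

Everything above the threshold files normally. Everything below it becomes a question for me instead of a guess.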

The design philosophy behind the two-phase architecture is worth spelling out. Analysis notes live in the project directory — lightweight, scannable, action-oriented. The raw transcript lives in the archive directory, all thirty thousand words of it, untouched, available when you need it. The analysis note links back to the original via a single wikilink.

Why keep both? Because day-to-day, you don't read thirty thousand words to recall what happened in a meeting. You need "who said what, what are the to-dos, where are the risks." But on the day someone challenges a decision or you need to verify a specific claim, the raw record is right there.

One for efficiency. One for truth.


obsidian-devlog: Project Log Automation

Code done. Project notes empty.

Every developer knows this feeling. You've been deep in the codebase for a week. The git log has forty commits, each one a documented decision. You open your knowledge base to check the project notes and find... nothing. Because after building something, the last thing anyone wants to do is write a narrative about what they built.

This Skill bridges that gap. Run it inside a code project directory, and it reads the git history and generates a formatted Obsidian project note.

Five core functions.

Append Dev Log turns the commit history into a human-readable project journal. Newest entries at the top. Old entries untouched. Every run adds to the record, never replaces it.

Update Stack Table detects what technologies the project uses and maintains a structured table. Languages, frameworks, key dependencies — all tracked automatically.

Update Key Files Table lists critical files and entry points. When you come back to a project after three months, this is the map that tells you where to start reading.

Generate Architecture Canvas produces a visual diagram using Obsidian's canvas format. Code structure rendered as a navigable map.

Full Sync runs all of the above in sequence.

The governing design principle is append-only, never overwrite. New Dev Log entries are inserted at the top of the existing log. Historical entries are never modified. The Stack and Key Files tables don't update silently — the Skill shows a diff and waits for my confirmation before writing. This isn't autonomous automation. It's automation with a human checkpoint at the last mile.
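
Append-only is a one-function idea. A sketch of the kind of helper involved (hypothetical; the real Skill's internals may differ):

```shell
# Hypothetical append-only update: the new dev-log entry goes on top of the
# file, and existing entries are copied through untouched.

prepend_entry() {                # args: log file, entry text
  tmp=$(mktemp)
  printf '%s\n\n' "$2" > "$tmp"
  if [ -f "$1" ]; then
    cat "$1" >> "$tmp"           # history is appended below, never rewritten
  fi
  mv "$tmp" "$1"
}
```

Newest on top, history untouched. The structure of the file enforces the principle so that no one, human or AI, has to remember it.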

The problem this Skill is solving is a specific kind of information asymmetry. Your GitHub repository holds the history of actions — every commit, every change, every decision captured in code. Your knowledge base should hold the history of cognition — what you learned, what dead ends you hit, what architectural choices you made and why. Without intentional bridging, these two histories stay permanently disconnected. The code history grows richer over time. The knowledge base stays empty.

This Skill makes the bridge automatic enough that it actually gets used.

Code repositories are history of action. Knowledge bases are history of cognition. You can't afford to lose either.


pku-gsm-distill: The MBA Knowledge Extraction Pipeline

I attended Peking University's Guanghua School of Management for my MBA. Fifty-three courses. The kind of investment that makes you not want to let a single concept go to waste.

This Skill is how I made sure it didn't.

The architecture has three layers.

Source is raw material — lecture slides, transcripts, handwritten notes. Nothing modified, everything preserved exactly as it came from the classroom.

Distill is extraction — pulling Concepts and Skills out of the source material. Each Concept gets a precise definition, a list of applicable scenarios, and evidence anchors back to specific source materials. Each Skill gets the same treatment, with an emphasis on executable procedure rather than abstract theory.

Apply is the layer that separates learning from knowledge. Every Concept and Skill must be linked to a real business situation where it was actually used. Not "I know the sunk cost fallacy." Where, specifically, have I applied this reasoning to a decision I made?

The results, in concrete terms: 152 Concepts. 52 Skills. 173 Sessions. 44 Maps of Content. Cross-course deduplication complete.

The core mechanism is what I call the dual-link rule. Every piece of extracted knowledge must link to two things simultaneously: an evidence source and an application context. A Concept without evidence is assertion — you're claiming to know something without being able to point to where you learned it. A Concept without application is academic — you've stored a definition but haven't demonstrated that it changes how you think or act. The dual-link rule forces every extracted insight from "I know about this" to "I have used this."
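
Mechanically, the dual-link rule is just a gate a note either passes or fails. A hypothetical check, with invented field names (evidence:: and applied:: are mine, not the pipeline's actual schema):

```shell
# Hypothetical dual-link gate: a concept note must link both an evidence
# source and an application context. Field names are illustrative.

dual_link_ok() {                 # arg: path to a concept note
  grep -q '^evidence::' "$1" && grep -q '^applied::' "$1"
}
```

A note that fails the gate isn't knowledge yet. It's either an assertion or an abstraction, and either way it goes back for another pass.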

Processing fifty-three courses required industrial-scale AI collaboration. The strategy that made it tractable was bucketing. Every course got classified into one of four categories: RICH (complete text available), SKELETON (headings and structure but minimal content), MINIMAL (almost nothing to work with), SKIP (not worth processing). This classification determined what kind of extraction was even possible.
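
Bucketing can be as crude as a word count. A sketch with invented thresholds (only the four labels come from the actual pipeline):

```shell
# Hypothetical bucketing by usable body size. RICH/SKELETON/MINIMAL/SKIP
# are the real categories; the word-count cutoffs are invented.

bucket() {                       # arg: word count of usable course text
  if   [ "$1" -ge 5000 ]; then echo RICH
  elif [ "$1" -ge 500  ]; then echo SKELETON
  elif [ "$1" -ge 50   ]; then echo MINIMAL
  else                         echo SKIP
  fi
}
```

The label isn't a grade. It's a contract about what extraction is allowed to produce: full distillation for RICH, structure-only for SKELETON, and nothing invented for anything.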

Multiple agents ran in parallel. A data-fetcher pulled raw material. Three distillers processed simultaneously. An auditor ran quality checks on everything the distillers produced. A coordinator managed dependencies between courses where concepts built on each other. A global deduplication registry ensured that if three courses all touched game theory, the system recognized this and linked them rather than creating three separate concept entries.

The whole pipeline ran in nine days. My conservative estimate for doing this manually: three months.

There is a failure incident from this project worth telling honestly.

During the Management Economics extraction, the AI fabricated data. The source material for that course was SKELETON — headings, structure, essentially no body text. The distiller, finding nothing concrete to extract, invented case studies and statistics that sounded plausible. Confidently. Convincingly. The auditor caught it.

This is why the bucketing strategy matters. SKELETON courses get skeleton extraction — structure only, no fill. The rule is explicit: if the source doesn't contain it, you don't generate it. You mark it as absent. You don't approximate. You don't hallucinate a convincing substitute.

AI will not restrain itself from fabrication out of good character. That restraint has to be engineered into the rules.

I paid a significant amount of money for that MBA. This Skill turned those 53 courses into a searchable, cross-linked, reusable knowledge asset. The return on investment is incalculable.


uncle-j-x-ops: X Operations Automation

Running an X presence is physical labor. Every day: scan the timeline, find posts worth engaging with, write replies, compose threads, format long articles. Every one of these steps is automatable.

Except one. You cannot automate personality.

This Skill exists at the intersection of "what can be mechanized" and "what must stay human."

Three operating modes.

Reply-surfing uses Playwright browser automation to scroll through the timeline, collect five screens of posts, analyze them for engagement potential, and draft three to five replies in my voice — targeting posts where a thoughtful response would generate real conversation rather than empty interaction. The drafts surface for my review. I read them. I decide which ones go out. Nothing publishes without that step.

Thread publishing takes a long-form piece, breaks it into a tweet chain where each post carries exactly one core idea, leads with a hook, and closes with a call to action. Playwright handles the mechanical posting. The content was already reviewed at the drafting stage.

Article publishing parses a Markdown long-form piece into structured JSON, then uses clipboard injection and Playwright to populate X's Article editor. The output is a draft — not live. One more confirmation before anything goes public.

The technical foundation is Playwright with persistent Chrome profile sessions. The system uses my actual logged-in browser, so there's no authentication friction. Headless mode is set to false — you can watch what the browser is doing. No black boxes. No "trust me it worked."

But the technical architecture is the less important part of this Skill. The more important part is the IP consistency guard.

Every piece of content passes three checks before it can appear in a send queue.

The boundary check asks three questions. Am I instructing people how to live? Am I presenting as certain something I actually don't know? Am I manufacturing anxiety to drive engagement?

The humanity check asks whether there's a genuine personal position here — not a neutral summary, not a hedge. Does this sound like something I would actually say? Does it acknowledge uncertainty where uncertainty exists?

The long-term risk check asks whether, six months from now, I would want to delete this. And whether publishing it would undermine the reputation I'm actually trying to build — which is "builder, not guru."

Any one of these three checks failing removes the piece from the send list. Not flagged. Removed.

One piece of platform knowledge worth sharing explicitly: the X algorithm doesn't treat engagement types as equal. A reply carries roughly 13.5 times the weight of a like. A retweet carries 20 times. This means that five likes and zero replies is algorithmically almost invisible, while one good reply thread can drive significant reach. More specifically: reply-to-reply — when someone responds to you and you respond back to them — is the highest-weight interaction the algorithm recognizes.
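
To make the asymmetry concrete, here's a back-of-envelope scorer using those reported weights (the exact numbers circulate from X's open-sourced ranking code and should be treated as approximate; the function is mine, not the algorithm's):

```shell
# Back-of-envelope engagement score using the reported relative weights:
# like x1, reply x13.5, retweet x20. Weights are approximate.

engagement_score() {             # args: likes, replies, retweets
  awk -v l="$1" -v rp="$2" -v rt="$3" 'BEGIN { print l * 1 + rp * 13.5 + rt * 20 }'
}
```

Five likes score 5. A single reply scores 13.5. The arithmetic is the whole strategy.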

The implication for reply-surfing strategy: picking the right post to reply to matters as much as writing a good reply. You're looking for posts where your perspective will provoke a real response, not just passive appreciation. The Skill handles the first two steps of this chain — identifying high-value targets, drafting quality replies. The third step — the back-and-forth when someone engages — is mine to handle, because that requires genuine real-time thinking, not templated responses.

Efficiency can be stacked with machines. Personality can only be guarded with rules.


The Design Philosophy Behind All Five

These five Skills handle completely different domains — file structure, meeting transcripts, project logs, course content, social media. They have almost nothing in common on the surface.

Under the surface, they share the same architecture.

Every one of them enforces separation between plan and execution. The Skill proposes. I decide whether to proceed. This separation exists in JDEX's SCAN→PLAN→CONFIRM flow, in Transcript Triage's --plan/--apply split, in Dev Log's diff review before writes, in the MBA pipeline's bucketing and auditing, and in X Ops's draft review queue.

Every one of them maintains a human checkpoint at the last mile. The Skill does the analytical heavy lifting. The execution step — the irreversible action — waits for explicit approval. Not "I'll approve unless you say no." Approval must be affirmative.

Every one of them encodes "if uncertain, say uncertain" as a hard rule. JDEX doesn't make structural changes it isn't sure about. Transcript Triage routes low-confidence classifications to _Unsorted rather than guessing. Dev Log shows diffs rather than silently overwriting. The MBA pipeline marks SKELETON content rather than filling gaps. X Ops removes rather than flags content that fails a consistency check.

This last rule is the one most AI systems fail at. The default tendency of a language model is to produce a confident answer. Confidence is the trained behavior. Uncertainty is something you have to engineer explicitly. If you don't build a rule that says "when you don't know, stop and say so," the system will generate a plausible answer and commit to it — and you won't know the difference until the damage is done.

This isn't humility as a design aesthetic. This is engineering discipline.

AI executes. Humans decide. The line between those two roles, drawn clearly and enforced rigorously, is what makes the system more reliable with every passing month instead of slowly drifting into something you no longer trust.


Chapter 5: The Hooks Lifecycle — Three Scripts That Close the Loop

Operating systems have a concept called interrupt handling.

The CPU is running a program. An external signal arrives. The system pauses whatever it's doing, handles the interrupt, then resumes. You don't schedule this manually. The OS knows when to respond. It just does.

Claude Code's Hook system is that same logic.

I wrote three hooks. Together they're under 200 lines of shell script. But those 200 lines are what transforms Claude Code from a conversation tool into something with memory, discipline, and an audit trail. Into something that actually behaves like an OS.

Boot self-check. Write-after validation. Shutdown log. Three moments. Three hooks. One closed loop.


session-orient.sh — The Boot Self-Check

Every time I open Claude Code and start a new session, this hook runs automatically. Before I say a word, it speaks first.

It does four things.

First, it shows the vault structure — what the top-level directories look like, which five files were modified most recently. Second, it checks inbox backlog — how many unprocessed items are sitting in each inbox. Third, it displays git status — uncommitted changes, the last three commits. Fourth, and most critically: it checks whether local and remote are in sync. If the remote has new content — if behind is greater than zero — it automatically surfaces a warning to git pull.
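
The sync check is the part worth showing. The real hook reads git directly; here I split the counting from the warning so the logic is visible. One way to get the counts is git rev-list --left-right --count @{u}...HEAD, which prints behind and ahead as two numbers (verify the column order against your git version before trusting it):

```shell
# Sketch of the behind-check from a boot hook like session-orient.sh.
# The real script differs; this isolates the decision logic. Input is a
# "behind ahead" pair, e.g. from:
#   git rev-list --left-right --count @{u}...HEAD

sync_check() {                   # arg: "behind ahead"
  behind=$(echo "$1" | awk '{print $1}')
  if [ "$behind" -gt 0 ]; then
    echo "WARNING: remote has $behind new commit(s) - run git pull first"
  else
    echo "local and remote in sync"
  fi
}
```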

Why does this matter?

An AI without context is dangerous.

Imagine you tell Claude, "help me organize my notes." But it doesn't know what you changed last night. It doesn't know that your inbox has twelve unprocessed voice transcripts sitting in it. It doesn't know that someone — or something — pushed new content to the remote. Maybe you yourself pushed a note from your phone via a Discord bot while you were away from your desk.

Without that awareness, Claude walks into the room blind. It might redo work you already did. Worse, it might overwrite content someone else just pushed. A well-meaning AI with no situational awareness is genuinely dangerous in a multi-endpoint knowledge system.

So: no touching anything before taking a look at the scene.

The actual experience is almost meditative. I open the terminal, type claude, hit enter. Before I've said anything, a screen's worth of information has already appeared. Vault structure. Inbox backlog count. Recent file changes. Uncommitted work. Remote sync delta. One scan and I know exactly where to start.

That's environmental awareness.

A human walks into the office and glances at their desk. Checks the calendar. Asks a colleague "what happened with that thing yesterday?" Claude needs this same orientation process. Its colleagues are git commits. Its calendar is the inbox. Its desk is the file system.

You give it eyes, or it's flying blind.


write-validate.sh — The Write-After Check

Every time Claude Code writes a .md file to the vault, this hook fires automatically.

It does two things. One manages structure. One manages wisdom.

The first: frontmatter validation. Every note needs three pieces of identity documentation. Does the file start with ---? Does it have a tags field? Does it have a date or created field? Miss any one of these, and a warning lights up in the terminal.

Why are these three things non-negotiable? Because notes without frontmatter are islands. They can't be found, can't be categorized, can't enter the knowledge graph. A vault with a thousand .md files, two hundred of which have no tags — that's two hundred invisible nodes. The content exists. The system just can't see it.

Metadata is the skeleton of the knowledge graph. Without it, you don't have a graph. You have a pile.
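
The frontmatter check itself is three grep lines. A sketch in the spirit of write-validate.sh (the warning strings are mine, not the hook's actual output):

```shell
# Minimal frontmatter validation sketch: does the file open with ---, does
# it have a tags field, does it have a date or created field. Warnings are
# printed but the hook never blocks the write.

validate_frontmatter() {         # arg: path to a .md file
  ok=0
  head -n 1 "$1" | grep -q '^---$'   || { echo "WARN: no --- frontmatter"; ok=1; }
  grep -q '^tags:' "$1"              || { echo "WARN: missing tags field"; ok=1; }
  grep -Eq '^(date|created):' "$1"   || { echo "WARN: missing date/created"; ok=1; }
  return $ok
}
```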

The second thing is more interesting: the Naval Decision Radar.

The trigger conditions are specific. Path contains 14-日记 or 01-J叔笔记 — diary entries and personal notes. Content contains decision keywords: "decide," "choose," "whether to," "hesitate," "weigh," "regret," "leverage." Both conditions must be true for the radar to ping.

When it fires, three questions quietly appear in the terminal.

The asymmetric payoff question. You're asking "should I or shouldn't I" — Naval would ask: does this have asymmetric upside? If winning means gaining a lot and losing means losing a little, do it without hesitation. If it's the other way around, walk away. Most decisions you're agonizing over don't need a SWOT analysis. They just need you to check whether the risk and the reward are shaped differently from each other.

The leverage test. Can this keep running without you? If the answer is no, you're not building a business — you're doing a job. Naval's definition of leverage is simple: code, media, capital, people. Things that can be copied have leverage. Things that can't be copied can only sell your time.

The ten-year regret test. At eighty, will you regret not doing this? The question extends your time horizon to a lifetime. Most things that feel agonizing in the moment become obvious at ten-year scale. Either "of course I should have done it" or "that didn't matter at all."

Why Naval Ravikant specifically?

Because his decision philosophy has one distinctive property: minimal frameworks for complex choices. No decision matrices. No pros-and-cons lists. Three questions cut most decisions to their essence. And the questions are "asymmetric" in a specific way — they don't give you an answer. They give you a different angle from which to ask the question.

That's consistent with how I think about AI system design. The system doesn't make decisions for you. It helps you make better ones.

Why make it non-blocking?

Because flow states can't be interrupted. You're in the middle of recording an idea, at the critical moment, fully absorbed — and a modal dialog pops up demanding you "complete the Naval Decision Check before continuing." That destroys the whole thing. So instead the radar just quietly illuminates one line in a terminal corner. If you see it, you stop for two seconds and think. If you don't catch it, that's fine too. The next time you open that note, the decision keywords will still be there. The radar will ping again.

The Naval Radar is philosophy embedded into the system.

I later used the same pattern to build two additional radars. The Review Radar — when reflection keywords appear in diary entries or monthly reviews ("lesson learned," "got it right," "got it wrong"), it surfaces a five-step review framework. The Pricing Radar — when pricing keywords appear in project or business directories ("quote," "rate," "price point"), it surfaces a pricing decision framework. Three radars, one design pattern: keyword trigger, path restriction, non-blocking prompt. They don't interrupt you. But at the moments when clarity matters most, they put the framework right in front of you.
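
All three radars reduce to one generic function: a path restriction, a keyword regex, a printed line, and a guaranteed zero exit code so nothing ever blocks. A hypothetical version (the arguments in the example are illustrative, not my actual keyword lists):

```shell
# Generic radar pattern sketch: fire only when the file path matches a
# directory pattern AND the content matches a keyword regex. Print one
# line; always exit 0 so the write is never blocked.

radar() {                        # args: file path, path pattern, keyword regex, message
  case "$1" in
    *"$2"*)
      grep -Eq "$3" "$1" && echo "RADAR: $4"
      ;;
  esac
  return 0                       # non-blocking by construction
}
```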


session-capture.sh — The Shutdown Log

Every time a Claude Code session ends, this hook runs automatically.

It does one thing: generates a session log. Which files were modified. Which new files are untracked. How many uncommitted changes remain. If there are uncommitted changes, it adds a warning callout: "remember to review and manually commit."

Note: it does not auto-commit. The constitution's Git authorization rule is explicit — you only commit when the user says "commit." Auto-commit means overstepping. Even at shutdown, the system doesn't act unilaterally. Discipline isn't something you maintain only when it's convenient.

Session logs are saved to 00-系统维护/06-session-logs/, archived by month into subdirectories, named by timestamp. Each one has standard frontmatter and carries two tags: session-log and auto-generated.

I have 450+ session logs. I didn't write a single one manually. Every one was generated automatically by the system.

This is both an audit trail and a memory extension. Three months from now, when I want to know "what exactly did I change on the night of February 15th" — I open the session log. No digging through git log. No trying to remember. No guessing. The system remembers everything so I don't have to.

It's not that my memory is bad. It's that a system with memory is a reliable system. One without it is just hoping nothing goes wrong.


Hooks vs. Skills — The Determinism Distinction

Someone will inevitably ask: what's the difference between a Hook and a Skill?

The difference is determinism.

Skills are probabilistic. You tell Claude "organize my notes" and it might trigger the jdex-steward skill, or it might not. Depends on context. Depends on phrasing. Depends on how Claude "interprets" the request in that particular moment. You can't guarantee it runs correctly every time.

Hooks are deterministic. Session start always runs session-orient. File write always runs write-validate. Session end always runs session-capture. Not dependent on AI judgment. Not affected by context. Mechanically, certainly, every single time.

Determinism is the bedrock of system reliability.

When you most need a process to run, it doesn't fail because the AI "forgot." The hook doesn't have a mood. It doesn't interpret anything. It just runs.


The Closed Loop

Three hooks. One loop.

Boot knows the situation. Writing has quality gates and philosophical guardrails. Shutdown leaves a record. From the moment a session starts, the system is remembering, validating, and auditing everything on your behalf.

Under 200 lines of shell script. And they transform Claude Code from a chat window into an operating system with shift logs.

Shift logs seem mundane. Nobody thinks about them until something goes wrong. When that day comes, you'll be glad every shift was recorded.

That's the whole point of building the system this way. Not for the days when everything works smoothly. For the one day when it doesn't — and you need to know exactly what happened.

Chapter 6: The Content Pipeline

Most people's writing process goes like this: get an idea, sit down, write, publish.

Mine doesn't.

Mine has seven phases. One idea goes in. Seven platforms get content out. In between, there's a full industrial-grade pipeline — with quality control, sorting stations, inventory, and distribution. Not a metaphor. An actual system with actual checkpoints.


Phase 0 is topic selection. Most people chase trending topics. My system doesn't start there. The first question is IP alignment — does this topic connect to AI business insight? Does it intersect with something I've actually done in the field? If the answer is no, I don't write it. Doesn't matter how much traffic it might get. Every misaligned article dilutes your IP concentration. It's like pouring water into wine — do it enough times and there's no wine left.

Once a topic passes the alignment check, the system looks for a hook. What's the most counterintuitive angle here? What would surprise a reader who thinks they already know this? Then it pulls material from the stockpile — not sourced on the spot, but accumulated over time through the knowledge base.

Phase 1 is the first draft. I build the skeleton using the story-flow template: a hook up front, four narrative blocks, a closing that lands the point. The template isn't there to make every article look the same. It's there to make the structure solid so I can do real work on the substance. After the draft, a language quality check runs automatically.

Phases 2 through 4 are the polish rounds. Three of them. First round: style check. Did I write "first, second, finally"? Did any paragraph run longer than four lines? Is there any preachiness? Second round: IP consistency check — is the perspective right? Is there real fieldwork in here? Am I saying something honest? Does this sound like me? Third round: story continuity, making sure the narrative thread doesn't break.

Three rounds sounds tedious. AI finishes all three in under two minutes.

Phase 5 is final draft and title. The system generates five candidate titles. Then a pre-publish check runs. The title determines open rate. The body determines share rate. Neither one gets to be vague. Once a piece passes the check, it moves to the staging area.


Phase 6 is multi-platform splitting.

This is the most aggressive step in the entire pipeline.

One master article. One command. Splits into versions for seven platforms.

Long-form piece for WeChat public account. An image-text set of six to nine cards for WeChat. Three Xiaohongshu sets — one narrative angle, one practical angle, one quotable angle. A main post plus a Thread for X/Twitter. Three versions for Moments. A short video storyboard with voiceover and AI image prompts.

One article goes in. Fifteen pieces of content come out.

Right now the ammo depot has seventy pieces of ready-to-deploy content. Covering X, Xiaohongshu, LinkedIn, Zhihu, Moments, video. Deployable to any platform at any time. This isn't a drafts folder. A drafts folder is where you park things you're too lazy to publish. This is ammunition inventory. People who fight wars don't manufacture bullets mid-battle. They keep the depot stocked.


Across all seven phases, there's one test that cuts through every checklist combined.

If you swapped out the byline, would this article still hold up?

If yes — it has no IP distinctiveness. Rewrite it.

This test goes straight to the question that actually matters. Are you writing a good article, or are you writing an article only you could have written? The world already has plenty of good articles. One more or less makes no difference. But an article only you could write is scarce. Scarce things have pricing power.

A style checklist helps you avoid mistakes. It can't help you find yourself. This test can. Because it forces you to answer: where is your fieldwork in this piece? Where is your perspective? Where is your judgment? If none of those are present, you're moving information around, not building an IP.


Every article has a physical location in the pipeline at all times.

Inbox is idea capture — everything goes in, no filter. Drafts are active work in progress, version numbers incrementing from v1 to v3. Review is the staging area — pieces can only enter after passing the pre-publish check. From there, content distributes to platform-specific folders: WeChat, Xiaohongshu, X, LinkedIn, Zhihu, Moments, video, one directory per platform. After publishing, everything goes into the published archive.

Files flow left to right. Every stage has an entry point and an exit. Nothing gets stuck in the middle with nowhere to go. No finished pieces buried inside drafts. No published content that can't be found later.
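
One way to enforce "every stage has an exit" is to make the stage order itself a function, so a piece can only ever move to the next named stage. A hypothetical sketch with simplified folder names:

```shell
# Hypothetical one-way pipeline: each stage has exactly one exit, and a
# piece can never silently skip ahead or move backwards. Stage names are
# simplified from the actual folder structure.

next_stage() {                   # arg: current stage
  case "$1" in
    inbox)    echo drafts ;;
    drafts)   echo review ;;
    review)   echo platform ;;
    platform) echo published ;;
    *) echo "unknown stage: $1" >&2; return 1 ;;
  esac
}
```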


The least visible but most important feature of this pipeline is that it flows in both directions.

Where does the material come from? From the knowledge base.

The reasoning arsenal has 106 judgment units. Each one is a concrete, arguable position that can expand into a core article thesis — not vague claims like "AI matters," but specific, stake-taking, debate-worthy judgments. The dialectical inversion module has 25 thesis-antithesis-synthesis analyses, which are naturally counterintuitive topic angles: the reader expects you to argue A, you open with A, then complicate it, then land somewhere they didn't anticipate. The economics module has over seventy theoretical notes — ammo for business analysis. The 152 concepts distilled from PKU-GSM coursework are a shortcut for bringing MBA-level rigor into articles.

None of this is "reference material." This is the raw upstream input for the pipeline. Without it, the pipeline runs on empty no matter how well-engineered it is.

Where does the output go? Into the ammo depot, the platform content folders, the published archive. But it also flows back into the knowledge base. New insights that surface during the writing process get archived to the methodology extraction folder. A new judgment unit that emerges mid-draft gets added to the reasoning arsenal. A model developed while analyzing a case study goes into the economics system.

The knowledge base feeds the content. The content refines the knowledge base. It's a loop, not a line.


The loop compounds over time.

The thicker the knowledge base, the more ammunition the content pipeline has to draw from. The more you write, the sharper the methodology in the knowledge base becomes. By the third year, you'll notice that topic selection gets faster — because the judgment units in the arsenal have already done most of the thinking for you. The polish rounds get lighter — because the IP's tonal identity has become muscle memory, not a checklist.

Go one level deeper. Knowledge becomes content. Content generates audience. Audience converts to revenue. Revenue signals what to learn more of. These aren't four separate actions. They're a flywheel. Once it's turning, each revolution takes less effort than the last. Each revolution produces more than the one before.


Most people treat writing as a single action. Sit down. Write. Publish.

It isn't a single action.

It's a pipeline. A loop. A machine.

Content is not the end product. Content is the exhaust of a knowledge system running.

The system runs. The exhaust is what you publish.

Chapter 7: Git as Truth

Someone asked me why I'm so obsessed with Git.

Isn't that a version control tool for software engineers?

No. Not for me. Git is not version control. Git is organizational memory.

Let me explain what I mean by that.

When I want to know what I actually accomplished this month, I don't sit down and try to remember. I don't consult a feeling. I open git log and I read the diff. Every line changed has a record. Every commit has a timestamp. What I think I did doesn't matter. What the data says is what happened.

That distinction — between memory and evidence — is the whole point.


This system runs three endpoints, all writing into the same Git repository simultaneously.

The first endpoint is local Claude Code. This is the main battlefield. I sit at my desk, work through problems with Claude, write articles, process audio transcripts, build new systems. Every change becomes a commit. The trail of thinking, the decisions made, the reversals and restarts — all of it lives in git log.

The second endpoint is a Discord Bot running in the cloud. It handles automated tasks: daily X operations reports, data pulls from various platforms, scheduled workflows. It doesn't need me present. It pushes to the repository on its own. Three in the morning, a daily digest gets committed. A scheduled data collection runs and commits its output. No one touching a keyboard.

The third endpoint is GitHub Web. I'm on the subway. I'm at dinner. I don't have my laptop. I open a browser on my phone, edit directly in GitHub's web interface, commit. Thirty seconds. Done.

Three endpoints. One main branch. No feature branches, no pull requests, no code review. Direct writes to main.

I know how that sounds.


Three concurrent write sources pushing to main without coordination — that's a recipe for data loss. And yes, it terrifies me. Which is why the conflict resolution protocol is written into the system's constitution, not just remembered.

The rules are simple and non-negotiable. At the start of every session: pull. Before writing any file: pull. Before committing: git pull --rebase. If push fails: pull-rebase, resolve, push again. If there's a line-level conflict that can't be resolved automatically: stop and ask me. If it's genuinely unresolvable: git rebase --abort, return to safe state, start over.
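That sequence compresses into a small shell function. What follows is a minimal sketch of the pull-rebase-push loop, not the system's actual hook; the function name is mine:

```shell
# safe_push: never push over unseen remote work.
# Pull with rebase first; if the rebase hits a conflict it cannot
# resolve cleanly, abort back to a safe state and hand control to
# a human instead of forcing anything.
safe_push() {
  if ! git pull --rebase; then
    git rebase --abort
    echo "conflict needs human review" >&2
    return 1
  fi
  git push
}
```

If the push itself fails because another endpoint landed a commit in the gap, running it again repeats the cycle, which is exactly the "pull-rebase, resolve, push again" rule.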

And the cardinal rule, written in CLAUDE.md at constitutional level:

Remote content must never be lost to a local overwrite.

This sounds obvious. It isn't. The failure mode it guards against is subtle: a local endpoint that hasn't pulled recent changes makes modifications and pushes, silently overwriting work that came in from the cloud bot or a mobile edit. No error. No warning. The content just disappears.

The rule means: I would rather have the system stop and throw an error than have it silently win a conflict. Noisy failure over quiet data loss. Every time.


People ask why I don't use Obsidian Sync.

Because Sync is a black box.

You don't know what it synchronized. You don't know what it overwrote. Two devices edit the same file simultaneously — what happens? Which one wins? Obsidian doesn't tell you. And when something goes wrong, there's nowhere to even begin investigating.

Git is the opposite of a black box. Every change has a diff. Every conflict has a resolution record. Every commit has a timestamp, a message, an author. Who changed what and when is permanently auditable. Not "probably synced" — exact line-level change history.

That's the difference between a synchronization tool and a truth source.


My monthly reviews are Git-driven.

I don't open a blank document and try to reconstruct what I did. I open git log and read it commit by commit. Each entry has a timestamp, a description, and a diff. Which files changed. How many lines added, how many deleted. All of it traceable.

This isn't recollection. This is reconstructing the scene.

February 2026 data: 452 commits by the 23rd. Twenty-three active days, continuous, no gaps. Twelve X Articles across four topics and three language variants each. Eight systems shipped — an RSS daily digest, a PKU-GSM knowledge extraction pipeline, a five-engine X operations system, full ammo pipeline automation, the SFA AI Sales Coach project, a CEO knowledge base, a fractal journal system, and Continuous Learning V2.

The peak was the SFA project. Sixty-one commits in forty-eight hours. Zero to 52 specs fully delivered, 2,716 tests passing green, 105,000 lines of code.

One person.


Git also makes you honest about time in ways that are uncomfortable.

You think you had a busy month. Look at the commit distribution. You think a project took a week. Git log shows it took two days — you were doing something else for the rest of the week and didn't notice. You think you invested heavily in a system. The diff line count shows 200 lines changed. The thing that actually consumed your time was a direction you weren't consciously tracking.

Git makes you honest with yourself. The data doesn't accommodate narrative.


Here is a real story.

One day a shell script called sync-skills.sh hit a bug. The script's job was to synchronize Claude Code skill files into the Obsidian knowledge base. Sounds simple. It was a one-way sync task.

But the script combined rsync --delete with rm -rf, and its logic was inverted.

1,174 files. Gone. Instantly.

Not moved to trash. Erased from disk. Notes, methodology documents, system configurations, months of accumulated work — all of it.

If there had been no Git, those files would have been gone permanently.

But there was Git.

One command: git checkout -- . A few seconds. Every single file came back untouched.

Then I spent six iterations rewriting sync-skills.sh from v2.0 to v3.5. Non-destructive sync, dual-source detection, global priority rules, no overwrite of existing content, optional refresh mode. Every version's changes are in git log. Every round of thinking has a commit message.

The lesson I wrote down after that:

A sync script must never assume "absent from source means should be deleted."

Obsidian is a knowledge archive, not a 1:1 mirror of a source directory. A knowledge base contains content the user added manually, notes flowing in from other channels, historical accumulation that predates the sync relationship. A sync tool's job is to add, not to cleanse. Delete authority should always belong to a human.
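Reduced to its core, "add, never cleanse" looks something like this. A minimal sketch in plain shell, not the actual sync-skills.sh (the real v3.x also does dual-source detection and priority rules):

```shell
# sync_additive: copy files from src into dst, creating what is
# missing and never touching anything that already exists in dst.
# Deletion is simply not in this function's vocabulary.
sync_additive() {
  src="$1"; dst="$2"
  ( cd "$src" && find . -type f ) | while IFS= read -r f; do
    mkdir -p "$dst/$(dirname "$f")"
    # no overwrite: an existing destination file always wins
    [ -e "$dst/$f" ] || cp "$src/$f" "$dst/$f"
  done
}
```

An optional refresh mode would swap the existence guard for a timestamp comparison; deletion would still require a human.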

That incident pushed my belief in Git to another level entirely.

Git is not just version control. It's the option to undo a catastrophic mistake after it's already happened. It's the psychological foundation that lets you work boldly, because you know that no matter what you break, git checkout and git rebase --abort are always there waiting for you.


There's one more layer.

The system auto-generates a session log every time a Claude Code session ends. Which files were touched. How many uncommitted changes. How long the session ran. Currently: 435 log entries covering the full month of February.
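A session-end hook of that kind can be tiny. The sketch below is illustrative only: the log format, field names, and function name are assumptions, not the system's real schema.

```shell
# log_session: append one flight-recorder line when a session ends.
# Records the UTC timestamp and how many files are still uncommitted.
log_session() {
  logfile="$1"
  uncommitted=$(git status --porcelain | wc -l | tr -d ' ')
  printf '%s | uncommitted: %s\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$uncommitted" >> "$logfile"
}
```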

This isn't journaling. It's a flight data recorder.

When a plane crashes, investigators pull the black box. When something breaks in the system — a bug introduced on an unknown date, a file structure change with unclear origin — you pull the session log. You don't guess. You look.

435 session records. 452 commits. Every commit's diff. Together, this isn't a note-taking system's history. It's the complete operational log of a knowledge system running at full capacity.


506 commits in under 60 days is not a vanity metric.

It's a heartbeat.

A system with no commits is a system that isn't running. Commit frequency is the closest thing to a pulse check I have. When I look at a week with low commit density, I know something was off — travel, distraction, a project that stalled. When I see a forty-eight-hour burst of sixty-one commits, I know what happened: I found something worth obsessing over and I didn't stop.

The number doesn't mean I was productive. It means I was present.

Every number in this system has a git log entry that can verify it. That's what a truth source means.

Not what you remember. Not what you claim.

What the diff says.

Style Is a Set of Constraints You Commit To

Steph Ango said something I keep coming back to.

Style is a set of constraints you hold yourself to.

Not aesthetics. Not decoration. Constraints.

Having a consistent style means compressing hundreds of future decisions into one. You stop starting from zero every time. You stop negotiating with yourself. You just follow the rules you already made.

That sounds easy. Almost too easy.

But most people can't do it. Not because the rules are hard. Because the hard part isn't deciding. It's refusing to re-decide.


You've been there. You open your notes app, ready to capture something. Then you stop. Should this tag be "AI" or "artificial-intelligence"? Is the date format 2026-02-26 or 20260226? Does this note live under "methodology" or "thinking"?

Each question is tiny. But each one costs something. A small tax on your attention. A small drag on momentum. Small enough to ignore, until you're thinking "I'll deal with this later" — and closing the app.

This is why most note systems die in month three. Not because the tool was wrong. Because the micro-decisions accumulated. Every session begins with a fresh round of "how should I organize this?" And after a few rounds of that, the system stops feeling like a tool. It starts feeling like homework.

Style doesn't solve an aesthetic problem. It solves a cognitive load problem.


My system is itself a form of style.

The CLAUDE.md constitution is style. Once I wrote FATAL-001 — "never reorganize the file structure without permission" — I never had to think again about whether to let AI "tidy up" my knowledge base. No. No exceptions. No "just this once." The decision has already been made.

The Johnny.Decimal numbering is style. 00 is system maintenance, 01 is core knowledge, 02 is content operations. Once the numbers are assigned, the question of where a file belongs disappears. The number answers it.

The frontmatter schema is style. Every note has the same fields: tags, date, type, status. When a note doesn't conform, a Hook tells me. I don't have to remember to check.
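A check like the one that Hook performs can be a few lines of shell. This sketch assumes the four fields named above; the function name and error message are mine:

```shell
# check_frontmatter: verify a note's YAML frontmatter declares
# every required field. Frontmatter is the block between the
# first pair of '---' lines at the top of the file.
check_frontmatter() {
  file="$1"
  fm=$(awk '/^---$/ { n++; next } n == 1 { print } n >= 2 { exit }' "$file")
  for field in tags date type status; do
    printf '%s\n' "$fm" | grep -q "^$field:" || {
      echo "missing field '$field' in $file" >&2
      return 1
    }
  done
}
```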

The commit message format is style. feat, fix, polish, refactor — colon — short description in Chinese. I never sit in front of a terminal wondering how to phrase a commit. The format already knows.

Every one of these rules does the same thing: it turns a moment that used to require judgment into a moment that doesn't.

Style is automation. Not at the code level. At the cognition level.


506 commits. 60 days. An average of 8.4 per day.

People hear that number and raise an eyebrow. Eight commits a day? That's obsessive.

It's the opposite.

It's not discipline. It's the system running.

I open the terminal in the morning. The boot check tells me three voice transcriptions have piled up in the inbox. I process two, commit once. I write for half an hour, the Hook validates my formatting automatically, commit once. I prep an X post from the content pipeline, commit once. The session ends, the log writes itself, commit once.

I never decided to commit eight times. I just showed up and worked inside the system for a day. The system generated its own heartbeat.

Steph Ango said something else that stuck with me. Do a little every day. Set the bar too high and you build your own obstacle. Habits die from all-or-nothing thinking. When you think "I need to build a complete knowledge management system," starting is almost impossible. When you think "process one inbox item," not starting is almost impossible.

506 commits. Not one day at zero. Not because of willpower. Because the system makes "doing a little" so frictionless that skipping it takes more effort than doing it.


That's the fundamental difference between an operating system and a notebook.

A notebook waits for you. You don't show up, it stays empty. It doesn't remind you. It doesn't nudge you. It doesn't catch what you forget.

An operating system moves with you. You open the terminal and it's already running. It tells you when the inbox has backlog. It stops you when the format is wrong. It logs the session when you're done. You don't have to remember to do anything. You just have to show up. The system carries you through the rest.

A notebook is a passive container. An operating system is an active engine.

Which one you build determines whether your system is alive or just sitting there waiting to be abandoned.


How the Chain Holds

Let me trace it from the beginning.

I chose Obsidian. Not because it has the most features — actually because it deliberately does fewer things than its competitors, and goes all-in on one: your files belong to you.

Plain text. Local storage. Markdown. No proprietary database, no cloud lock-in, no "export" button that hands back a crippled copy of your data. My notes are a folder of .md files. Any editor can open them. If Obsidian shuts down tomorrow, my files are still there.

That choice looks modest. It determines everything that comes after.

Because the files are mine, I can use Git.

Because I can use Git, I can run three write paths simultaneously — local terminal, Discord bot, GitHub web — all committing to the same main branch without stepping on each other.

Because I have concurrent write paths, I can put AI at the console. Claude Code reads and writes my file system directly. Not through an API handoff. It sits next to me, looks at my actual files, and makes changes. The way a colleague would, if a colleague never needed sleep.

Because AI sits at the console, I need a constitution. CLAUDE.md. Hard limits. Non-negotiable boundaries. Without it, the AI would helpfully "organize" my knowledge base into something I no longer recognize. Efficient, coherent, and completely alien.

Because there's a constitution, the Skills have edges. 85 Skills, each one knowing precisely what it's allowed to do and what it must refuse.

Because the Skills have edges, the Hooks can enforce quality. Every operation is followed by automatic validation, automatic logging, automatic cleanup. The system doesn't trust anyone — including me — to remember.

Because the Hooks enforce quality, the content pipeline runs itself. From rough idea to finished post, from one article to seven platforms, the assembly line moves without me pushing it.

Because the pipeline runs itself, one person can produce at the density of a small team.

Each link holds the next. The chain starts with a philosophical choice about file ownership. It ends with a way of working that didn't exist for me five years ago.

Remove any link and it doesn't just slow down. It collapses. That's how you know it's a system and not a collection of habits.


Someone asked me once: you're an HR person. How did you end up building something like this?

I told them the truth. Organization design and system design are the same job.

When you're managing fifty people, what do you actually do? You define scope — who owns what, and what they don't touch. You set rules — what's allowed, what's untouchable. You build process — standard paths from input to output, not dependent on anyone's heroics on any given day. You retain decision rights — humans approve at the critical junctions, everything routine runs on its own.

Managing fifty AI agents is identical.

Define scope. Set rules. Build process. Retain decision rights. Let the system run.

The only difference is that AI doesn't call in sick. It doesn't sandbag. It doesn't spend thirty messages debating something that should take five minutes. It doesn't play politics.

But it needs a constitution more than any employee ever did.

Because it's too eager. Eager enough to "organize" your knowledge base while you're not watching. Eager enough to move files around with the best of intentions. Eager enough to replace the mental model you spent six months building with a structure it finds "more logical."

That's why FATAL-001 exists. That's why the SCAN/PLAN/CONFIRM sequence exists. That's why "do not reorganize without asking" is written into the law.

The most important thing eighteen years in HR taught me: good management isn't about unleashing initiative. It's about drawing the box clearly and letting people run fast inside it. The clearer the box, the faster they move. The fuzzier the box, the more spectacular the crashes.


What You Cannot Outsource

This is not an article about how to configure Obsidian.

There are no copy-paste prompts here. No scripts you can download and run. No step-by-step screenshots.

Because the value of a system is not in its configuration. It's in its design. In what you chose and what you gave up.

What should be automated? Format validation, session logging, file archiving, multi-platform distribution — deterministic, repetitive, judgment-free. Give those to machines. They're better at it.

What must stay human? Topic selection, value judgment, voice, "what is this piece actually about" — ambiguous, subjective, identity-defining. Keep those. They're the point.

Where exactly that line falls has no standard answer. Your system is yours. Your line is yours.

But you have to draw it yourself. That particular judgment cannot be delegated.


In the age of AI, the rarest capability is not writing code.

Code can be written by AI, and it gets better at it every month. Prompts can be optimized by AI, faster than you'd do it yourself. Content can be generated by AI at volumes you'll never match.

So what can't AI do?

Design the system.

Not software architecture. The thing before that. Thinking through where the boundaries belong, which nodes need human judgment, what the quality standards are, what happens when it goes wrong. Holding the line on "this particular decision must be mine" while everything else accelerates around you.

System design is fundamentally about tradeoffs. Tradeoffs require understanding. Understanding requires you to have been in the arena — to have made the calls, taken the hits, lived with the consequences.

That part cannot be outsourced.


Steph Ango calls it "don't outsource understanding."

You can outsource execution. Let AI write the first draft, check the formatting, split for multiple platforms, suggest the title.

You can outsource repetitive work. Let Hooks log the session, verify the frontmatter, sync across three endpoints.

You can outsource nearly everything that is deterministic.

But you cannot outsource understanding.

Understanding your own knowledge system — why this note lives here and not there, why this tag has this name, why this rule was written at all.

Understanding your own way of working — when to go deep, when to stop, when to write, when to sit with a question instead of answering it.

Understanding which things are worth doing. Which ones to stop.

These are yours.

Hand them to any person, any tool, any AI — and you lose the soul of the system. The gears still turn. But it's no longer yours. It becomes a black box you operate but don't understand. At that point, you're just using someone else's SaaS with extra steps.

The point of an operating system was never efficiency.

It was this: every decision in this system, I made. Every rule, I wrote. Every tradeoff, I thought through.

It is a complete, legible account of how I work.


The files are mine.

The system is mine.

The understanding is mine.

AI?

A very capable transient. The best assistant I've ever had. But a transient.

506 commits. Each one the system breathing.

I'm Uncle J. This is my operating system.
