I can't write a single line of code.
4 days. 52 modules. 2,716 tests passing. 105,000 lines of code. No outsourcing. No team. Just me and a swarm of agents.
People ask: you came from HR. How did you pull that off?
I say: I was just doing HR work.
They think I'm joking.
The Setup
I took over as HR Director for AB InBev's Central China division. Six breweries, 8,000 people, 21 HR staff scattered across Hubei and Hunan. Everyone doing things their own way.
Nobody cooperates by default. The first thing you do isn't send a memo or call a meeting. It's take a fuzzy objective — "integration" — and break it down until every person knows exactly what they're supposed to do, what done looks like, and who reviews it when something goes wrong.
I didn't have a name for this at the time. Now I do: System Design.
Last year, I decided to build an AI sales coach platform for FMCG field teams. No ability to write code. But one thing was clear: whether a system runs has nothing to do with how smart the executors are. It has everything to do with how clear the tasks are.
So I wrote specs. Module by module. Inputs, outputs, definition of done, definition of failure. Everything written down.
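What does such a spec look like as a data structure? Here's a minimal sketch as a Python dataclass. The field names and the SFA-007 example values are illustrative, not the actual format or content of my specs:

```python
from dataclasses import dataclass, field

@dataclass
class Spec:
    """One module-level work order: inputs, outputs, done, failure."""
    module_id: str
    inputs: list[str]                  # what the agent receives
    outputs: list[str]                 # what the agent must deliver
    definition_of_done: list[str]      # testable acceptance criteria
    definition_of_failure: list[str]   # conditions that reject the work
    depends_on: list[str] = field(default_factory=list)

# Illustrative: a RAG pipeline module that waits on a prompt builder.
sfa_007 = Spec(
    module_id="SFA-007",
    inputs=["user query", "knowledge base index"],
    outputs=["retrieved context block", "grounded answer draft"],
    definition_of_done=["all unit tests green", "retrieval coverage measured"],
    definition_of_failure=["answer cites no retrieved source"],
    depends_on=["SFA-006"],
)
```

The point of the structure: every field is something a non-programmer can write, and every field is something an agent can be held to.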
52 specs finished. Handed to agents. Four days later: 2,716 tests, all green.
These two things happened for the same reason. Not because AI got smarter. Because organization design and Agentic Engineering are two era-specific versions of the same problem.
The Question Mintzberg Asked
In 1979, Mintzberg asked a question. Not what an organization looks like. What an organization runs on.
These seem similar. They're not.
You can draw an org chart and see the reporting lines. What you can't see is what keeps work from distorting as it moves from one person to the next. That's what Mintzberg was after. He found five coordination mechanisms. Each with its own use case and cost.
Direct supervision. The boss watches. He sees what you're doing, tells you what's next, holds all the information. Works for five people. Not fifty. His attention is a hard constraint. The ceiling of the organization is the ceiling of the founder's bandwidth.
Standardization of work processes. Break the job into steps, write the steps into SOPs, train workers on the SOPs. McDonald's fries cook for a fixed number of seconds. The worker doesn't judge. The system runs on process. Precise, efficient, zero flexibility. Works for highly repetitive tasks in stable environments. The moment something falls outside SOP coverage, the whole system breaks.
Standardization of skills. A hospital doesn't tell doctors how to diagnose. It cares whether you have a license, whether you trained properly. Qualified people come in and coordinate through shared professional judgment. A surgeon and a specialist align through a common medical knowledge base, not a procedure manual. Flexible, high-quality — but only if you can define what "qualified" means and verify it.
Standardization of outputs. The divisional logic: headquarters doesn't care how you do it. Headquarters gives you a KPI. Hit it and I'm satisfied. This design resolves a core tension: centralized control wants predictability; local autonomy wants execution speed. These pull in opposite directions. If headquarters manages process, headquarters becomes the bottleneck. If headquarters fully delegates, headquarters loses control of outcomes. Output standardization is the middle path. I define what you deliver. You define how you get there. Hit the KPI, and headquarters doesn't need to know how.
Mutual adjustment. Mintzberg called the structure built on this mechanism Adhocracy. Project-based work. Film crews. Consulting teams. Wartime command structures. Specialists assemble temporarily, driven by the task, coordinating through real-time communication, with one integrator holding the pieces together. Then they disband.
Why is this the hardest? Because mutual adjustment has the highest coordination cost of all five mechanisms. But its ability to handle uncertainty is also the highest. When the task is genuinely complex, the environment is genuinely dynamic, and there's no standard answer — only Adhocracy can respond. Because it pushes judgment down to whoever is closest to the problem.
Adhocracy needs someone in the middle. Not executing. Integrating. Someone who knows the overall goal, can break it into sub-tasks that specialists can pursue independently, can judge whether the pieces fit when they come back, and can make tradeoffs when specialists disagree. Mintzberg called this role the integrating manager.
These five mechanisms aren't history. They're the solution space. Humans spent centuries of real coordination failures to arrive at these five tools. Each one is the optimal solution for a specific class of coordination problem. Pick the wrong one and the cost is real. Where you need skill standardization, use process standardization instead and you'll suffocate your experts. Where you need output standardization, manage process instead and headquarters becomes a micromanagement hell. Where you need Adhocracy, use direct supervision and the founder drowns in information.
Mintzberg's framework isn't for drawing org charts. It's for diagnosing where coordination breaks down, and at which layer.
The Equation
When I wrote the first spec for the SFA (sales force automation) project, I stopped.
What is a spec? An engineering requirements document. You describe what the module should do, the user scenario, the acceptance criteria.
Halfway through, I realized something. This isn't an engineering document. This is a job description plus a KPI.
A job description defines who handles what, where the role's boundaries are, who the upstream and downstream handoffs are.
A KPI defines what "done" looks like, how it's measured, how it's accepted.
A spec has the same structure. The only difference: the executor used to be a person. Now it's an agent.
Spec = Output standardization = KPI + Job description
Test = Acceptance mechanism = Performance review
Agent = Execution unit = Team member
Dependency = Handoff sequence = Role prerequisite

The equation holds.
SFA-007 (RAG Pipeline) depends on SFA-006 (System Prompt Builder) finishing first. That's not a code dependency. It's a work handoff sequence. Same as not putting a new hire on the floor before their training is done. In organization design, this is called sequential dependency — downstream tasks can't start without upstream outputs.
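Sequential dependency has a standard computational form: topological sort. A minimal sketch using Python's standard library (the two module names come from above; SFA-008 and the shape of the dependency map are illustrative):

```python
from graphlib import TopologicalSorter

# Each module lists the modules that must finish before it starts.
dependencies = {
    "SFA-006": set(),         # System Prompt Builder: no prerequisites
    "SFA-007": {"SFA-006"},   # RAG Pipeline waits on the prompt builder
    "SFA-008": {"SFA-007"},   # illustrative downstream module
}

# static_order() yields a valid execution sequence: every module
# appears only after everything it depends on.
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # SFA-006 always precedes SFA-007
```

The sort doesn't care whether the nodes are people or agents. It only cares who hands off to whom.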
The final 52 specs covered environment setup, LLM integration, knowledge base, sales visit management, AI coaching conversations, data privacy compliance, and internationalization. This was a complete organization. Division of labor, coordination mechanisms, acceptance criteria, error correction. The executors just happened to be agents.
A Multi-Agent system is Adhocracy. Each agent has a role boundary defined by its spec. They coordinate through context and interfaces, the way a film crew coordinates through scripts and shooting schedules. Nobody is in the middle passing messages. Agents hand structured outputs directly to each other.
And one role sits above, doing integration: deciding who runs first, who triggers whom, assembling the pieces at the end. In Adhocracy, that's the integrating manager. In a Multi-Agent system, that's the Orchestrator.
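What the Orchestrator does can be sketched in a few lines: run each agent only after its upstream outputs exist, then assemble the results. Everything here is illustrative; the agents are stub functions standing in for real LLM calls:

```python
from graphlib import TopologicalSorter
from typing import Callable

# Agents as plain callables: output = agent(upstream_outputs).
# These stubs stand in for real agent invocations.
def prompt_builder(upstream: dict) -> str:
    return "system prompt"

def rag_pipeline(upstream: dict) -> str:
    return f"answer grounded in: {upstream['SFA-006']}"

agents: dict[str, Callable[[dict], str]] = {
    "SFA-006": prompt_builder,
    "SFA-007": rag_pipeline,
}
dependencies = {"SFA-006": set(), "SFA-007": {"SFA-006"}}

def orchestrate() -> dict:
    """Run agents in handoff order, passing structured outputs downstream."""
    results: dict[str, str] = {}
    for module in TopologicalSorter(dependencies).static_order():
        upstream = {dep: results[dep] for dep in dependencies[module]}
        results[module] = agents[module](upstream)
    return results
```

Note what the Orchestrator does not do: it never writes a prompt or retrieves a document. It only decides sequence and carries outputs across boundaries. That is the integrating manager's job description.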
Two Capabilities
Most people think the core skill of an HRBP is writing job descriptions. It isn't. Writing descriptions is a tool. The real capability is Job Understanding — seeing through the surface to understand why a role actually exists.
During the M&A integration years at AB InBev, I inherited an 8,000-person compensation structure. My predecessor left filing cabinets stuffed with job descriptions. Every one of them looked good. Clear responsibilities, clean hierarchy. Then I started having conversations.
About a third of the roles had a gap between the official description and what the person actually did. The work on the page was one thing. The work in practice was another. An HRBP's job isn't to close that gap. It's to see the gap first.
First capability: meta-questions.
After managing a 21-person HR team, I developed a habit. When a direct report came in with a proposal, I didn't ask "what did you do." I asked:
"How do the best teams in the world handle this?" "Is there an industry standard?" "What's the rationale behind this design?"
Not to challenge. To build a judgment framework. If you have no standard of your own, you'll accept whatever answer you get. Accept a wrong answer and the mistake becomes yours. This is benchmarking. Ground your judgment in external reference points, not in feeling.
Now I use the same tool with agents. I don't understand Claude's underlying implementation. I don't know why it hallucinates in certain conditions. But I can ask: how do top engineering teams define acceptance criteria for this kind of system? Is there a more mature industry reference for this API design? What design principle does this error-handling pattern follow?
Claude will tell you. You read it once. Whether it makes sense is immediately obvious. The power of meta-questions: you're not asking about the implementation. You're asking about the standard. Standards are objective. They don't require you to understand the underlying code.
Second capability: dual-team pressure testing.
But a standard alone still depends on one agent's judgment. Any single judgment has blind spots. So I added a second.
Claude handles execution. It has the full session history, knows the system's past, knows my preferences, knows why the previous version was scrapped. Contextual continuity is its advantage.
The second agent handles review. It doesn't know the history. No emotional investment. No incentive to defend its own previous decisions. That ignorance is its advantage.
The M&A integration gave me this intuition. There was an iron rule: new organizational designs cannot be evaluated solely by the integration team. Independent perspective is mandatory. Not because you distrust your team — but because anyone deeply involved has a cognitive filter. They will tend to defend their choices, even when those choices have already drifted off course.
Same principle, applied to Agentic Engineering. Spec written by Claude. Acceptance reviewed by an independent agent. Claude knows what you intended to build. The second agent only sees what was built. The gap between them is the most important thing to watch — "doing the wrong thing correctly" is the hardest error to catch, because every individual step looks reasonable.
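The separation is enforceable in code: the reviewer's prompt deliberately excludes the session history. A minimal sketch, where `call_model` is a stand-in for any LLM API call and all function names are illustrative:

```python
# Builder sees full context; reviewer sees only the finished artifact.
def call_model(prompt: str) -> str:
    # Stand-in for a real LLM call -- echoes its input for illustration.
    return f"model response to: {prompt}"

def build(spec: str, session_history: list[str]) -> str:
    # The builder gets everything: history, preferences, scrapped versions.
    context = "\n".join(session_history + [spec])
    return call_model(f"Implement this spec:\n{context}")

def independent_review(artifact: str, spec: str) -> str:
    # Deliberately no session history: the reviewer judges only what
    # was built against what the spec says. Its ignorance is the point.
    return call_model(f"Accept or reject against this spec:\n{spec}\n---\n{artifact}")
```

The asymmetry is the design. If the reviewer could see the history, it would inherit the builder's cognitive filter, and the check would be worthless.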
Together, these two capabilities give you one thing: your judgment doesn't get hijacked by the agents. Meta-questions let you know what "good" looks like even when you can't read the code. Dual-team pressure testing gives your judgment an independent check instead of relying on a single point.
Agents run faster than you. But you set the direction.
What This Means
The underlying equation between organization design and Agentic Engineering is established. Not as metaphor. As identity. The coordination problem didn't change. The executors did.
Mintzberg's five mechanisms map to five Agentic architectures. Output standardization is the theoretical foundation for specs. Adhocracy is the organizational prototype for Multi-Agent systems. The Orchestrator is the integrating manager.
This isn't saying HR professionals should go learn to write code. It's saying: if you want to do Agentic Engineering in the AI era, the capability you need isn't knowing how to write a for loop. It's knowing how to break a fuzzy objective into clear output standards. How to define acceptance criteria. How to design coordination mechanisms. How to build independent auditing. How to make a system run without you watching it.
These capabilities have traditional names: organizational diagnosis and architecture design. Now they're called Spec Writing and System Design. Different names. Same substance.
I delivered a 7–9 person-month project in 4 days. Not because I learned to write code. Because I spent 18 years learning to design systems that run without me watching.
That system used to be called an organization.
Now it's called an Agent team.