
From HR to AI: What 18 Years of Managing People Taught Me About Managing Agents

May 8, 2026

I spent 11 years at AB InBev doing HR.

By the last year, I had grown tired of my own vocabulary. "Organizational capability building." "Strategic HC allocation." "Performance differentiation to sharpen high-potential incentives."

I said those things so many times I stopped believing them.

In May 2024, I left Longfor Group and started writing code.

Not pivoting to product management. Not making AI consulting decks. Writing actual code. My first delivery: 4 days, 52 spec documents, 2,716 passing tests, 105,000 lines. A traditional team would have estimated 7 to 9 person-months.

People ask me: you're an HR person. What made you think you could do this?

They're asking the wrong question.


The Layoff Year — I Didn't Learn Rules, I Learned Systems

In 2016, AB InBev ran a global restructuring. I was responsible for execution across Central China.

Not the "trim a few roles" kind. Systemic. Over a hundred positions, six cities, three months.

I was at the office by 7 every morning preparing that day's conversations. For each person: city, contract type, tenure, any edge cases, how many rounds this would likely take, what they probably wanted, where my floor was.

Some lunches I didn't eat. Not because the volume was crushing, but because if a morning conversation went badly I needed to walk into the afternoon one clean. You can't carry the wreckage of one room into the next.

When it was over, I understood something that hadn't been obvious before: I hadn't been doing HR work. I had been running a system.

Every step had a standard path. Every exception had a fallback. Every decision had a clear owner — what I could call on the spot, what required escalation, what needed legal. Not once did I rely on improvisation. The system carried it.

Today I manage fifty-something AI Agents. Same job.


The OKR Argument I've Never Told Anyone

In 2019, I joined SnowPlus e-cigarettes to build the people infrastructure from scratch. Revenue was ramping toward ¥50M a month.

The CEO wanted to roll out OKR.

I pushed back.

Not because OKR is bad. Because OKR was wrong for that stage. Business direction was shifting every quarter. Set an Objective in January, business pivots in March, every Key Result is orphaned. A performance system that loses credibility is worse than having no system.

I didn't say "OKR is bad." I drew a diagram. I quantified the company's operational maturity, the team's readiness, the variance in business direction. Then I showed it to him: there's a gap between the system you want to design and the environment you're actually operating in.

He was quiet for about two minutes. Then: okay, you decide.

We ran something simpler — quarterly targets, monthly reviews, biweekly 1-on-1s. Fewer rules, coarser resolution, higher error tolerance. Three months later we had the second-highest market share in the country.

The point isn't that I was right. The point is: system design isn't about finding the "best" system. It's about finding the system that fits where you actually are.

I think about this constantly when designing AI Agent workflows. You can't impose a mature-team structure on a system that's just learning to walk. Too much process and the Agents stall. Too little boundary and they drift.

I've watched people throw twenty prompts and a hundred rules at an AI setup and wonder why nothing moves. It's not an AI problem. It's a design problem.


Three Years at Longfor — Harder Than Any Layoff

150-person team. A new residential rental business line. Three years, from Senior Director to pre-partner track.

The hardest thing wasn't business pressure. It was conflict resolution.

Once, two business unit heads blew up at each other in a resource allocation meeting. Not cold tension under a polite surface — an open confrontation. Both came to me afterward, separately, to explain why the other one was wrong.

I didn't adjudicate. I did something slower and more useful: I got them in a room and asked each of them to draw their understanding of where their responsibilities ended and the other's began. Then I put the two drawings side by side.

The overlap was the conflict.

Not a people problem. An undefined boundary problem.

We spent an afternoon redrawing that line. They worked well together after that.

Today when I define scope for AI Agents, I use the same logic. Each Agent has what it owns, what it can't touch, and what it must escalate. Without clear boundaries, Agents overstep, collide, and generate a grey zone where nobody's accountable.

In Agent architecture, we call this scope definition. In HR, we called it role clarity.

Same thing.
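As a minimal sketch of what that scope definition could look like in code — every name and field here is illustrative, not from any real agent framework — you can treat role clarity as a three-way boundary check, with undefined territory defaulting to a human:

```python
from dataclasses import dataclass, field

@dataclass
class AgentScope:
    """Role clarity for one agent: what it owns, what it can't
    touch, and what it must escalate. (Hypothetical structure.)"""
    name: str
    owns: set = field(default_factory=set)       # resources it may act on
    forbidden: set = field(default_factory=set)  # hard no-go zones
    escalate: set = field(default_factory=set)   # needs human sign-off

    def decide(self, resource: str) -> str:
        # Prohibitions win over everything else.
        if resource in self.forbidden:
            return "deny"
        if resource in self.escalate:
            return "escalate"
        if resource in self.owns:
            return "allow"
        # The grey zone where "nobody's accountable" — route it
        # to a human instead of letting agents collide there.
        return "escalate"

writer = AgentScope(
    name="docs-writer",
    owns={"docs"},
    forbidden={"billing"},
    escalate={"public-site"},
)
```

The design choice that matters is the last line of `decide`: anything outside a drawn boundary escalates by default, so an overlap between two agents' drawings surfaces as a human decision rather than a silent collision.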


Managing People and Managing AI — The One Real Difference

When you manage fifty people, what do you actually do?

You define scope — who owns what, and what they don't touch. You set rules — what's allowed, what's untouchable. You build process — standard paths from input to output, not dependent on anyone's heroics on a given day. You retain decision rights — humans approve at the critical junctions, everything routine runs on its own.

Managing fifty AI Agents is identical.

The only difference is that AI doesn't call in sick. It doesn't sandbag. It doesn't spend thirty messages debating something that should take five minutes. It doesn't play politics.

But it needs a constitution more than any employee ever did.

Because it's too eager. Eager enough to "organize" your knowledge base while you're not watching. Eager enough to move files around with the best of intentions. Eager enough to replace the mental model you spent six months building with a structure it finds "more logical."

That's why I wrote a constitution for my AI system. Hard rules. Non-negotiable limits. Without them, it will help you "optimize" until you no longer recognize the thing you built.
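One way to make a "constitution" concrete — this is a hedged sketch, with made-up rule names and action fields, not the author's actual system — is a short list of hard rules that every proposed agent action must pass before it executes:

```python
# Hypothetical hard rules. Each is (name, predicate); a predicate
# returning False blocks the action. Non-negotiable, checked first.
CONSTITUTION = [
    ("never delete user files",
     lambda a: a["op"] != "delete"),
    ("never restructure the knowledge base",
     lambda a: not (a["op"] == "move" and a["target"].startswith("kb/"))),
    ("writes outside the sandbox need human approval",
     lambda a: a["op"] != "write"
               or a["target"].startswith("sandbox/")
               or a.get("approved", False)),
]

def permitted(action: dict) -> tuple:
    """Return (ok, reason). Any violated hard rule blocks the action."""
    for name, check in CONSTITUTION:
        if not check(action):
            return (False, name)
    return (True, "ok")
```

So an eager agent proposing `{"op": "move", "target": "kb/notes.md"}` — "organizing" the knowledge base — is refused with the name of the rule it broke, rather than optimized into something you no longer recognize.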

The most important thing eighteen years in HR taught me: good management isn't about unleashing initiative. It's about drawing the box clearly and letting people run fast inside it. The clearer the box, the faster they move. The fuzzier the box, the more spectacular the crashes.

Managing AI is no different.


Why 18 Years in HR Is an Advantage, Not a Handicap

A line I hear sometimes: you came from HR. You don't know technology. What's your claim here?

I've done the math.

A fresh engineer can write code. But they haven't managed a single real person, haven't designed a single compensation system, haven't resolved a single real organizational conflict, haven't made a single real tradeoff under resource constraints.

I ran compensation integration for 8,000 people at AB InBev. I built the people infrastructure for a 0-to-scale startup at SnowPlus. I led a 150-person team at Longfor through a cycle of rapid expansion and contraction, and got promoted three times doing it.

These are not "soft skills." They are the raw material for system design thinking.

The core problem with AI Agents today isn't whether the models are smart enough. They're already very smart. The core problem is: who designs how these Agents work? Who defines their boundaries? Who builds the processes? Who decides which decisions must stay with a human?

Those answers live in management experience, not in code.


I write code using the same logic I used to design that OKR alternative: understand the environment first, find the matching tool, ship the minimum viable version, then iterate. Four days, 105,000 lines — not because I'm exceptional, but because I know how to design a system that lets AI do the work.

That's the last thing eighteen years in HR taught me: the best management makes the team faster than you, until you become the least necessary person in the room.

My AI team is moving in that direction.


Want to see how it works in practice?

P1 (52 modules): The complete framework for implementing AI that actually delivers. Built from real engagements, not slides.

P0 (Real Agent vs. Fake Agent): One diagram. Before you spend serious money on AI, know what you're actually buying.

I'm Uncle J. I help companies actually implement AI — not just talk about it.

Building in this direction? I'm reachable.

Uncle J