The Plumbing Phase
I still remember when one-shot prompting felt like an all-time high for AI tooling. The flagship model at the time was Claude 3.5 Sonnet, and it was really good at writing code. The promise was clear: “We will build so many apps.” This felt like the ultimate intelligence level for product building, for making stuff.
But looking back a year or so later, that’s not what happened. People are not building AI apps. People are building apps for their AI.
At first it was automation workflows; everyone was on the n8n hype. Then people figured out they could run local models in agentic loops, and suddenly everyone was talking about their setups. YouTube videos about “my AI workflow.” Twitter threads about automation stacks. The ultimate peak was when OpenClaw got popular and people realized they could talk to their agents through WhatsApp and Telegram.
This might sound judgmental, but I’ve been guilty of it too. I’ve had my fair share of AI psychosis. I went from a single agent loop, to multiple agents with different personalities, to a full-on mimicry of The Office with different characters assigned to different responsibilities, all the way to the thing I’m still running now: an agent I can reach from Telegram that has access to most of my local tools.
This isn’t new though. The tech revolution is just like any other revolution. If we consider intelligence as a utility, then history is repeating itself. When electricity first became available, people spent years building generators and wiring cables before anyone figured out you could use them to run a toaster at home. We are in the plumbing era of intelligence.
What makes this era so addictive is that these agents are really good at improving themselves. Something fails, and the agent figures out why and fixes it. You end up in a very low-cost feedback loop where improvement feels free, so you keep going. You become a more demanding user because there is no visible limit. (Except tokens.)
That addictive feedback loop is actually us handing over more and more of our thinking to the agents we’re building. Every iteration, every piece of our life, digital or physical, we delegate a bit more of the analysis we used to do ourselves. The plumbing era is quietly turning into intelligence as a commodity.
We already see this in teams that have embraced the augmented path. Developers would rather ask the agent about architecture or framework choices than ask a teammate, because “it’s faster.” You could argue it is. But a novelty effect and a second-pair-of-eyes effect get lost. When someone new asks about an existing approach with no bias, they might catch what the team has gone blind to. An agent won’t do that.
For me, it means less control. When you delegate the very things that make you human (thinking, self-critique, having opinions), you lose control. Not of the tool, but of yourself.
The narrative is that cheap intelligence is pure gain. But consider this: if thinking is what makes you human, then delegating it isn’t a productivity boost. It’s a cost. And the price is disguised as convenience.
The question turns from “What can AI do for me?” to “What am I giving up by letting it?”