The Stronger AI Gets, the More Your Judgment Matters
There's a version of the AI future that most people in tech are building toward.
In it, agents handle the execution. They write the code, draft the emails, book the meetings, research the options, synthesize the documents. The human's job shrinks to direction-setting and review.
I think this future is probably coming. And I think it creates a problem almost nobody is talking about.
What happens when execution is cheap
When execution is expensive — when doing the thing requires significant human time and effort — there's a natural forcing function on judgment. You can't afford to work on the wrong thing for long. The cost of a bad decision shows up fast.
When execution becomes cheap, this forcing function disappears.
With agents, the cost of pursuing a bad idea is no longer the hours spent building it. It's just... a prompt. A minute. And then the agent starts executing.
This is powerful. It's also dangerous in a specific way: the gap between "had an idea" and "the idea is in motion" collapses. Judgment no longer has the natural friction of effort to protect it.
The bottleneck shifts
Software people understand bottlenecks. When you remove one constraint, the next constraint becomes the binding limit.
For most knowledge work, execution has been a significant constraint. Agents are removing it. The constraint that becomes binding is judgment: figuring out what to build, what to prioritize, what to stop, what actually matters.
This is already happening. I talk to people who are shipping code faster than they ever have — and feeling more anxious, not less. The speed of execution has outrun their ability to judge which direction is right.
More throughput without better judgment doesn't produce better outcomes. It produces more outcomes faster, in directions that may or may not be good.
The environment isn't keeping up
Here's the part that concerns me.
As we build more powerful execution infrastructure, the environments where judgment forms — browsers, feeds, reading surfaces — are still designed to erode it.
You open a browser to think through a problem. The tab bar is full of interruptions. The news site you check for reference has seventeen things competing for your attention before you find the paragraph you needed. The social platform where you look for signal buries it in noise.
We're sharpening the execution layer while the judgment layer degrades.
And the people building AI tools are, by and large, building more execution infrastructure. Faster agents, better code generation, more capable models. Almost nothing is being built to protect the cognitive environment in which humans decide what those agents should do.
