The comfort trap of AI agents
We are entering the age of agents.
Not just tools, not just copilots—but agents. Systems that act, decide, and increasingly do things for us. And to be honest, they are incredibly good at it.
Take driving. With systems like Tesla’s Full Self-Driving (FSD), the experience shifts from active control to passive supervision. The car steers, brakes, navigates. You just “watch.” Or at least, you’re supposed to.
Or take software engineering. Tools like Claude Code can review pull requests, suggest architectural changes, even generate production-ready code. Tasks that used to require hours of deep thinking can now be compressed into minutes.
This is the promise of agents: effortless leverage.
The Hidden Contract
But there is an implicit contract we don’t talk about enough:
The human is still responsible for the outcome.
Even when the agent acts, you own the consequences.
Tesla makes this explicit: FSD is still a Level 2 driver-assistance system, requiring constant human attention. Yet reality tells a different story. Regulatory investigations have linked multiple crashes, including a fatal one, to FSD behavior under certain conditions (Reuters). In another case, a former autonomous-driving executive described how his Tesla crashed while FSD was engaged, emphasizing how easy it is to overtrust a system that works “almost perfectly” (Business Insider).
“Almost perfect” might be the most dangerous phrase in AI.
Because it changes human behavior.
The Drift Toward Passivity
Humans are not wired to stay vigilant in passive roles.
When a system performs well 95% of the time, we adapt. We relax. We stop double-checking. We stop questioning. We stop being in the loop.
This is not a flaw—it’s human nature.
But it creates a dangerous dynamic:
- The agent becomes more capable
- The human becomes less attentive
- The system still requires human intervention
That gap is where failures happen.
Research into autonomous systems shows that many incidents stem not from technical failure alone, but from this handoff problem: humans are suddenly expected to take over from automation they have stopped actively monitoring. Human-factors researchers call it the out-of-the-loop performance problem.
And when that moment comes, it’s often too late.
When Convenience Becomes Risk
The real risk of AI agents is not that they fail.
It’s that they succeed—just enough.
Enough to earn trust.
Enough to build reliance.
Enough to quietly remove humans from the process.
Until one day, something breaks.
And when it does, we instinctively ask:
- “Why didn’t the AI handle it?”
- “Why didn’t anyone catch this?”
But the uncomfortable truth is:
We stopped looking.
The Illusion of Delegation
We like to think we’re delegating tasks to AI.
But in many cases, we’re actually delegating attention.
And attention is the one thing we can’t afford to outsource.
Because responsibility doesn’t scale down with effort.
If anything, it becomes more important.
A More Honest Mental Model
Maybe we need a better way to think about agents.
Not as replacements.
Not even as assistants.
But as high-speed amplifiers of intent—with unpredictable edge cases.
They can accelerate you.
But they can also amplify your blind spots.
Staying in the Loop
So what does this mean in practice?
It doesn’t mean rejecting agents. That’s unrealistic—and frankly, wasteful.
It means designing and using them with a different mindset:
- Treat outputs as proposals, not answers (see the sketch after this list)
- Stay actively engaged in critical decisions
- Assume failure is always possible
- Optimize for awareness, not just efficiency
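To make the first point concrete, here is a minimal sketch of a reject-by-default approval gate, written in Python. Everything in it is hypothetical: `Proposal`, `review_and_apply`, and `apply_change` are illustrative names, not any particular framework's API. The shape is what matters: the agent proposes, a human explicitly approves, and the default is to do nothing.

```python
# A minimal sketch of a human-in-the-loop gate. All names here are
# hypothetical; the pattern (propose -> review -> apply) is the point.

from dataclasses import dataclass


@dataclass
class Proposal:
    description: str  # what the agent wants to do, in plain language
    diff: str         # the concrete change, shown in full to the reviewer


def apply_change(proposal: Proposal) -> None:
    # Placeholder for the real side effect (commit, merge, deploy, ...).
    print(f"Applied: {proposal.description}")


def review_and_apply(proposal: Proposal) -> bool:
    """Show the agent's proposal and require an explicit human decision.

    Returns True only if a person affirmatively approved the change.
    Silence, typos, and defaults all count as rejection.
    """
    print(f"Agent proposes: {proposal.description}")
    print(proposal.diff)
    answer = input("Apply this change? [y/N] ").strip().lower()
    if answer == "y":
        apply_change(proposal)
        return True
    print("Rejected. Nothing was changed.")
    return False


if __name__ == "__main__":
    review_and_apply(Proposal(
        description="Rename config key `timeout` to `timeout_s`",
        diff="- timeout: 30\n+ timeout_s: 30",
    ))
```

The detail worth copying is the default. If approval requires an affirmative “y”, then distraction, timeouts, and habit all fail safe, which is exactly the property the handoff problem takes away.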
Because the cost of being “out of the loop” is not incremental.
It’s catastrophic.
Final Thought
AI agents are not removing responsibility.
They are repositioning it.
From doing → to overseeing.
From executing → to judging.
And that shift is harder than it looks.
Because the better the agent becomes,
the easier it is for us to disappear.
And that might be the biggest risk of all.