"AI cannot solve every problem. It cannot solve the first mile or the last mile; those are essentially human problems."
Extended reading: for how debugging workflows reveal the boundaries of human-AI collaboration, see Debugging: Finding Bugs with AI in Deep Waters. For how test systems provide the safety net that makes delegation to AI possible, see Test–Code Loop: Why Test Code Is More Important Than Functional Code.
1. First Mile and Last Mile: The Two Ends AI Cannot Do
In the AI-native world, there are two things AI cannot do, cannot do well, or should not do:
- First Mile: Designing a truly reasonable solution
- Last Mile: Getting every line of code correct in a real environment
"The relationship between humans and AI is that of reviewer and executor, but humans cannot evaluate things they themselves don't understand."
The subtext of this statement is:
- You cannot hand a system you don't even understand to AI for "automatic maintenance"
- You must first understand the system well enough to be qualified to judge whether AI is doing a good job
2. Human-in-the-Loop Is an Unavoidable Engineering Cost
Many people hope that one day "full automation" will be achievable:
PRD → AI automatically breaks down tasks → automatically writes code → automatically deploys → automatically maintains.
The conclusion from practice is:
- Once this fantasy meets real, complex, long-lived systems, it collapses quickly
- The more you rely on AI automation, the fewer people review seriously, and the faster technical debt accumulates
The model that is actually feasible, and that must be accepted, is:
- Human-in-the-loop is present throughout:
- In the requirement clarification phase
- When making key architectural decisions
- After each round of major changes and changes to critical paths
- AI can take over a lot of repetitive, mechanical, local work
- But for "overall direction" and "key decisions," humans must explicitly sign off
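The model above can be sketched as a pipeline in which automated stages run freely but "key decision" stages block until a human explicitly signs off. This is a minimal illustration, not an implementation from the text; all names (`Stage`, `run_pipeline`, `needs_signoff`) are hypothetical:

```python
# Hypothetical sketch: an AI pipeline with explicit human sign-off gates.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    run: Callable[[str], str]      # the AI/automated step for this stage
    needs_signoff: bool = False    # True for key decisions and critical paths

def run_pipeline(stages: List[Stage], artifact: str,
                 approve: Callable[[str, str], bool]) -> str:
    """Run stages in order; halt if a human rejects a gated stage's output."""
    for stage in stages:
        artifact = stage.run(artifact)
        if stage.needs_signoff and not approve(stage.name, artifact):
            raise RuntimeError(f"human rejected output of '{stage.name}'")
    return artifact

# Requirement clarification and architecture are gated; routine codegen is not.
stages = [
    Stage("clarify_requirements", lambda a: a + " [clarified]", needs_signoff=True),
    Stage("design_architecture",  lambda a: a + " [designed]",  needs_signoff=True),
    Stage("generate_code",        lambda a: a + " [coded]"),
]

result = run_pipeline(stages, "PRD", approve=lambda name, art: True)
print(result)  # PRD [clarified] [designed] [coded]
```

The point of the sketch is structural: the repetitive middle is fully automated, but the pipeline cannot advance past a key decision without a human verdict.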
3. Onboarding: AI Cannot Train a Real Engineer
In an AI-native team, newcomers easily fall into two extremes:
Spoon-fed by AI:
- Ask AI everything, have AI write whatever is needed
- Never build their own mental model of the system's real structure and boundaries
- When encountering deep waters that AI can't handle, completely lose direction
