"You must spend a lot of time, more than 50% of it, writing test code."
Further reading: for why the test case library is the only reliable asset in a stateless world, see AI Statelessness and Context Window.
1. The Productivity Paradox: Why Nobody Wrote Tests in the Past
Before AI, everyone knew test code was important, but in reality, 99% of teams didn't write test code. The reasons were:
- Human resources were both sufficient and insufficient
- "Sufficient" meant: there were test engineers to help us test
- "Insufficient" meant: there were too many requirements; you couldn't even finish writing the functional code, let alone test code
- If someone could help you test manually, why write test code?
This logic made sense in the past, but it doesn't work today.
2. AI "Flattens" Teams: Brain Capacity vs. Code Volume
With AI assistance today, everyone's productivity is so high that team sizes have shrunk as a direct result.
"No matter how strong AI is, human brains still need to have control over the project. Once you relax control, it will collapse immediately."
In the past, a project had a team of ten people. Each person shouldered part of the work, cycles were long and the pace was slow, so everyone had plenty of time to understand and master every detail of the code.
But today, what ten people did in the past might be done by one person, and in one-third of the time.
This means: You need to remember and understand in one brain, in one-third of the time, the information that used to require "one team × three times the time" to master.
For AI-native teams, this is a huge challenge.
3. Using Massive Tests to Cover What Brains Can't Cover
In this situation, how do we ensure software quality? A very natural idea is:
"Use massive test code to cover the parts your brain can't cover."
- AI is very good at writing test code, because test code isn't difficult—it's pure grunt work
- Humans don't like doing grunt work, but AI has no problem with it
- We already know very clearly that AI-generated code is unreliable, and human brains can't fully review it
Under this premise, the only effective way to protect yourself is massive test coverage, which does two jobs:
- When you write a new feature, the tests confirm that the feature is most likely correct right now
- Later, when you modify other code and accidentally touch this part, you will know immediately; the test cases are how you discover that AI has overstepped and touched things it shouldn't
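As a minimal sketch of this idea (the function, its rules, and all names here are invented for illustration), a small battery of regression tests pins down current behavior so that any later change, whether made by a human or by AI, that drifts from it fails immediately:

```python
# Hypothetical example: a regression guard around a small pricing function.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, with the percentage clamped to 0-100."""
    percent = max(0.0, min(100.0, percent))
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_regressions():
    # Each assertion pins one piece of today's behavior. If a future edit
    # to apply_discount (or anything it depends on) changes any of these
    # results, the suite fails at once and points at the exact behavior
    # that drifted -- including edits that AI made somewhere it shouldn't.
    assert apply_discount(100.0, 10) == 90.0    # normal discount
    assert apply_discount(100.0, 0) == 100.0    # no discount
    assert apply_discount(100.0, 150) == 0.0    # clamped above 100
    assert apply_discount(100.0, -5) == 100.0   # clamped below 0

test_apply_discount_regressions()
print("all regression checks passed")
```

Tests like these are exactly the kind of grunt work AI generates well, and a test runner such as pytest will collect and run any `test_*` function automatically; the point is not any single assertion but accumulating enough of them that the suite notices what your brain cannot.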
