> Anyone who says they can effectively review the thousands of lines Claude might slop out in a day is lying to themselves.
The amount you can review before burning out is now the reasonable limit, for the same reason that a car is supposed to be driven at the speed you can handle, not the max speed of the engine.
Of course, many people are secretly skipping reviews and some dare to publicly advocate for getting rid of them entirely.
> The amount you can review before burning out is now the reasonable limit
I realized this is the crux of our moment, because a variant of Amdahl's law applies to AI code gen.
{time gained} = {time saved via gen AI} - {time spent in human review}
There's no way that results in a positive number with 100% human review coverage, which means that human review coverage is headed to < 100% (ideally as low as possible).
I'm not sure it's that certain. I think that just by virtue of LLMs being a better copy-paste / integrated Stack Overflow, you can get a speed boost (with prompts like "generate a similar test to this one checking condition X").
The question is whether humans can sensibly judge the break-even point and not generate faster than that. It's very easy to get lost in the woods and suddenly find yourself with a bunch of generated stuff you no longer grok.
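That break-even point falls directly out of the time-gained formula above. A toy sketch (all numbers hypothetical, purely for illustration):

```python
# Toy model of: time_gained = time_saved_via_genai - time_spent_in_human_review
# All rates are made-up illustrative numbers, not measurements.

def time_gained(lines_generated, gen_savings_min_per_line,
                review_min_per_line, review_coverage):
    """Net minutes gained on a batch of generated code.

    review_coverage: fraction of generated lines a human actually reviews (0..1).
    """
    time_saved = lines_generated * gen_savings_min_per_line
    review_cost = lines_generated * review_coverage * review_min_per_line
    return time_saved - review_cost

# If reviewing a line costs more than generation saved on it,
# the net is negative at 100% coverage...
assert time_gained(1000, 0.5, 1.0, 1.0) < 0
# ...and only turns positive once coverage drops below the break-even
# ratio (here, 0.5 min saved / 1.0 min review = 50% coverage).
assert time_gained(1000, 0.5, 1.0, 0.4) > 0
```

Under these made-up rates, the model just makes the trade-off concrete: either review gets cheaper per line, or coverage has to fall below the break-even ratio for generation to pay off.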
> For the same reason that a car is supposed to stay at the speed you can handle and not the max speed of the engine.
As we know from driving, sensible drivers stick to the speed limit most of the time, but there's a good percentage of knuckle-draggers who just love speeding, some people drive drunk, and some just drive the wrong way down the highway entirely. Either way, it's usually the sensible people who end up suffering.