> Sure but the OP suggests that these were minor gains

When antirez says 'I ventured to a level of complexity that I would have otherwise skipped,' I don't think you can call that a minor gain. The alternative is likely something 'good enough' that leaves the community dissatisfied for months, and then after initial design mistakes become load-bearing the ideal implementation can never be realized.

He writes that right after saying "For high quality system programming tasks you have to still be fully involved". He's just saying that AI was useful to him for tedious special-case tasks (citing the addition of 32-bit support and fishing out bugs in new low-level implementations), that this required starting from a "huge specification" (not a one-shotted prompt!) and that he still had to go over everything with a fine-toothed comb afterwards. That's the farthest thing from the 10x silver bullet AI is now being sold as.

Exactly. LLMs are good at "code inpainting": you give them the structures / constraints / specs, and they write the boilerplate.
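To make the "inpainting" point concrete, here's a hedged sketch: the human writes the structure (signature, type hints, docstring spec, invariants), and the body is the kind of mechanical boilerplate an LLM fills in well. The function name and spec are made up for illustration, not from the thread.

```python
from typing import Iterable

# Human-written "mask": signature, types, and spec.
# The body below is the sort of thing an LLM reliably fills in.
def chunk(items: Iterable[int], size: int) -> list[list[int]]:
    """Split items into consecutive chunks of at most `size` elements.

    Spec (the human's constraints): size must be positive,
    order is preserved, the last chunk may be shorter.
    """
    if size <= 0:
        raise ValueError("size must be positive")
    out: list[list[int]] = []
    buf: list[int] = []
    for x in items:
        buf.append(x)
        if len(buf) == size:
            out.append(buf)
            buf = []
    if buf:  # trailing partial chunk
        out.append(buf)
    return out
```

The senior review pass from the next comment still applies: does it handle `size <= 0`, empty input, the trailing partial chunk? Those edge cases are exactly where the "100 mistakes" hide.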

Then you need a senior to go spot the 100 mistakes it made, fix them, and iterate, which is why you can't replace "natural intelligence".

And there are real mathematical reasons why computers won't be able to break through "mathematical reasoning" on their own (undecidability, etc.)