A future of coding where developers only ever write the tests is an intriguing idea.

Then the LLM generates and iterates on the code until it passes all of the tests. New requirements? Add more tests and repeat.

This would be legitimately paradigm-shifting, vs. the supercharged autocomplete driven by LLMs we have today.

Tests don’t prove the code is correct, though. What you’d really want instead is to specify invariants the code has to fulfill, and have the AI come up with a machine-checkable proof that the code indeed guarantees those invariants.
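The difference is concrete in a proof assistant. A sketch in Lean 4 (assuming a recent toolchain with the built-in `omega` tactic; `myMax` is just an illustrative stand-in): instead of asserting `myMax 2 3 == 3` for one input, you state the invariant for all inputs and the checker verifies the proof.

```lean
-- A test checks one point; a theorem covers every input.
def myMax (a b : Nat) : Nat := if a ≤ b then b else a

-- Invariant: the result is never smaller than either argument.
theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
  unfold myMax; split <;> omega

theorem myMax_ge_right (a b : Nat) : b ≤ myMax a b := by
  unfold myMax; split <;> omega
```

If the AI changes `myMax` in a way that breaks an invariant, the proof stops checking, which is exactly the feedback signal the iterate-until-green loop needs.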