> Coding AI can write tests, write code, compile, examine failed test cases, and search for different coding solutions that satisfy more test cases or rewrite the tests, all in an unsupervised loop. And then the whole process can turn into training data for future AI coding models.

This is interesting, but doesn't it still need supervision? Why wouldn't it generate tests for properties you don't want? It seems to me that it could "fill in the gaps" by generalizing from typical software: if you wrote a container class, it might guess that "empty" and "size" and "insert" are supposed to relate in a certain way, because other people's container classes satisfy those properties. And if you look at the tests it makes up and say "yeah, I want that property" or not, then you can steer what it's doing, or it can at least force you to think about more cases. But that's still supervision.
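To make the container example concrete, here's the kind of property test such a model might infer. This is a sketch, not anyone's actual output; the `Bag` class and its methods are hypothetical stand-ins for whatever the user wrote:

```python
import random
import unittest


class Bag:
    """Minimal container standing in for the user's class."""

    def __init__(self):
        self._items = []

    def insert(self, x):
        self._items.append(x)

    def size(self):
        return len(self._items)

    def empty(self):
        return self.size() == 0


class InferredProperties(unittest.TestCase):
    """Properties a model might guess from 'typical' container classes."""

    def test_new_container_is_empty(self):
        self.assertTrue(Bag().empty())

    def test_insert_increments_size(self):
        bag = Bag()
        for _ in range(100):
            before = bag.size()
            bag.insert(random.randint(0, 10))
            self.assertEqual(bag.size(), before + 1)
            self.assertFalse(bag.empty())  # nonzero size implies not empty


if __name__ == "__main__":
    unittest.main()
```

Each of those assertions is a guess about intent. If your container is a bounded cache where inserts can evict, `test_insert_increments_size` is exactly the property you *don't* want, and only you can say so.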

Ah -- here's an unsupervised thing: Performance. Maybe it can guide a sequence of program transformations in a profile-guided feedback loop. Then you could really train the thing to make fast code. You'd pass "-O99" to gcc, and it'd spin up a GPU cluster on AWS.
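Here's a toy version of that loop in Python, just to show the shape of it. `propose_variants` is a hypothetical stand-in for the model, and the correctness check (re-running the tests from above on each candidate) is elided:

```python
import timeit


def propose_variants(source: str) -> list[str]:
    """Stand-in for the AI rewriter; here it just returns hand-written candidates."""
    return [
        "total = 0\nfor i in range(1000):\n    total += i",
        "total = sum(range(1000))",
    ]


def measure(source: str) -> float:
    """Profile a candidate: best-of-5 timing of the compiled snippet."""
    code = compile(source, "<candidate>", "exec")
    return min(timeit.repeat(lambda: exec(code, {}), number=1000, repeat=5))


def optimize(source: str, rounds: int = 3) -> str:
    """Keep whichever semantics-preserving rewrite the profiler likes best."""
    best, best_time = source, measure(source)
    for _ in range(rounds):
        for candidate in propose_variants(best):
            t = measure(candidate)
            if t < best_time:
                best, best_time = candidate, t
    return best


if __name__ == "__main__":
    print(optimize("total = 0\nfor i in range(1000):\n    total += i"))
```

The reward signal here (wall-clock time) needs no human in the loop, which is what makes performance different from the test-generation case: the machine can grind on it indefinitely, as long as something is still verifying that the rewrites preserve behavior.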