> AI gets trained on knowledge generated by AI?
This sounds like the ouroboros snake eating its own tail, and in a sense it is. But tool use lets the model compile and run code: it can generate, say, Rust that does a thing, iterate until the borrow checker stops being angry, run the code to assert it does what it claims to, and then feed the working code into the training set as good code (and the non-working code as bad). Even using only the recipe books you already had, a lot of cooking practice would make you a better cook. And once you learn the recipes in the book well, mixing and matching them -- egg preparation from one, flour ratios from another -- is something a good cook just develops a feel for, what works and what doesn't, even if they only ever used that one book.
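The verify-and-label loop described above can be sketched in a few lines. This is a hypothetical toy harness, not anyone's actual pipeline, and it uses Python snippets in place of Rust so the example is self-contained; the names `label_candidate`, `candidates`, and `check` are all made up for illustration. The idea is the same: run each candidate against an assertion, and the pass/fail result becomes the training label.

```python
import subprocess
import sys

def label_candidate(source: str, check: str) -> str:
    """Run a candidate program plus its assertion in a subprocess.

    Returns "good" if it runs cleanly, "bad" otherwise -- the label
    that would go into the synthetic training set.
    """
    program = source + "\n" + check
    result = subprocess.run(
        [sys.executable, "-c", program],
        capture_output=True, text=True, timeout=10,
    )
    return "good" if result.returncode == 0 else "bad"

# Hypothetical model outputs for the task "sum a list of ints".
candidates = [
    "def total(xs):\n    return sum(xs)",   # correct
    "def total(xs):\n    return max(xs)",   # runs, but wrong answer
    "def total(xs):\n    return sum(xs",    # doesn't even parse
]
check = "assert total([1, 2, 3]) == 6"

labels = [label_candidate(src, check) for src in candidates]
print(labels)  # -> ['good', 'bad', 'bad']
```

For real Rust the subprocess call would invoke `rustc` and then the produced binary, but the shape of the loop -- generate, check, keep the winners, keep the losers as negative examples -- is unchanged.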
The original recipe books don't cover everything that could possibly be created, not even across all their combinations. And most importantly, even within the subset of novel combinations that can be derived from the books, there is something missing.
What's missing is the judgement call of a human to say if some newly created information makes sense to us, is useful to us, etc.
The question above is not about whether new information can be created, or how to navigate it. It's about the applicability of what is created to human ends.
In this analogy, the LLM's been given a machine that answers "tasty for humans? (y/n)" given a food object. With such a device, don't you think that fills the missing gap?
> What's missing is the judgement call of a human to say if some newly created information makes sense to us, is useful to us, etc.
When I gave an example of a recipe book, that’s what I meant. There’s the element of not knowing whether something worked without the explicit feedback of “what worked”. But there is also an element of “no matter how much I experiment with new things, I wouldn’t know sous vide exists as a technique unless I already know and have it listed in the recipe book.” What I don’t know, I will never know.
Let's say that this metaphorical LLM chef never discovers or independently invents sous vide as a cooking technique, but they're able to whip up simply the most amazing desserts and the most wonderful air-fluffed omelettes. Sure, we can make fun of them, like a teenage boy making fun of the kid whose parents didn't let them take sex ed so now they don't know how pregnancy works. But if that LLM chef's desserts sell out when given a kitchen, is their lack of a sous vide technique really such an indictment that we should not only refuse to let them into the kitchen in the first place, but also trample all over them, denigrate the people working on them, spit on it all as hype, and walk away? I'm not saying anyone should want to work at OpenAI, but the haters are worse than the hype men. If you don't want to give them money, don't.