> My suspicion is that when we eventually find our way to AGI, these types of models will be a _component_ of those systems
I think this is a good summary of the situation, and strikes a balance between the breathless hype and the sneering comments about "AI slop".
These technologies are amazing! And I do think they are facsimiles of parts of the human mind (image diffusion, in my opinion, is certainly similar to human dreaming), but it still feels like we are missing an overall intelligence or coordination in this tech for the present.
I think this may also be why every discussion of the limitations of these models is met with a "well, humans also hallucinate/whatever" - because we do, but that's often when some other part of the controlling mechanism has broken down. Psilocybin induces hallucinations by impairing the brain's ability to ignore network outputs, and Kahneman and Tversky's work on cognitive biases centers on the unchecked outputs of autonomous networks in the brain - in both cases, it's the failure or bypass of the central regulatory network that produces failure cases resembling what we see in LLMs.
The bitterest lesson is that we want slop (or, "slop is all you need")
Maybe you can recognize that someone else loves a certain kind of slop, but if LLMs became vastly more intelligent and capable, wouldn't it be better for them to interact with you on your level too, rather than at a much higher level that you wouldn't understand?
If you used it to make you a game or entertain you with stories, isn't that just your own preferred kind of slop?
If we automate all the practical stuff away then what is left but slop?