What current-generation LLMs are doing is like being trained on a dataset of human dances, while users somehow expect the model to do more than replicate the dances it has already seen. It is supposed to reconstruct the internal brain state of a human just from watching the dances, yet if it ever comes up with a dance that isn't in the dataset, it gets punished. And finally, people expect it to be intelligent: the assumption is that humans are just dance-move predictors, so intelligence is equivalent to dance-move prediction, and therefore the model should now do the very thing it was explicitly punished for, i.e. come up with new dances.
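For what it's worth, the "punished" part is literal in the standard training objective: maximum-likelihood next-token training penalizes any probability mass the model puts on continuations that aren't in the data. Below is a minimal toy sketch of that point in plain Python; the move names, the two hypothetical models, and all the numbers are made up for illustration, not taken from any real system.

```python
import math

# Toy illustration of the "punished for new dances" point: under
# maximum-likelihood next-token training, probability mass placed on
# moves NOT in the dataset directly increases the loss.
# All names and numbers below are illustrative.

dataset_next_move = "spin"  # the move the training data actually contains

# Two hypothetical models: one that copies the data, one that "innovates".
faithful_model = {"spin": 0.90, "leap": 0.05, "new_move": 0.05}
creative_model = {"spin": 0.40, "leap": 0.10, "new_move": 0.50}

def nll(model_probs, target):
    """Negative log-likelihood of the target move: the training 'punishment'."""
    return -math.log(model_probs[target])

print("faithful model loss:", round(nll(faithful_model, dataset_next_move), 3))  # ~0.105
print("creative model loss:", round(nll(creative_model, dataset_next_move), 3))  # ~0.916

# The model that shifts probability toward a dance outside the dataset gets a
# strictly higher loss, so gradient pressure pushes it back toward replication.
```

The follow-up objection writes itself: whatever novelty the deployed model shows has to come from recombining the dataset's moves, not from anything the objective ever rewarded.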