There's a kind of generalized reasoning that current LLMs still miss. It's hard to put a finger on it. Things like hallucinations suggest there isn't a self-awareness of thought. "Thinking" models are getting closer.
From most of my network trying to build products on LLMs, the biggest hurdles (excluding cost) are hallucinations and seemingly nonsensical reasoning or communication: subtle choices that just "feel" not quite right, particularly when the LLM is constrained to some specific activity.
Open-ended chat doesn't show these flaws as often.
Yeah, I agree that something just feels missing, but I can't put a finger on it.
Maybe you're right that it's self-awareness. The current models seem to have no metacognition, and even the "reasoning" hack isn't quite the same.