> The best you'll see is an improvised post-hoc rationalization story.
Funny, because "post-hoc rationalization" is how many neuroscientists think humans operate.
That LLMs are stochastic inference engines is obvious by construction, but you've skipped the step of showing that human thought, self-awareness, and metacognition are *not* reducible to stochastic inference.
I'm not saying we don't do post-hoc rationalization. But self-awareness is a trait we possess to varying degrees, and reporting on a memory of a past internal state is at least sometimes possible, even if we don't always choose to do so.