> they lack some fundamental structuring that seems to be required to create anything like consistency or self-reflection

A valid observation. Interestingly, feeding the persona vectors detected during inference back into the context might give LLMs a novel form of self-reflection.
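
For concreteness, here's a rough sketch of what that loop could look like, assuming (as in the persona-vectors work) that a trait corresponds to a linear direction in a layer's residual stream, and that we can read activations during generation. Everything here - the tensor shapes, the `persona_feedback` helper, the "sycophancy" direction - is illustrative, not an existing API:

```python
import torch
import torch.nn.functional as F

def persona_feedback(hidden_states: torch.Tensor,
                     persona_vector: torch.Tensor,
                     trait_name: str) -> str:
    """Project a response's activations onto a persona direction and
    render the readout as text the model can condition on next turn."""
    # Average the residual-stream activations over the response tokens
    mean_act = hidden_states.mean(dim=0)  # shape: (hidden_dim,)
    # Cosine similarity ~ how strongly the trait direction was expressed
    score = F.cosine_similarity(mean_act, persona_vector, dim=0).item()
    return (f"[self-check] Your last response scored {score:+.2f} on the "
            f"'{trait_name}' direction. Is that the sort of thing you'd say?")

# Toy usage: random tensors stand in for real activations and a fitted vector.
hidden_dim = 4096
response_acts = torch.randn(128, hidden_dim)   # 128 response tokens
sycophancy_dir = torch.randn(hidden_dim)       # hypothetical persona vector
print(persona_feedback(response_acts, sycophancy_dir, "sycophancy"))
```

The appended `[self-check]` note would then ride along in the next turn's context, giving the model a textual mirror of its own measured disposition.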

Yeah, and this may be part of what the brain is doing - a referent check against our personal sense of identity to validate whether a response or action seems like the sort of thing we would do - "given that I'm this kind of person, is this the sort of thing I'd say?"

(Noting that humans are, of course, not universally good at that kind of "identity" check either - or at least not at letting it be guided by our "better natures".)