To me, a lot of what makes us sentient is our continuity. I even (briefly) remember my dreams when I wake up, and my dreams are influenced by my state of mind as I fall asleep.

LLMs 'turn on' when given a question and essentially 'die' immediately after answering it.

What kind of work is going on toward designing an LLM-type AI that is continuously 'conscious', and toward giving it will? The 'claws' seem to be running all the time, but I assume they need rebooting occasionally to clear their context.
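One way an always-on agent avoids a hard "reboot" is by continuously trimming its context instead of restarting: old turns fall out of the window as new ones arrive. A minimal sketch of that idea, where `fake_model`, `run_agent`, and the tiny context limit are all hypothetical stand-ins (no real LLM is called):

```python
from collections import deque

CONTEXT_LIMIT = 4  # assumed small window, purely for illustration

def fake_model(context):
    # Stand-in for a real LLM call; just reports how many turns it "remembers".
    return f"reply (remembering {len(context)} turns)"

def run_agent(inputs, context_limit=CONTEXT_LIMIT):
    """Always-on loop: each turn is appended to the context, and the oldest
    turns are dropped once the window is full -- trimming continuously
    rather than rebooting to clear context."""
    context = deque(maxlen=context_limit)  # oldest entries fall off automatically
    replies = []
    for msg in inputs:
        context.append(msg)
        reply = fake_model(list(context))
        context.append(reply)
        replies.append(reply)
    return replies
```

The agent never stops or resets; the `deque` with `maxlen` means memory is bounded, so the loop can in principle run forever, at the cost of forgetting its oldest turns.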

I think you're right, but also that LLMs are showing that sentience isn't necessarily required for AGI.

For exactly the reasons you mention, I don't expect sentience to arise out of LLMs. They have nowhere for an interiority or mind to live. And even if there were a new generation of transformers that did have some looping "mind", where they could "think about" what they're "thinking about", their concepts of things wouldn't really correspond to... things. Without senses to integrate knowledge across domains, they're just associating text.

I haven't heard about anyone trying to create models that have an interior loop and also integrate sensory input, but I don't expect we would hear about it unless it ends up working.