> (because we literally know how LLMs work) nothing related to consciousness or sentience or thought.
1. Do we literally know how LLMs work? We know how cars work, and that's why an automotive engineer can tell you what every part of a car does, what will happen if you modify it, and how it will behave in untested scenarios. But ask an ML engineer what a particular weight (or neuron, or layer) in an LLM does, what would happen if you changed its value, or how the model will behave in an untested scenario, and they won't be able to tell you.
2. We don't know how consciousness, sentience, or thought works, so it's not clear how we could confidently declare any particular mechanism unrelated to them.
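To make the contrast in point 1 concrete: even in a toy network vastly smaller than any LLM, individual weights resist the part-by-part explanation a car admits. Below is a minimal sketch (plain NumPy; the 2-8-1 XOR network, seed, and learning rate are all arbitrary choices for illustration) that trains a network and then asks what one weight "does" — the only available answer is to re-run the network with and without it and observe the difference empirically.

```python
import numpy as np

# Toy illustration (not an LLM): a tiny MLP trained on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def forward(W1, b1, W2, b2):
    h = np.tanh(X @ W1 + b1)               # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    return h, p

def loss(p):
    return float(np.mean((p - y) ** 2))

_, p0 = forward(W1, b1, W2, b2)
initial_loss = loss(p0)

# Plain gradient descent on mean-squared error.
lr = 0.5
for _ in range(2000):
    h, p = forward(W1, b1, W2, b2)
    dp = (p - y) * p * (1 - p) * (2 / len(X))
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

_, p_trained = forward(W1, b1, W2, b2)
trained_loss = loss(p_trained)

# "What does weight W1[0, 3] do?" Its value alone doesn't say.
# The only way to find out is to ablate it and re-run the network.
W1_ablated = W1.copy()
W1_ablated[0, 3] = 0.0
_, p_ablated = forward(W1_ablated, b1, W2, b2)
output_shift = float(np.abs(p_ablated - p_trained).max())
```

Scale this up by twelve orders of magnitude and the question "what does this weight do?" becomes exactly the thing an ML engineer cannot answer by inspection, which is the disanalogy with the car.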