> Which LLMs do not. They fake it really well but it’s still an illusion. No understanding is going on, they don’t really know what you mean and don’t know what the right answer is.
The old tree falling in a forest with nobody to hear it: if you say it makes a "sound", you take "sound" to mean the vibration of the air; if you say it doesn't, you take "sound" to mean the qualia.
"Understanding" likewise.
> The ship’s computer on Star Trek could run diagnostics on itself, the ships, strange life forms and even alien pieces of technology.
1. "Execute self-diagnosis script" doesn't require self-reflection or anything else like that, just following a command. I'd be surprised if any of the big AI labs have failed to create some kind of internal LLM-model-diagnosis script, and I'd be surprised if zero of the staff in each of them has considered making the API to that script reachable from a development version of the model under training. No reason for normal people like thou and I to have access to such scripts.
2. Not that the absence of such a feature says much, though. If we humans could reliably self-diagnose our own minds, we wouldn't need therapists. This is basically "computer, send yourself to the therapist and tell me what the therapist said about you".
> When the Star Trek computers behaved inconsistently like that (which was rare, rather than the norm), they would (rightly) be considered to be malfunctioning.
Those computers (and the ships themselves) went wrong so regularly on the shows that, IRL, they'd be the butt of more jokes than the Russian navy.