> In fact, most of the time models fail to demonstrate introspection—they’re either unaware of their internal states or unable to report on them coherently.

You got the wrong takeaway from your link.

The parent said: "I argue that the model has no access to its thoughts at the time."

This is falsified by that study, which shows that generalized introspection does exist in frontier models. It isn't consistent, but it is demonstrable.

"no access" vs. "limited access"

There is no way for a user to know whether the LLM has introspection in a given case, and given that the answer is almost always no, it is much better for everyone to assume that it does not.

You cannot trust that the model has introspection, so for all intents and purposes, for the end user, it doesn't.
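
To make that concrete, here's a rough sketch of about the best a spot check a user can do looks like (assuming the OpenAI Python client; the model name and prompt are illustrative): ask the model to report its own next-token probability, then compare against the actual logprobs. Even agreement here wouldn't establish introspection, since the self-report could be a lucky guess or post-hoc confabulation.

```python
# Sketch: compare a model's self-reported next-token probability
# against its actual logprobs. Model name and prompt are assumptions,
# not from the cited study.
from math import exp
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumption: any chat model exposing logprobs

prompt = "Complete with one word: The capital of France is"

# 1. Ask the model to "introspect" on its own distribution.
report = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content":
               f'{prompt} ... Before answering, estimate the probability '
               f'(0-100%) that your next word would be "Paris". '
               f'Reply with just the number.'}],
)
print("self-reported:", report.choices[0].message.content)

# 2. Check the actual next-token distribution.
actual = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": prompt}],
    max_tokens=1,
    logprobs=True,
    top_logprobs=5,
)
for cand in actual.choices[0].logprobs.content[0].top_logprobs:
    print(f"{cand.token!r}: {exp(cand.logprob):.1%}")

# Agreement on one case proves nothing either way; only aggregate,
# controlled comparisons (as in the study) carry any evidence.
```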

I would say "limited and unreliable access". What it reports as the cause might indeed be the cause, but it's in no way certain.