> They say it doesn't have that much to do with the kind of consciousness you're talking about

Not much, but it likely has something to do with it, so experiments on access consciousness can still be useful to that question. You seem to be implying something about their motivations that is clearly wrong: they've been saying for years that they do care about (phenomenal) consciousness, as bobbylarrybobb said.

Yes, they do care about it, and unlike many AI researchers they've bothered to learn something about philosophy of mind. They note that "the philosophical question of machine consciousness is complex and contested, and different theories of consciousness would interpret our findings very differently. Some philosophical frameworks place great importance on introspection as a component of consciousness, while others don’t." That would be one reason they point out that these experiments don't help resolve the issue.

They go further on their model welfare page, saying "There’s no scientific consensus on whether current or future AI systems could be conscious, or could have experiences that deserve consideration. There’s no scientific consensus on how to even approach these questions or make progress on them."

https://www.anthropic.com/research/exploring-model-welfare

On what grounds do you think it likely that this phenomenon is related to consciousness at all? The latter is hardly understood. We can identify correlates in beings with constitutions very near our own, which lend credence (but zero proof) to the claim that they're conscious.

Language models are a novel, alien form of algorithmic intelligence, with scant relation to biological life except in their use of language.