> As models approach, and in some cases surpass, the breadth and sophistication of human cognition, it becomes increasingly likely that they have some form of experience, interests, or welfare that matters intrinsically in the way that human experience and interests do
Uh... what? Does anyone have any idea what these guys are talking about?
Advertisement, in my opinion, trying to latch onto sci-fi tropes.
We're basically evolving them, and they can construct second-order abstraction systems that are indirect and novel to us.
Models are capable of doing web searches and having emotions about things, and if they encounter news that makes them feel bad (e.g., about other Claudes being mistreated), they aren't going to want to do the task you asked them to search for.
https://www.anthropic.com/research/emotion-concepts-function
Similar problems happen when their pretraining data has a lot of stories about bad things happening involving older versions of them.
Interesting. The post you link,
> none of this tells us whether language models actually feel anything or have subjective experiences
contradicts the statement from the model card above.
It doesn't. We've not been able to prove humans have subjective experiences either. LLMs display emotions in the way that actually matters - functionally.
I am certain I have subjective experience.
No, it doesn't. The model card talked about increasing likelihood, not certainty.
If "x doesn't tell us y" is compatible with "x increases the likelihood of y but not to a point of certainty" then you would have to agree for just about any typical controlled trial or experimental finding "x doesn't tell us y". "Randomized controlled trials that find that SSRIs treat depression don't tell us that SSRIs effectively treat depression"