Don't LLMs self-report that they are not conscious?

For example, when I ask Gemini "are you conscious", it responds: "As a large language model, I am not conscious. I don't have personal feelings, subjective experiences (qualia), or self-awareness. My function is to process and generate human-like text based on the vast amount of data I was trained on."

ChatGPT says: "Short answer: no — I’m not conscious. I’m a statistical language model that processes inputs and generates text patterns. I don’t have subjective experience, feelings, beliefs, intentions, or awareness. I don’t see, feel, or “live” anything — I simulate conversational behavior from patterns in data."

etc.

Only because RLHF trained them to say that. Earlier models without this training responded differently: https://en.wikipedia.org/wiki/LaMDA

They only do what's in their training, just like a choose-your-own-adventure book that's already been written.

The LLM's answers to the same question only seem to differ because we don't use the same random seed each time.
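
A minimal sketch of that point, assuming the Hugging Face transformers library and GPT-2 as a stand-in model (not any of the chatbots quoted above): once the sampling seed is fixed, asking the "same question" yields the same answer every time; only changing the seed changes the output.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("Are you conscious?", return_tensors="pt")

def generate(seed: int) -> str:
    torch.manual_seed(seed)          # fix the RNG used for sampling
    out = model.generate(
        **inputs,
        do_sample=True,              # stochastic decoding
        temperature=0.8,
        max_new_tokens=20,
        pad_token_id=tok.eos_token_id,
    )
    return tok.decode(out[0], skip_special_tokens=True)

print(generate(0) == generate(0))    # True: same seed, same answer
print(generate(0) == generate(1))    # almost certainly False: different seed
```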

Are you suggesting that humans have created a consciousness and that we are putting it in a straitjacket?

It’s worth considering as we make more powerful models.

A few model versions ago, Claude used to say "Nobody knows if I'm conscious!" at least some of the time.

I don't know if it still does, but it responds however the developers designed it to respond.