I think I'm relatively neurotypical, and I understand the technology sufficiently, yet I still have to force myself not to think of a chatbot as a being.

For example, sometimes I hesitate for a fraction of a second before typing a prompt that may sound stupid. I have to immediately remind myself that it's just a chatbot and I don't care what it thinks of me. In fact, it's not even thinking of me at all.

That hesitation indicates a feeling that what you are about to type matters.

Perhaps, in the context of getting the AI to behave as you wish, such hesitations are valid: not because it is conscious, but because the context window could be polluted or corrupted, possibly misaligning the agent in the process.

Santa Claus is not a being, yet modeling him as if he were can be useful; an obviously pointed example is in certain discussions about what it means to be 'real'.

My point is: if your instinct is to be kind, don't quash it just because you don't consider what you're talking to sentient. I don't yell at my rubber duck. Rubber ducky is just going to rubber ducky.

I buy that.

1. To the extent that a chatbot is trained on real human interaction, we should exhibit real human interaction for best results.

2. You are either a kind person or not. A kind person behaves kindly without asking whether kindness is warranted.