Many people here point out that LLMs WILL be anthropomorphised, and I don't think that's a surprise, because they're the most human-like thing other than humans themselves.

However, I think we should follow “do not anthropomorphise” by acknowledging that while LLMs have real reasoning skills, and can appear to have intent depending on what’s in their context, they don’t have “understanding” the way humans do.

They are absurdly good statistical next-token predictors. Keeping that in mind is really helpful whether you use them for coding, learning, advice, conversation, or anything else.
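If it helps to see what “next-token predictor” means concretely, here is a minimal sketch using the Hugging Face transformers library and GPT-2 (the model, the prompt, and the top-5 cutoff are just illustrative choices on my part, not anything specific to the models people are discussing here):

```python
# Minimal illustration of next-token prediction: the model's entire output is a
# probability distribution over what token comes next.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probabilities for the token that would follow the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item()):>10s}  {prob.item():.3f}")
```

Everything a chat model does is built on repeating that single step, which is exactly why “understanding” is the wrong mental model even when the output looks intentional.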

Anthropomorphising LLMs is inevitable, but we should at least do it responsibly.

> Anthropomorphising LLMs is inevitable, but we should at least do it responsibly.

One way would be for vendors to have the models give dry answers, with less of the "That's a great question!" type of response. Just keep it factual.
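Users can already approximate that behaviour today with a system prompt; a rough sketch with the OpenAI Python client (the model name and the exact instruction wording are placeholders, not a claim about how any vendor actually tunes their defaults):

```python
# Sketch of the "dry, factual answers" idea expressed as a system prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer factually and concisely. Do not compliment the user, "
                "do not say things like 'Great question!', and avoid filler."
            ),
        },
        {"role": "user", "content": "Why is the sky blue?"},
    ],
)

print(response.choices[0].message.content)
```

Of course the suggestion above is about vendor defaults rather than per-user prompts, but the mechanism would be the same: bake the dry, factual instruction in so nobody has to ask for it.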