I know anthropomorphizing LLMs has been normalized, but holy shit. I hope the language in this article is intentionally chosen for a dramatic effect.
The thing is... what else can you do? All the advice on getting results out of LLMs talks the same way, as if it's a negotiation or a set of instructions given to a person.
You can do a mental or literal search-and-replace of every reference to the LLM with "it" if you like, but that doesn't change the interaction.
Fascinating. This is invisible to me; what anthropomorphising did you notice that stood out?
Agreed. We should not be anthropomorphising LLMs or having them mimic humans.
It's inherent in the way LLMs are built, from human-written texts, that they mimic humans. They have to. They're not solving problems from first principles.
Maybe we should change that? Of course symbolic AI was the holy grail until statistical AI came in and swept the floor. Maybe something else though.