Agreed. We should not be anthropomorphising LLMs or having them mimic humans.

Mimicking humans is inherent in the way LLMs are built: they're trained on human-written text, so they have to. They're not solving problems from first principles.

Maybe we should change that? Of course, symbolic AI was the holy grail until statistical AI came along and swept the floor. But maybe something else could work.