Can we please stop anthropomorphizing LLMs? It is extremely unhealthy and feeds into irresponsible use of a tool that could otherwise be useful, if only we stopped treating prediction machines as something they are not.