An LLM is an autocorrect inasmuch as humans are replicators. Something gets seriously lost in this "explanation".

Humans do much more than replicate; replication is just one function among many.

What does an LLM do, other than output a weighted prediction of tokens based on its training data? Everything you can use an LLM for is a manipulation of that functionality.
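To make that concrete, here is a minimal sketch of that single prediction step. The vocabulary and the logits are made up for illustration; in a real LLM the scores come from a trained network, but the "weighted prediction" itself is just softmax plus sampling:

```python
import math
import random

# Hypothetical vocabulary and raw scores (logits) standing in for a
# trained model's output for the next-token position.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = [2.1, 0.3, -1.0, 0.5, 1.2]

# Softmax turns the scores into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Sampling from that distribution is the whole "output" step;
# generation is this, repeated one token at a time.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(next_token)
```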

> An LLM is an autocorrect inasmuch as humans are replicators.

an autocorrect... on steroids.

What are humans, fundamentally, then?

That is a good question, and I guess we have made some progress since Plato, whose definition was "a featherless biped".

But I think we still do not know.