> I've started to think of LLMs as a form of lossy compression of available knowledge which, when prompted, produces "facts".

That is almost exactly what they are and what you should treat them as.

A lossily compressed corpus of publicly available information, with a dose of randomness mixed in. The most fervent skeptics like to call LLMs "autocorrect on steroids", and they are not really wrong.

An LLM is autocorrect inasmuch as humans are replicators. Something important gets lost in that "explanation".

Humans do much more than replicate; that is one function of many.

What does an LLM do, other than output a weighted prediction of the next token based on its training data? Everything you can use an LLM for is a manipulation of that functionality.
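To put that mechanically, here is a toy sketch of what "a weighted prediction of tokens" plus a dose of randomness amounts to. The logits and the four-word vocabulary are made up for illustration and come from no real model; an actual LLM scores tens of thousands of tokens at every step, but the sample-from-a-weighted-distribution part is the same idea.

```python
import math
import random

# Hypothetical scores a model might assign to candidate next tokens
# after a prompt like "The capital of France is". Purely illustrative.
logits = {"Paris": 9.2, "Lyon": 4.1, "a": 3.5, "the": 2.8}

def sample_next_token(logits, temperature=0.8):
    """Softmax over the scores (the 'weighted prediction'), then a random draw."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())  # subtract max for numerical stability
    exp = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exp.values())
    probs = {tok: e / total for tok, e in exp.items()}
    # random.choices does the weighted draw -- the 'randomness' part
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print(sample_next_token(logits))  # usually "Paris", occasionally something else
```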

> An LLM is autocorrect inasmuch as humans are replicators.

an autocorrect... on steroids.

What are humans, fundamentally, then?

That is a good question, and I guess we have made some progress since Plato, whose definition was: a man is a featherless biped.

But I think we still do not know.