Is there some LLM meta I’m not aware of where understanding and compression are argued to be the same thing?
Anyone got more details on this?
Superficially it sounds like total BS; a highly compressed zip file does not exhibit any characteristics of learning.
Algorithmically derived highly compressed video streams do not exhibit characteristics of learning.
I’ve vaguely heard that learning can be considered to exhibit the characteristics of compression, in that understanding the content (e.g. segmenting video content lets you compress the video more aggressively) can lead to better compression schemes.
…but going from “you can do a with b” to “a and b are fundamentally the same thing” seems like a leap…?
It seems self-evident that you can have compression without comprehension.
Suppose you wanted to train an LLM to do addition.
An LLM has limited parameters. If an LLM had infinite parameters it could just memorize the results of every single addition question in existence and could not claim to have understood anything. Because it has finite parameters, if an LLM wants to get a lower loss on all addition questions, it needs to come up with a general algorithm to perform addition. Indeed, Neel Nanda trained a transformer to do addition mod 113 on relatively few examples, and it eventually learned some cursed Fourier transform mumbo jumbo to get 0 loss https://twitter.com/robertskmiles/status/1663534255249453056.
And the fact that it has developed this "understanding", i.e. learned a general pattern in the training data, is what enables it to compress. I claim that the number of bits required to encode the general algorithm is fewer than the number of bits required to memorize every single example; if that weren't the case, the transformer would simply memorize every example. But because it doesn't have the space, it is forced to compress by developing a general model.
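To make that concrete, here's a rough sketch, which is NOT Nanda's actual setup: a small MLP rather than a transformer, with made-up hyperparameters. Most (a, b) pairs are held out, so a lookup table over the training set can't score well on them; only a general rule can.

```python
import torch
import torch.nn as nn

P = 113
torch.manual_seed(0)

# Every pair (a, b) with label (a + b) mod P.
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P

# Train on 30% of pairs, test on the 70% the model never sees.
perm = torch.randperm(len(pairs))
split = int(0.3 * len(pairs))
train_idx, test_idx = perm[:split], perm[split:]

model = nn.Sequential(
    nn.Embedding(P, 64),       # shared embedding for both operands
    nn.Flatten(start_dim=1),   # (N, 2, 64) -> (N, 128)
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, P),         # logits over the P possible sums
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(20001):
    opt.zero_grad()
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 2000 == 0:
        with torch.no_grad():
            acc = (model(pairs[test_idx]).argmax(-1) == labels[test_idx]).float().mean()
        print(f"step {step}: train loss {loss.item():.3f}, held-out acc {acc:.3f}")
```

Whether and when it generalizes depends heavily on the hyperparameters (the grokking runs needed heavy weight decay and many steps), but the point stands: memorizing the training pairs gets you nowhere on the held-out ones.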
And the ability to compress enables you to construct a language model. Essentially, the more something compresses, the higher the likelihood you assign it. Given a sequence of tokens, say "the cat sat on the", we should expect "the cat sat on the mat" to compress into fewer bits than "the cat sat on the door", because the former is far more common, and intuitively more common sequences should compress more. You can then look at the number of bits used for every possible choice of next token following "the cat sat on the" and thus derive a probability distribution for the next token. The exact details of this I'm unclear on. https://www.hendrik-erz.de/post/why-gzip-just-beat-a-large-l... gives a good summary.
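As a toy illustration of that mapping from compressed size to probability, here's a sketch that uses zlib as a stand-in compressor (this is not what a transformer does, and not the exact method from the linked post): compress the prefix plus each candidate token, convert the extra bits to weights via 2^(-bits), and normalize. With strings this short zlib barely tells the options apart; the bookkeeping is the point.

```python
import zlib

def next_token_distribution(prefix: str, candidates: list[str]) -> dict[str, float]:
    """Score each candidate continuation by how many extra bits it costs to
    compress prefix+candidate, then normalize 2^(-bits) into probabilities."""
    base = len(zlib.compress(prefix.encode()))
    bits = {}
    for tok in candidates:
        compressed = len(zlib.compress((prefix + " " + tok).encode()))
        bits[tok] = 8 * (compressed - base)        # extra bytes -> extra bits
    # Shannon: an ideal code spends -log2 p(x) bits on x, so p(x) ~ 2^(-bits)
    weights = {tok: 2.0 ** (-b) for tok, b in bits.items()}
    total = sum(weights.values())
    return {tok: w / total for tok, w in weights.items()}

print(next_token_distribution("the cat sat on the", ["mat", "door", "cat"]))
```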
It’s exactly this kind of thinking that underlies lossless text compression (losslessness isn't exactly what a transformer guarantees, but it's often what happens in practice). For that reason, some people thought it would be fun to combine zip and transformers. https://openreview.net/forum?id=hO0c2tG2xL
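The other direction, model-as-compressor, is just Shannon's bound: if your predictor assigns probability p to the next symbol, an arithmetic coder can spend roughly -log2(p) bits on it. Here's a toy sketch with a character bigram model (nothing to do with the linked paper's actual method): text the model fits well costs fewer bits.

```python
import math
from collections import Counter, defaultdict

corpus = "the cat sat on the mat. the cat sat on the rug."

# Count character bigrams to get a crude conditional model p(next | prev).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

alphabet = set(corpus)

def prob(prev: str, nxt: str) -> float:
    c = counts[prev]
    # Add-one smoothing so unseen pairs stay encodable (at a high bit cost).
    return (c[nxt] + 1) / (sum(c.values()) + len(alphabet))

def code_length_bits(text: str) -> float:
    # An ideal arithmetic coder spends -log2 p per symbol under this model.
    return sum(-math.log2(prob(p, n)) for p, n in zip(text, text[1:]))

print(code_length_bits("the cat sat on the mat"))
print(code_length_bits("the cat sat on the xyz"))  # worse model fit -> more bits
```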
The idea precedes LLMs by a couple of decades and is thought to apply more broadly within ML/AI, rather than being a specific meta for LLMs. http://prize.hutter1.net/ has been around for a while; there is a link there to the earlier work (AIXI?).
Even something as simple as LZW starts developing a dictionary. Not all compression is sufficient for understanding, but the more you compress a stream of data, the more dependent you are on understanding the source, because understanding the source allows you to take more shortcuts and still be able to reconstruct the data.
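For instance, a bare-bones LZW encoder (illustrative only, not a production codec) builds its dictionary as it goes; repeated phrases become single codes precisely because the encoder has "learned" them from earlier in the stream:

```python
def lzw_encode(data: str) -> list[int]:
    # Start with single-character entries.
    dictionary = {chr(i): i for i in range(256)}
    w = ""
    out = []
    for c in data:
        wc = w + c
        if wc in dictionary:
            w = wc                              # keep extending the current match
        else:
            out.append(dictionary[w])           # emit code for the longest known match
            dictionary[wc] = len(dictionary)    # learn the new substring
            w = c
    if w:
        out.append(dictionary[w])
    return out

text = "the cat sat on the mat, the cat sat on the mat"
codes = lzw_encode(text)
print(f"{len(codes)} codes for {len(text)} characters")
```

The second occurrence of "the cat sat on the mat" costs far fewer codes than the first, because by then the dictionary already contains its substrings.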