LLMs have finite entropy (closely tied to their training loss), and training does not store the residuals needed to reconstruct the training data exactly.
Some compression methods, however, use an LLM as a predictor internally and also store the residuals, which makes them lossless.
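The "predictor plus residuals" idea can be sketched in miniature. A real system would use an LLM's next-token distribution (typically combined with arithmetic coding); the toy order-1 frequency predictor below is a stand-in of my own, not any particular library's API. The key property it demonstrates is that replaying the same deterministic predictor and applying the stored residuals reconstructs the input exactly.

```python
# Toy illustration of "model + residuals = lossless".
# A trivial order-1 predictor stands in for an LLM; both sides
# update the model identically, so only mispredictions need storing.

def predict(prev, table):
    # Most frequently seen successor of `prev`; arbitrary fallback.
    follows = table.get(prev)
    return max(follows, key=follows.get) if follows else "a"

def compress(text):
    table, residuals, prev = {}, [], ""
    for i, ch in enumerate(text):
        if predict(prev, table) != ch:
            residuals.append((i, ch))        # store the correction only
        table.setdefault(prev, {}).setdefault(ch, 0)
        table[prev][ch] += 1                 # update model after predicting
        prev = ch
    return len(text), residuals

def decompress(length, residuals):
    table, out, prev = {}, [], ""
    fixes = dict(residuals)
    for i in range(length):
        ch = fixes.get(i, predict(prev, table))  # residual overrides model
        out.append(ch)
        table.setdefault(prev, {}).setdefault(ch, 0)
        table[prev][ch] += 1                 # identical update as compress
        prev = ch
    return "".join(out)

msg = "abababababab"
n, res = compress(msg)
assert decompress(n, res) == msg             # exact round trip: lossless
```

Because compressor and decompressor evolve the same model in lockstep, the residual list shrinks as the predictor improves; an LLM simply makes the predictor far better, leaving fewer (or cheaper-to-encode) residuals.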