Why don't people believe that LLMs are invertible when we already had GPT-2 running as a lossless text compressor in a demo? That trick works by exploiting exactly that property of the model: its next-token distribution drives an entropy coder (typically arithmetic coding), and because decompression reruns the same model deterministically on the same context, every coding step can be undone and the original text comes back bit for bit.
https://news.ycombinator.com/item?id=23618465 (the site the submission pointed to is down now, but it's proof that GPT-2 worked as a lossless text compressor)
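To make the mechanism concrete, here's a minimal sketch of prediction-driven lossless compression. It is not the linked demo's actual code: a toy deterministic stand-in (`model_probs`) replaces GPT-2's softmax output, and exact `Fraction` arithmetic stands in for a real streaming range coder. The round trip is still genuinely lossless, and for the same reason: the encoder and decoder run the identical model on the identical context.

```python
# Sketch: arithmetic-coding-style compression driven by a deterministic "model".
# Assumptions: toy character vocab and toy model instead of GPT-2; exact Fractions
# instead of a streaming integer range coder (fine for short strings).
from fractions import Fraction

VOCAB = list("abcdefghijklmnopqrstuvwxyz _")

def model_probs(context):
    """Stand-in for an LM's next-token distribution: deterministic given the
    context, mildly biased toward repeating the last character."""
    base = Fraction(1, 2 * len(VOCAB))
    probs = {c: base for c in VOCAB}
    if context and context[-1] in probs:
        probs[context[-1]] += Fraction(1, 2)   # extra mass on repetition
    total = sum(probs.values())                # renormalize so it sums to 1
    return {c: p / total for c, p in probs.items()}

def cum_intervals(probs):
    """Map each symbol to its [low, high) slice of the unit interval."""
    lo, out = Fraction(0), {}
    for c in VOCAB:
        out[c] = (lo, lo + probs[c])
        lo += probs[c]
    return out

def encode(text):
    """Narrow [0, 1) once per character, using the model's prediction each step."""
    low, high = Fraction(0), Fraction(1)
    for i, ch in enumerate(text):
        c_lo, c_hi = cum_intervals(model_probs(text[:i]))[ch]
        span = high - low
        low, high = low + span * c_lo, low + span * c_hi
    return low   # any number in the final [low, high) identifies the text

def decode(code, length):
    """Rerun the same model on the decoded prefix and undo each narrowing step."""
    low, high = Fraction(0), Fraction(1)
    out = ""
    for _ in range(length):
        iv = cum_intervals(model_probs(out))   # same model, same context
        span = high - low
        for c, (c_lo, c_hi) in iv.items():
            if low + span * c_lo <= code < low + span * c_hi:
                out += c
                low, high = low + span * c_lo, low + span * c_hi
                break
    return out

msg = "lossless compression is just prediction"
assert decode(encode(msg), len(msg)) == msg   # exact round trip
```

In a real implementation the compressed output is the binary expansion of a number inside that final interval, so a better model means a tighter interval and fewer bits. Swap `model_probs` for GPT-2's predicted distribution and you have the linked demo's scheme in spirit.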