I know this is kind of old hat by now, but it still kind of blows my mind that I can upload a hand-drawn decision tree and get a transcribed DOT file back on consumer hardware, using a pile of linear algebra that wasn't even particularly specialised for this purpose. It's just a capability the model picked up along with everything else during training.
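For anyone who hasn't used Graphviz: the kind of output you get back is plain text in the DOT language, something like this (node names here are made up for illustration):

```dot
digraph decision_tree {
    "raining?" -> "take umbrella" [label="yes"];
    "raining?" -> "cold?"         [label="no"];
    "cold?"    -> "wear a coat"   [label="yes"];
    "cold?"    -> "go as you are" [label="no"];
}
```

That's the whole transcription target: a structured, renderable text format recovered from a messy photo of a sketch.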

If you had shown this to someone in 2018 they wouldn't have hesitated to call it an AGI. We've truly reached the point where one model performs at usable levels across a huge range of tasks. You don't need to assemble a training set of hand-drawn diagrams and corresponding DOT files and train some kind of CNN on it; you just throw the task at a preexisting LLM and get a usable result.

We always talk about the negatives (on most tasks it's worse than a human domain expert, the results are soulless, the societal implications are scary), but this kind of generality really is a monumental achievement.

I totally agree. Running these things locally feels different, as if the magic were closer at hand.

Well, this is the magic of LLMs: they pick up skills incidentally and do them well, while purpose-built specialised models often turn out pretty average.