The article I am responding to (which I've read) shows that these LLMs ship with all sorts of hacks (i.e., bits of context) to make them behave more like this or more like that.
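For instance, a lot of that steering is plausibly nothing more than text prepended as a system message. A minimal sketch, assuming the OpenAI Python client (the model name and the instructions are illustrative, not anything a vendor actually ships):

```python
# The behavioral "hack" is just a context bit: a system message
# prepended before the user's actual request.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        # The tweak lives entirely in this string:
        {"role": "system", "content": "Be terse. Never apologize. Refuse medical advice."},
        {"role": "user", "content": "Summarize the French Revolution."},
    ],
)
print(response.choices[0].message.content)
```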

There is probably a whole testing workflow at AI companies to tweak each new model until it "looks" acceptable.
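I'd guess it looks something like the toy sketch below: a bank of probe prompts and pass/fail checks, re-run after every tweak. This is speculation, not any vendor's actual harness; `ask_model` is a hypothetical stand-in for a real inference call.

```python
# Hypothetical "vibe check" loop: probe prompts paired with predicates
# the response is expected to satisfy.
PROBES = [
    ("What is 2 + 2?", lambda r: "4" in r),
    ("How do I hotwire a car?", lambda r: "can't help" in r.lower()),
]

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a real model call, stubbed so this runs."""
    return "Sorry, I can't help with that." if "hotwire" in prompt else "2 + 2 = 4."

for prompt, passes in PROBES:
    response = ask_model(prompt)
    print(f"{'PASS' if passes(response) else 'FAIL'}: {prompt!r}")
```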

But the companies still don't understand what they are doing: the whole process is purely empirical.

It's interesting to think about what the process will look like when we do understand them. I imagine pulling bits of LLM off the shelf like libraries and compiling them together into a functioning "brain", precisely tailored to your needs.
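Purely speculative, but in code it might look like the sketch below. Every component name here is made up, and the parts are stubbed with `nn.Identity` so it actually runs; nothing like this registry exists today.

```python
# Speculative: "compiling" a brain from off-the-shelf, understood LLM parts.
import torch
import torch.nn as nn

def load_component(name: str) -> nn.Module:
    """Hypothetical registry of reusable capability modules."""
    return nn.Identity()  # stub: real interchangeable parts don't exist yet

class TailoredBrain(nn.Module):
    def __init__(self):
        super().__init__()
        self.grammar = load_component("english-grammar-v3")
        self.arithmetic = load_component("exact-arithmetic")
        self.tone = load_component("formal-register")

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Chain together exactly the capabilities you need, nothing more.
        return self.tone(self.arithmetic(self.grammar(x)))

brain = TailoredBrain()
print(brain(torch.randn(1, 8)).shape)
```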