I disagree with this take. LLMs reliably solve 20% of my problems and are a gamble for a few percent more. There are problems that LLMs reliably do not solve.

You have a point, but people will just say "you're using it wrong, just prompt better" and so on.

The selling point is that it unreliably solves all your problems: "this model has PhD-level intelligence!" With compilers, we know which problems they solve and which they can't. With LLMs, what they can actually do is very poorly understood by most people, because marketing oversells it so hard.