This was true, but then it wasn't. A few years ago the research world had a moment when the machinery could reliably solve multi-step problems requiring intermediate results, and could solve problems in domains it was not specifically trained on. That caused a lot of excitement, and several hundred billion dollars in various investments. Since no one actually knows how all of it works, not even the builders, here we are.

"Since no one actually knows how all of it works, not even the builders, here we are."

To me this is the most bizarre part. Have we ever had a technology deployed at this scale without a true understanding of its inner workings?

My fear is that the general public's perception of AI will be damaged, since for most people LLMs = AI.

This is a misconception: we absolutely do know how LLMs work; that's how we can write them and publish research papers.

The idea that we don't is tabloid journalism. It comes from the fact that the output is (usually) randomised, which is taken to mean, by those who lack the technical chops, that programmers "don't know how it works" because the output is non-deterministic.

This is notwithstanding that we can absolutely reproduce the output by turning off the randomisation (temperature 0).
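To make the point concrete, here is a minimal sketch of where the randomness lives in decoding. The toy logits and the sample_token helper are illustrative assumptions, not any particular model's sampler: with temperature > 0 the next token is drawn from a softmax distribution, while temperature 0 collapses to argmax and the output becomes fully repeatable.

    import numpy as np

    def sample_token(logits, temperature=1.0, rng=None):
        """Pick the next token id from raw logits.

        temperature > 0: scale logits, softmax, then sample (stochastic).
        temperature == 0: take the argmax (greedy decoding, deterministic).
        """
        logits = np.asarray(logits, dtype=np.float64)
        if temperature == 0.0:
            return int(np.argmax(logits))          # same logits -> same token, every time
        rng = rng or np.random.default_rng()
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())       # numerically stable softmax
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs)) # the randomness lives only here

    # Toy logits for a 4-token vocabulary (made-up numbers for illustration).
    logits = [2.0, 1.0, 0.5, -1.0]
    print([sample_token(logits, temperature=1.0) for _ in range(5)])  # varies run to run
    print([sample_token(logits, temperature=0.0) for _ in range(5)])  # always [0, 0, 0, 0, 0]

The non-determinism people point to is a property of the sampling step, not of our understanding of the model.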

Humanity used fire for like a bazillion years before figuring out thermodynamics

Are you sure you're talking about LLMs? These sound more like traditional ML systems like AlphaFold or AlphaProof.