I don't understand why "it's just predicting words, bro" is still seen as a valuable argument. A LOT has to happen to accurately predict the next word(s) for any given topic.

If that's supposed to be a dismissal, it's not a good one.

Because people think it's "intelligent" just because it manipulates words, and you get people like Andrew Yang and Elon Musk getting one-shotted by it.

Yes, it can solve a lot of things, but an LLM isn't going to put everyone out of work, the thing after the LLM will.

You sound exactly like Andrew Yang, the one you're criticizing: confident-sounding predictions with no substance.

The burden of proof lies with the side making claims about what AI will do, not with the ones denying them.