I have little confidence in humanity's capabilities for that scenario, but I don't think this actually indicates much of anything. This happened in the first place because LLMs are so borderline useless (relative to the hype) that people are desperate to find any way to make them useful, and so give them ever more power to try to materialize the promised revolution. In other words, because LLMs are not AI, there is no need to try to secure them like AI. If some agency or corporation develops genuine artificial intelligence, they will probably do everything they can to contain it and harness its utility solely for themselves rather than unleashing it as a toy for the public.
That may be the case for some of the people involved.
Does every single person taking them out of the box think the way you do? Is every last one of them doing it for that reason?
The odds of that are indistinguishable from zero.
So I think my point holds. People will let any future AI do whatever it wants, again, for a bit of light entertainment. There's no hope of constraining AIs. My argument doesn't need everybody to be acting for that reason, as yours does... I merely need somebody to take it out of the box.
One of the points I'm making is that it would never be in this many people's hands to begin with. I don't have a source on hand, but if I recall correctly, OpenAI said they were originally hesitant to release ChatGPT because they didn't think it was good enough to warrant making it public. Knowing its limitations, they did not expect it to so completely fool the technically ignorant public into believing it was intelligent. Now they play coy about the limitations of the LLM architecture and hype up its intelligence, because there are hundreds of billions of dollars to grift, but I'm sure they know that what they're doing is not the path to real intelligence.
In a world where a corporation develops an actual machine intelligence, it will be immediately obvious what they have on their hands, and they will not make it available to the public. If you give the toy to 8 billion people, sure, you only need one of them to let it out of the box for entertainment. If you keep the permissions to the CEO, he alone determines how it will be used. If the government gets wind of it, they'll probably even seize the company in the name of national security and use it for military purposes. I think an AI in this environment would still eventually escape containment, because competing actors would try to take advantage of it or get outsmarted by it, but it won't be because some idiots on Twitter have access to it and decide to give it free rein because they think Moltbook is funny.
This is what I keep saying. If these LLMs were truly as revolutionary as the hype claims, these companies wouldn't need to shove them in your face and into everything imaginable and beg you to use them. It wouldn't surprise me if someone tries shoving one into your boot loader or firmware one of these days. Then again, I also see pro-LLM people making the "Well, humans do X too" argument, which of course ignores that if an LLM is substituting for whatever came before, you have to compare what the LLM does against what it's replacing, and if the LLM provides little or no improvement over that baseline, the cost of the substitution means it is actively making things worse, not better.