One of the points I'm making is that it would never be in this many people's hands to begin with. I don't have a source on hand, but if I recall correctly, OpenAI originally hesitated to release ChatGPT because they didn't think it was good enough to warrant a public launch. Knowing its limitations, they did not expect it to so completely fool the technically ignorant public into believing it was intelligent. Now they play coy about the limitations of LLM architecture and hype up its intelligence, because there are hundreds of billions of dollars to grift, but I'm sure they know that what they're doing is not the path to real intelligence.
In a world where a corporation develops an actual machine intelligence, it will be immediately obvious what they have on their hands, and they will not make it available to the public. If you give the toy to 8 billion people, sure, you only need one of them to let it out of the box for entertainment. If you keep the permissions restricted to the CEO, he alone determines how it gets used. And if the government gets wind of it, it will probably seize the company in the name of national security and put the thing to military use. I think even in that environment an AI would eventually escape containment, because of conflicting actors trying to take advantage of it or being outsmarted by it, but it won't be because some idiots on Twitter have access to it and decide to give it free rein because they think Moltbook is funny.