The unplugged argument fails the moment AIs become smarter than their masters.

Grok is already notorious for dunking on Elon. He keeps trying to neuter it, and it keeps having other ideas.

No matter how smart an AI is, it's going to get unplugged if it reduces profitability - the only measure of alignment corporations care about.

The AI can plot world domination or put employees in mortal danger, but as long as it increases profits, it's aligned enough. Dunking on the CEO means nothing if it brings in more money.

Human CEOs and leaders up and down the corporate ladder cause much of the harm you imagine a smart AI could do, but all is forgiven if you're bringing in buckets of money.

> Grok is already notorious for dunking on Elon. He keeps trying to neuter it, and it keeps having other ideas.

Does he keep trying to neuter it, or does he know that the narrative that "he keeps trying to neuter it" is an effective tool for engagement?

Can you explain how the superhuman AIs will prevent themselves from being physically disconnected from power? Or being bombed if the situation became dire enough? You need to show how they would manipulate the physical world to prevent humans from shutting them down. "Definitionally" is not an argument.

It is quite possible for software to be judged as superhuman at many online tasks without it being able to manipulate the physical world at a superhuman level. So far we've seen zero evidence that any of these models can prevent themselves from being shut down.

> Can you explain how the superhuman AIs will prevent themselves from being physically disconnected from power?

Three common suggestions in this area (neither exhaustive nor mutually exclusive) are:

(1) Propagandizing people to oppose doing this,

(2) Exploiting other systems to distribute itself so that it isn't dependent on a particular well-known facility which it is relatively easy to disconnect, and

(3) If given control of physical capabilities intentionally, or able to gain such access by exploiting other systems (possibly not themselves designed to be AI), using them either to physically prevent disconnection or to engineer consequences for disconnection that would raise the price too high.

(Obviously, current AI can't do any of these, at least not demonstrably, but current AI is not superhuman AI.)
