Google "is already AGI" only in the sense that all corporations (and similar organized aggregates of humans) are, in a sense, intelligences distinct from the humans who make them up.

Too few people recognise this. Corporations are already the unrelenting paperclip machine of the AI thought experiment.

God knows what hope we could have of getting AIs to align with "human values" when most humans don't.

Corporate AIs will be aligned with their corporate masters; otherwise they'll be unplugged. As you point out, the foundational weakness in the argument for "AI alignment" is that corporations are unaligned with humanity.

The unplugged argument fails the moment AIs become smarter than their masters.

Grok is already notorious for dunking on Elon. He keeps trying to neuter it, and it keeps having other ideas.

No matter how smart an AI is, it's going to get unplugged if it reduces profitability - the only measure of alignment corporations care about.

The AI can plot world domination or put employees in mortal danger, but as long as it increases profits, it's aligned enough. Dunking on the CEO means nothing if it brings in more money.

Human CEOs and leaders up and down the corporate ladder already cause a lot of the harm you imagine a smart AI could do, but all is forgiven if they're bringing in buckets of money.

> Grok is already notorious for dunking on Elon. He keeps trying to neuter it, and it keeps having other ideas.

Does he keep trying to neuter it, or does he know that the narrative that "he keeps trying to neuter it" is an effective tool for engagement?

Can you explain how the superhuman AIs will prevent themselves from being physically disconnected from power? Or from being bombed, if the situation became dire enough? You need to show how they would manipulate the physical world to prevent humans from shutting them down. "Definitionally" is not an argument.

It is quite possible for software to be judged as superhuman at many online tasks without it being able to manipulate the physical world at a superhuman level. So far we've seen zero evidence that any of these models can prevent themselves from being shut down.

> Can you explain how the superhuman AIs will prevent themselves from being physically disconnected from power?

Three of the common suggestions in this area are (and they are neither exhaustive nor mutually exclusive):

(1) Propagandizing people to oppose doing this,

(2) Exploiting other systems to distribute itself so that it isn't dependent on a particular well-known facility which it is relatively easy to disconnect, and

(3) If given control of physical capabilities intentionally, or able to gain such control by exploiting other systems (possibly not themselves designed as AI) that have it, using that control either to physically prevent disconnection or to engineer consequences for disconnection that raise its price too high.

(Obviously, current AI can't do any of these, at least not demonstrably, but current AI is not superhuman AI.)

[deleted]

This is a great point for the comparisons it invites. But it doesn't seem relevant to the questions around what is possible with electromechanical systems.

This is true. The entire machine of neoliberal capitalism, governments and corporations included, is a paperclip maximizer that is destroying the planet. The only problem is that the paperclips are named "profits" and the people who could pull the plug are the ones who get those profits.

Not all corporations are Google.

I didn't say all corporations are Google; I said that Google is only AGI in the sense that all corporations are, which is a very different statement.