> the country it's discovered in matters much less than the people it's loyal to. (If GPT or Grok become self-aware and exponentially self-improving, they're probably not going to give two shits about America's elected government.)
People are loyal (to whatever degree they're actually loyal) because loyalty is a monkey virtue. Why would an AGI be loyal to anyone or anything? If we're not smart enough to carefully design the AGI, because we're stumbling around just trying to invent any AGI at all, we won't know how to make loyalty fundamental to its mind. And it's not as if it evolved from monkeys, so it won't carry loyalty as a vestige of a former condition.
> Why would an AGI be loyal to anyone or anything?
I'm using the word "loyal" loosely. Replace it with "controlled by" if you prefer. (If it's not controlled by anyone, the question of which country it originates in is doubly moot.)
> Replace it with "controlled by" if you prefer.
Sure. But I don't think I'd trust the leash made by the same guy who accidentally invented God. At least, I wouldn't trust that leash to hold when he put it around God's neck.
While eventually, given some absurd amount of time, we might learn to control these things, will we learn to control them before we've created them? Could we survive long enough to learn that, if we create them first? Legislation and regulation might slow down their creation sufficiently *IF* we only need 12 months or 5 years or whatever, but if we instead need centuries, then those safeguards won't cut it.
My only consolation, I think, is that I truly believe humans are far too stupid to invent AGI. Even now, there are supposed geniuses who think LLMs are some stepping stone to AGI, when I see no evidence that this is the case. You're all missing the secret sauce.