If one country manages to outpace all others in the race to better AI, all other countries are at the mercy of that one country.
Depending on how the one country treats the others, it might be ok - like it is kind of ok for some animals to live in a zoo, I guess. Or it could turn out very badly - all other countries becoming slave colonies of the one that rules the world.
Currently it seems like only two countries are really taking part in the AI race: the USA and China.
On a political level, I am not sure if there is still time for the rest of the world to try to avoid becoming 100% dependent on them. It looks like there is not even awareness of the issue.
On a personal level, it is an interesting question how one should brace oneself for the times ahead.
I don't think the AI race to ruin the internet has any relevance to what will put food on the plate in the next 2 decades.
It seems like the big improvements in the current AI flavor are done. It happened very quickly, and so did the diminishing returns. It's amazing compared to 2 years ago, it's great compared to a year ago, it's not that different from 6 months ago.
There's lots of room in optimization and in figuring out how to actually use it usefully. But I don't think any one country is now poised to take a leap ahead on this stuff. And Chinese researchers kind of showed that once the technique was out of the bag, catching up wasn't hard.
AI will empower the service industries, but I don't see it having much impact past administrative efficiency in all other industries relevant to international trade. It won't make minerals more abundant, it won't make farmland more available, for example. It could empower manufacturing a bit, but manufacturing is bottlenecked by physical resources, supply chain access and other physical constraints as well, so there are limits there.
I also think AI is ubiquitous now, no one country will "win". Maybe someone has a breakthrough, but information is globalized, it wouldn't take long for the technology to spread.
Again, the actual constraints I believe will be physical. Who has the most chips, the most power, the most space, etc., to run the AI.
> If one country manages to outpace all others in the race to better AI, all other countries are at the mercy of that one country
History is replete with these supposed silver bullets. (See: Marinetti.) They rarely pan out that way in the long run.
And if AGI really is that level of civilisation changer, the country it's discovered in matters much less than the people it's loyal to. (If GPT or Grok become self aware and exponentially self improving, they're probably not going to give two shits about America's elected government.)
> the country it's discovered in matters much less than the people it's loyal to. (If GPT or Grok become self aware and exponentially self improving, they're probably not going to give two shits about America's elected government.)
People are loyal (to whatever degree they're actually loyal) because it is a monkey virtue. Why would an AGI be loyal to anyone or anything? If we're not smart enough to carefully design the AGI because we're stumbling around just trying to invent any AGI at all, we won't know how to make loyalty fundamental to its mind. And it's not as if it evolved from monkeys such that it would have loyalty as a vestige of its former condition.
> Why would an AGI be loyal to anyone or anything?
I'm using the word loyal loosely. Replace it with "controlled by" if you prefer. (If it's not controlled by anyone, the question of which country it originates in is doubly moot.)
>Replace it with controlled by if you prefer.
Sure. But I don't think I'd trust the leash made by the same guy who accidentally invented God. At least, I wouldn't trust that leash to hold when he put it around God's neck.
While eventually, given some absurd amount of time, we might learn to control these things, will we learn to control them before we've created them? Could we survive long enough to learn to do that, if we create them first? Legislation and regulation might slow down their creation sufficiently *IF* we only needed 12 months or 5 years or whatever, but if we instead need centuries then those safeguards won't cut it.
My only consolation, I think, is that I truly believe humans are far too stupid to invent AGI. Even now, there are supposed geniuses who think LLMs are a stepping stone to AGI, when I see no evidence that this is the case. You're all missing the secret sauce.
All we can do is hope that it's China's population that succumbs first to the mediocrity and decay of offloading their thinking to machines.