One thing I don’t understand about this viewpoint (which I understand isn’t your own): why does one benefit so tremendously from getting there a month before competitors? I’m sure having a month of superintelligence with no competition would be lucrative, but do they think achieving superintelligence first will impede competitors from also achieving it a month later?
A week of superintelligence should be enough to take over the world, or at least sabotage your competitors. And even if someone else gets there a week later, they'll be permanently one week behind the curve (until the AI hits some physical limit, I suppose).
But that's all just sci-fi worldbuilding.
>they'll be permanently one week behind the curve
What if the competitor's architecture can produce tokens twice as fast? What if the competitor secures a one-month exclusivity deal on Nvidia's next generation?
It's a tenet of the eschatology of the singularity ideology developed on online forums over the last few decades.
The viewpoint is baked into those assumptions and boils down to the power of exponentials and a poor application of game theory.
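The "power of exponentials" part can be sketched as a toy back-of-envelope (hedged: the growth rate and the one-week head start below are invented numbers, not estimates). If both labs improve by the same multiplier each week, the leader's head start keeps the capability ratio constant but widens the absolute gap forever; the usual singularity argument then adds that the leader spends that gap actively slowing the follower down.

    # Toy model of a compounding lead (all numbers invented for illustration).
    leader, follower = 1.0, 1.0
    growth_per_week = 1.5      # assumed weekly self-improvement multiplier
    lead_weeks = 1             # leader reaches takeoff one week earlier

    leader *= growth_per_week ** lead_weeks   # the head start

    for week in range(52):                    # a year of parallel growth
        leader *= growth_per_week
        follower *= growth_per_week

    # The ratio stays fixed (1.5x) while the absolute gap keeps widening;
    # the stronger claim in this thread is that the leader also uses that
    # gap to sabotage the follower, widening the ratio as well.
    print(f"ratio: {leader / follower:.2f}, gap: {leader - follower:.3g}")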
A month with a superintelligence in your hands could be quite impactful, especially if you're willing to break the law / normal operating decorum in the pursuit of protecting what you have. A superintelligence, if wielded so, could destroy your competitors in a great many ways, ranging from the relatively benign option of outcompeting them to exploiting them and tearing them apart from the inside.
A genuine superintelligence is a very, very scary thing to have under the control of one person or organisation.
If I interpret "a machine superintelligence" as "a classroom of 300-IQ humans," I'm not really sure how this is true. You still have material and energy constraints; you can't think your way out of those.
For the concrete problem we're discussing, you can hack your competitors out of existence, replace all of your knowledge workers to shed costs, hyperoptimise your logistics, etc. It's not just intelligence, it's speed and scale.
Bostrom's Superintelligence (2014) is a bit of a dreary read, and I didn't finish it, but it pulls no punches about the leverage that a superintelligence might have in our highly-connected world.
> For the concrete problem we're discussing, you can hack your competitors out of existence, replace all of your knowledge workers to shed costs, hyperoptimise your logistics, etc. It's not just intelligence, it's speed and scale.
For the concrete problem we're discussing, that hypothetical belongs in a Marvel movie, not reality. In the real world, you can't 'hack your competitors out of existence', and you'll be going to prison very quickly for trying this sort of thing.
I did say
> especially if you're willing to break the law / normal operating decorum
in my original post. If you have a superintelligence, you have something that can find and take advantage of every exploitation vector in parallel - technical, social, bureaucratic - and use that to destroy a company from the inside. A superintelligence that is subservient to its operator is an informational superweapon.
I agree that this sounds fanciful, but you can see what existing cyberattacks can do to organisations; it does not take that much imagination to gauge how much worse it could be when the process can be automated and scaled.
> A superintelligence that is subservient to its operator is an informational superweapon.
The five dollar wrench attack will put an end to that operator's use of an informational superweapon.
> I agree that this sounds fanciful, but you can see what existing cyberattacks can do to organisations
What can it do? Generally, a minor disruption to operations.
It consistently does a lot less than what law enforcement can do to you if you start messing with other rich people's money, while having enough of a presence to own a superintelligence and a trillion-dollar data center.
Within a day - well before any legal or societal force could intervene - a superintelligence could make its way into every part of an organisation's internal network and tear it apart from the inside.
Conventional hackers are limited by the serial nature of their work - finding breaches, exploiting them, conducting further exploration of the network, trying not to get detected - in ways that a superintelligence would not be. The latter could be a hundred times as effective, a hundred times as fast, and a hundred times more parallel.
I agree that this is unlikely to happen because the societal bill would come due in time, but my point is that a month's lead is enough to do significant and lasting damage.
Assuming it can't super-hack all computer systems and cripple competing SI incubation to at least extend its lead time indefinitely.
The assumption would be that, in the lead time it has, the superintelligence at least takes a small lead and undermines any paths a later-arriving superintelligence could take to interfere with its goals, which naturally includes stopping competing SIs from becoming powerful enough to undermine it.
So, assuming the superintelligence has goals and works towards them, it will initially try to solidify its own power. Iterating on that small lead, assuming it's the smartest superintelligence [1], should be enough to win. The scary part is that, assuming no guardrails [2], it's going to be as ruthless as possible in achieving those goals. That does not necessarily mean it will appear ruthless, just as ruthless as it judges optimal.
1. Being so smart, one of its chores would have been reinvesting in making itself smarter than the competition, and being smarter than its makers, it has a good chance of actually carrying out those self-improvements.
2. In the "internal balancing of goals" sense, not the "don't feed the mogwai after midnight" sense.