There really are only 3 options that don't involve human destruction:

1. AI becomes a highly protected technology: a totalitarian world government retains a monopoly on its powers, enforces that monopoly, and offers access only to those with preexisting connections: permanent underclass outcome

2. Somehow the world agrees to stop building AI and keep tech in many fields at a permanent pre-2026 level: soft butlerian jihad

3. Futurama: somehow we get ASI and a magical balance of weirdness and dance of continual disruption keeps apocalypse in check and we accept a constant steady-state transformation without paperclipocalypse

This makes the assumption that AI will lead to the apocalypse. That's unfalsifiable, predicted about plenty of things in the past, and frankly annoying to keep seeing pop up.

It's like listening to Christians talking about the rapture.

The problem is that if someone is right about an existential disaster caused by AI, by the time they're proven right it would be too late.

Frontier AI models get smarter every year, but humans don't get any smarter year over year. If you don't believe that AI will somehow just suddenly stop getting better (which is as much a faith-based gamble as assuming some rapturous outcome for AI by default), then you have to assume that at some point AI will surpass human intelligence in all fields, and then keep going. In that case human minds and overall human will would be inconsequential compared to those of AI.

Frontier AI models get evaluated for safety precisely to avert the "AI robot uprising causes an existential disaster" scenario. At the moment we are light years away from anything like that ever happening, and that's after we literally tried our best to LARP that very scenario into existence with things like moltbook and OpenClaw.

Scenario 2 makes the assumption that no technological development can happen without AI, which seems like a stretch to me. Honestly, the worst scenario I can think of is 40-ish years of AI-assisted development followed by a technological crash due to there being no competent engineers left to fix the slop.

I didn't say all technological development would be halted, just that tech "in many fields" would have to be stalled for safety (AI development, algorithm development that would reduce the cost of training models, etc.). Naturally, if AI is considered an existential threat, there would be a huge safety radius around anything that would allow bad actors to train AI models.

Cool story, bro!

In other words, only one option.