If the scary AI is so inevitable, why do you feel such an overwhelming need to convince people about that? Surely you can just wait a bit, and they'll see for themselves.

By that reasoning, why even warn people about anything? Why do road construction crews put up signs saying "ROAD CLOSED AHEAD" when you can just drive on and see for yourself?

Indeed, why warn people about real things that exist in the world? That is EXACTLY the same as inciting fear about something imaginary (not even projected).

In your mind, dangers from AI are imaginary, not even projected; therefore you see no reason to warn about them, because you don't think the dangers are real. You don't believe the road is actually closed up ahead, so you see no need to post the sign.

In Scott's mind, dangers from AI are not a known fact, but are somewhere between highly probable and a near-certainty. In his mind, there are well-grounded justifications for believing that AI poses substantial future dangers to the public. Therefore he also believes he should inform people about this, and strives to convince skeptics, so that we might steer clear.

It's easy to understand why someone who believes what you believe about AI would not warn people about it. It's equally easy to understand why someone who believes what Scott believes would want to warn people. Your real contention is with his grounds for being worried about AI, not with his reasons for wanting to warn people.

Gosh, it's quite embarrassing to have to spell it out, but you inserted the part about Scott's motivations. It can't be found in the text.

Neither can any specific discussion of what the dangers are or how we can steer clear. It all comes pre-planted in your head. The only thing Scott is playing on (as far as we can see) is your ingrained fear, using an ominous headline and a vague reference to something "scary" in the conclusion.

Of course there was no reason to "warn" you; you already believed in the scary future. Scott is just giving you fuel, which you seem to appreciate.

Is this the first essay of Scott's that you've read?

The first and the last. It doesn't come as a surprise that his essays are regularly submitted to HN, often multiple times, and rarely get more than 5 upvotes.

>as far as we can see

If only there were a way to see more of Scott's thoughts on the subject of AI...

Sorry, there are many better things to read than Scott's thoughts.

[deleted]

Yeah! And if climate change is so inevitable, why do the people who want to prevent it from happening seem hell-bent on convincing people that climate change is real?

[deleted]

1. It's not inevitable. 2. Those who see AI as an existential risk don't generally think it's a guarantee, but if it's, say, a 5% chance, then that's worth addressing and mitigating. 3. That's not what this article was even about.

Sounds like the burden is on you to explain either:

  1. If you're not treating my claim as a black box, explain explicitly: what is your model of what the article was about? Are you aware, for example, of the last paragraph of the article? I think that WAS what the article was about. Do you have specific opinions on, e.g., how I went wrong and where my model differs?
  2. If you are treating it as a black box, what's your default expectation based on the law of Nothing Ever Happens?
Just kidding, you don't need to explain anything. A"I" fearmongers should, though.

The point of the article is that people are historically bad at predicting when exponential curves plateau, even if they're correct that there will be a plateau.

This does *not* imply the inevitability of AGI. It does not imply AGI is necessarily bad.

It does mean that "the capabilities of AI will eventually plateau" offers no meaningful predictive power or relevance to the overall AI discussion.
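To make that concrete, here's a rough sketch of my own (not something from the article): generate data from a logistic curve that plateaus at 100, then fit a plain exponential to just the early points. The specific numbers (growth rate, plateau, cutoff) are arbitrary; the point is that the early data fits both models almost equally well, so the data alone can't tell you when the plateau comes until you're already in it.

    import numpy as np

    # "True" growth: a logistic curve that plateaus at 100 (numbers are arbitrary).
    t = np.arange(0, 40)
    true = 100 / (1 + np.exp(-0.3 * (t - 25)))

    # Fit a plain exponential using only the first 15 points (pre-plateau data),
    # via a log-linear least-squares fit.
    early = slice(0, 15)
    slope, intercept = np.polyfit(t[early], np.log(true[early]), 1)
    exp_fit = np.exp(intercept + slope * t)

    # On the early data the two curves are nearly indistinguishable...
    print(np.max(np.abs(true[early] - exp_fit[early])))  # small residual
    # ...but the extrapolations diverge wildly: the logistic levels off near 100,
    # while the exponential keeps compounding into the thousands.
    print(true[-1], exp_fit[-1])

Until the bend actually shows up in the data, "it will plateau eventually" doesn't constrain the forecast at all.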