If you want a first-principles approach, I recommend Rob Miles' videos on YouTube. He has been featured many times on the Computerphile channel, and has a channel of his own as well.

Most of the videos take the form of:

1. Presenting a possible problem that AIs might have (say, lying during training, or trying to stop you from changing their code)
2. Explaining why it's logical to expect those problems to arise naturally, without a malicious actor explicitly trying to get the AI to act badly
3. Going through the proposed safety measures we've come up with so far that could mitigate that problem
4. Showing the problems with each of those measures, and why they are wholly or at least partially ineffective

I find he's very good at presenting this in an approachable and intuitive way. He seldom makes those bombastic "everyone will die" claims directly, and instead focuses on just showing how hard it is to make an AI actually aligned with what you want it to do, and how hard it can be to fix that once it is sufficiently intelligent and out in the world.

I think all those are fair points, and Eliezer says much the same. But, again: none of this explains why any of those things, even happening at scale, would lead to the complete destruction of mankind. What you're describing is buggy software, which we already have.

Right, but so far we do not have buggy software that is more intelligent (and therefore more effective at accomplishing its goals) than humans are. The argument literally boils down to "superhuman effectiveness plus buggy goals equals very bad outcomes", and the badness scales with both effectiveness and bugginess.

> so far we do not have buggy software that is more intelligent (and therefore more effective at accomplishing its goals) than humans are.

Of course we do! In fact, most, if not all, software is more intelligent than humans, by some reasonable definition of intelligence [1] (you could also contrive a definition of intelligence for which this is not true, but I think that's getting too far into semantics). The Windows calculator app is more intelligent and faster at multiplying large numbers together [2] than any human. JP Morgan Chase's existing internal accounting software is more intelligent and faster than any human at moving money around; so much so that it did, in any way that matters, replace human laborers in the past. Most software we build is more intelligent and faster than humans at accomplishing the goal it was built to accomplish. Otherwise, why would we build it?

[1] Rob Miles uses roughly this definition of intelligence: if an agent is defined as an entity making decisions toward some goal, intelligence is the capability of that agent to make correct decisions such that the goal is most effectively optimized. The Windows Calculator app makes decisions (branches, MUL ops, etc.) in pursuit of its goal (to multiply two numbers together); oftentimes quite effectively, and thus with very high domain-limited intelligence [2] (possibly even more effectively, and thus more intelligently, than LLMs). A buggy, less intelligent calculator might make the wrong decisions on this path (oops, we did an ADD instead of a MUL); there's a toy sketch of that below the footnotes.

[2] What both Altman and Yudkowsky might argue is a critical differentiation here is that traditional software systems naturally limit their intelligence to a particular domain, whereas LLMs are Generally Intelligent. The discussion approaches the metaphysical when you start asking questions like the following. The Windows Calculator can absolutely, undeniably, multiply two numbers together better than ChatGPT, and by a reasonable definition of intelligence, that makes the Windows Calculator more intelligent than ChatGPT at multiplying two numbers together. It's definitely inaccurate to say that the Windows Calculator is more intelligent, generally, than ChatGPT. But is it not also inaccurate to say that ChatGPT is generally more intelligent than the Windows Calculator? After all, we have a clear, well-defined domain of intelligence along which the Windows Calculator outperforms ChatGPT. I don't know. It gets weird.
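To make the decision-making point in [1] concrete, here's a toy sketch (Python, purely illustrative; this obviously isn't how the actual Windows Calculator is written):

    # A toy "agent" whose goal is to multiply two numbers. Its "intelligence",
    # in Miles' sense, is just whether it makes the right decisions (here, the
    # right operation) in pursuit of that goal.

    def intelligent_calculator(a: int, b: int) -> int:
        return a * b  # correct decision on the path to the goal: a MUL

    def buggy_calculator(a: int, b: int) -> int:
        return a + b  # wrong decision: an ADD instead of a MUL

    print(intelligent_calculator(12, 34))  # 408 -> goal optimized
    print(buggy_calculator(12, 34))        # 46  -> same goal, worse decisions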

Of course, there are different domains of intelligence, and agent A can be more intelligent in domain X while agent B is more intelligent in domain Y.

If you want to make some comparison of general intelligence, you have to start thinking in terms of some weighted average over all possible domains.
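Written out, that would look something like the following, where the set of domains D and the weights w_d are entirely made up here, and picking them is exactly the hard, hand-wavy part:

    G(A) = \sum_{d \in D} w_d \cdot \text{score}_d(A)

Here score_d(A) is how effectively agent A optimizes goals in domain d. The calculator puts nearly everything on a single d; ChatGPT spreads a nonzero amount over a huge number of them.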

One possible shortcut here is the meta-domain of tool use. ChatGPT could theoretically make more use of a calculator (say, by always calling a calculator API when it wants to do math, instead of trying to do the arithmetic itself) than a calculator can make use of ChatGPT, and that makes ChatGPT by definition smarter than a calculator, because it can achieve the same goals the calculator can just by using it, and more.
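As a sketch of what that delegation looks like (the names and the dispatch logic are made up for illustration; real function-calling APIs are more involved, but this is the shape of the idea):

    import re

    def calculator_tool(a: int, b: int) -> int:
        # The narrow but extremely reliable domain expert.
        return a * b

    def agent_answer(question: str) -> str:
        # The "general" agent: it recognizes when a sub-goal belongs to the
        # calculator's domain and delegates instead of guessing itself.
        match = re.search(r"(\d+)\s*[*x]\s*(\d+)", question)
        if match:
            a, b = (int(g) for g in match.groups())
            return str(calculator_tool(a, b))
        return "No arithmetic here, I'll answer it myself."

    print(agent_answer("What is 1234 * 5678?"))  # -> 7006652, via the tool

The calculator, of course, has no way of making the reverse call.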

That's really most of humans' intelligence edge for now: it seems like, more and more, for any given skill there's a machine or a program that can do it better than any human ever could. Where humans excel is in our ability to employ those superhuman tools in aid of achieving regular human goals. So when some AI system gets superhumanly good at using tools that are better than itself in particular domains, in pursuit of its own goals, I think that's when things are going to get really weird.