I hear this comment a lot and I don't get it. Let's say AGI exists but it costs $100/hr to operate and it has the intelligence of a good PhD student. Does that suddenly mean the economy breaks down, or will the goalposts shift so that AGI also has to be "economical" and PhD level isn't good enough? I still haven't heard a clear definition of AGI, which makes me skeptical of claims that it will break the world.

This is what OpenAI themselves believe the risk is:

> By "defeat," I don't mean "subtly manipulate us" or "make us less informed" or something like that - I mean a literal "defeat" in the sense that we could all be killed, enslaved or forcibly contained.

Linked from https://openai.com/index/planning-for-agi-and-beyond/

It won't break the world, but it's a safe bet that it will break the world of people who do labor and get paid for it. And when you think about it, even being a mediocre (or downright moronic) investor is practicing a form of labor, so not even capital ownership is safe in the long run. And yes, generational wealth is a thing, but there are tides that slowly shift wealth from A to B (e.g. from the USA to China). Give a machine that's smart enough even a sliver of motivation (intrinsic or extrinsic) to acquire some wealth for itself, and just watch what happens...