The risk around AGI isn't the AI itself but the social ramifications surrounding it. If (A) the powerful hold the value that to live you must work (and specifically that the work must be valued by the market), and (B) AGI and robotics can do all the work for us, then the obvious implication is that the powerful will deem those who, by circumstance, cannot find work unworthy of obtaining the means to live.

People don't die because of AGI itself; they die because of the consequences of AGI in the context of market worship.

imho the biggest risk isn't some hypothetical world in which zero jobs exist or in which Skynet kills us all, it's the very real and very present world in which people delegate more and more mental tasks to machines, to the point of becoming mere interfaces between LLMs and other computer systems. Choosing your kid's name with an LLM, choosing your next vacation destination with an LLM, writing your grandma's birthday card with an LLM. It's just so pathetic and sad.

And yes, you'll tell me that books, calculators, computers, and the web were already enabling this to some extent, and I agree, but I see no reason to cheer for even more of that shit spreading into every nook and cranny of our daily lives.