For all the advancement in machine learning that's happened in just the decade I've been doing it, this whole AGI debate's been remarkably stagnant, with the same factions making essentially the same handwavey arguments. "Superintelligence is inherently impossible to predict and control and might act like a corporation, and therefore it'll kill us all." "No, intelligence could correlate with value systems we find familiar and palatable, and therefore it'll turn out great."
Meanwhile, people keep predicting this thing they clearly haven't had a meaningfully novel thought about since the early 2000s, and that's generous given how many of those ideas are essentially distillations of 20th-century sci-fi. What I've learned is that everyone thinking about this sucks at predicting the future, and that I'm bored of the pseudointellectual exercise of debating sci-fi outcomes instead of doing the actual work of building useful tools or ethical policy. I'm sure many of the people involved do some of those things, but what gets aired out in public sounds like an incredibly repetitive argument about fanfiction.
Hinton came out with a new idea recently. He's been a bit in the doomer camp, but he's now talking about a mother-baby relationship in which a superintelligent AI wants to look after us: https://www.forbes.com/sites/ronschmelzer/2025/08/12/geoff-h...
I agree, though, that much of the debate suffers from the same problem as much of philosophy: a group of people just talking about stuff doesn't make much progress.
Historically, much progress has come through experimenting with stuff. I'm encouraged that the LLMs so far seem quite easygoing and don't want to kill everyone.
I don't think (from a sci-fi reader's perspective) this is a particularly new idea, and with all due respect to Hinton as a machine learning pioneer, my disinterest in this topic is not going to be alleviated by invoking names that can claim expertise. Stories about ASI are essentially the same as stories about advanced alien civilizations or gods. They act as a repository for the hopes, fears, and general expectations one has around being dominated and ruled by something more powerful than we can imagine defeating. Telling these kinds of stories can do a lot to examine our relationships to power, how we've come to expect it to behave, and what it's done to us and for us. But they're not newsworthy, meaningful predictions of the future, and they never contain good advice for what one should actually do, so it's weird to keep treating them as such.
It's hard for people to say "we don't know". And you don't get a lot of clicks on that either.