Glad to see someone is looking out for a forest, here. A diverse host of excuses has cropped up to explain away the anxiety AGI brings, and I totally understand why. Yet again, today we stare into the abyss.

  Sora 2 represents significant progress towards [AGI]. In keeping with OpenAI’s mission, it is important that humanity benefits from these models as they are developed.
This seems like a good time to remind ourselves of the original OpenAI charter: https://web.archive.org/web/20230714043611/https://openai.co...

I wonder how exactly they reconcile the quote above with "We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions"...

I am neither for nor against AGI, but why is there anxiety around it? Do people simply hear sales rhetoric and assume that it can exist and will be used to dominate their lives?

I'm not referencing sales rhetoric, I'm referencing scientific consensus. AGI will have the same kind of impact on our species as fire and electricity did. We stand at a crossroads between unimaginable success and enormous catastrophe...

Well, good luck with that, hopefully it will learn to spell blueberry.