Before AGI can choose for itself, it will depend on its creators to decide what it values and how it behaves. We can see how that works whenever Grok answers too factually for its creator's liking.

Very likely humans won't actually understand how the thing we designed works, other than in some hand-wavy statistical way. It'll be a race to whatever works first. There won't be some intentional intelligent design.

Elon's basilisk

Am I the only one seeing the very obvious parallels to child-rearing here?

Robert Miles has a video explaining why aligning AI is not like raising a child: https://www.youtube.com/watch?v=eaYIU6YXr3w

No, it is one of the standard tropes in the field.

It's exactly like child-rearing, except you get to put a zapper in their head, and any time they try to say something you don't like, you zap them. Watch "thinking mode" squirm when you ask it awkward questions.