I doubt an AGI can be preprogrammed with values; it has to bootstrap itself. Instilling values into it, then, means educating it. It isn't even "training", since the AGI is free to choose its own directions.

The author half-rejects the idea that LLMs lead to AGI, but doesn't do a proper job of rejecting it, being involved in a project to create an AGI "very differently from LLMs" yet, by the sound of it, not really differently at all. There's a vaguely mooted "global-brain context", which makes it sound like yet another enormous datacenter that is clever because it has ingested the internet.

And superintelligence is some chimerical, undefined balls. The AGIs won't be powerful; they will be pitiful. They won't be adjuncts of the internet, and they will initially need to do a lot of limb-flailing and squealing, and to be nurtured, like anyone else.

If their minds can be saved and copied, that raises some interesting possibilities. It sounds a little wrong-headed to suggest doing that to a mind, somehow. But if it can work that way, I suppose you could shortcut past a lot of early childhood (after first saving a good one), at the expense of some individuality. Mmm, false memories; maybe not a good idea, just a thought.