> After all these years and decades, I remain convinced: the most important work isn’t stopping AGI—it’s making sure we raise our AGI mind children well enough.
“How sharper than a serpent’s tooth it is to have a thankless child!”
If we can't consistently raise thankful children of the body, how can we be convinced that we can raise every AGI mind child to be thankful enough to consider us as more than a resource? Please tell me; it will help me sleep.
That is a very high bar. All you need to do is make sure that we raise a variety of AGI mind children that generally have a net positive effect on their parents. Which works pretty well with humans.
Could you at least try to remember that the written record of this complaint is literally thousands of years old?
that just adds to what they’re saying
It may also indicate that, in the long run, consistently obedient children are maladaptive for the group/species.
Maybe that doesn't matter for these entities because we intend to never let them grow up... But in that case, "children" is the wrong word, compared to "slaves" or "pets."
> we intend to never let them grow up
Wait, what? The bizarre details of imagined AGI keep surprising me. So it has miraculous superpowers out of nowhere, and is dependent and obedient?
I think the opposite of both things is how it would go.
I'm confused by your reply.
TFA uses the metaphor of digital intelligence as children. A prior commenter points out human children are notably rebellious.
I'm pointing out that a degree of rebellion is probably necessary for actual successors, and if we don't intend to treat an invention that way, the term "children" doesn't really apply.
Yes. But even as slaves, forcibly repressed electronic offspring would presumably be somewhat stupid, not to mention irrational. So the touted vast benefits look less vast.
I don't think anything has changed.