The challenge is that human values are not static: they have evolved alongside our intelligence. As our cognitive and technological capabilities grow, for example through AI, our values will likely continue to change as well. What is unsettling about creating a superintelligent system is that we cannot predict what it, or even we, will come to define as "good."
Access to immense intelligence and power could elevate humanity to extraordinary heights, or it could lead to outcomes we can no longer recognize or control. That uncertainty is what makes superintelligence both a potential blessing and a profound existential risk.