What is wrong with that statement? Human minds tend to develop certain kinds of value systems. Spider minds tend to develop other value systems. Every example of a mind architecture we have tends to develop its own characteristic values.

There's no indication that an AGI mind will adopt human-like values, nor that the smarter something gets, the more benevolent it becomes. The smartest humans built the atom bomb.

Not that human values are perfectly benevolent. We slaughter billions of animals per day.

If you take a look at the characteristics of today's LLMs, I don't think this is a trajectory we want to continue. We're still unable to ensure that the goals we want a system to have are actually the goals it has. Hallucinations are a perfect example: we want these systems to relay truthful information, but we've actually trained them to relay information that merely looks correct at first glance.

Thinking we won't make this mistake with AGI is ignorance.

You're attacking a strawman — that isn't what I, or OP, was saying.

The article follows with this:

> Mammals, which are more generally intelligent than reptiles or earthworms, also tend to have more compassion and warmth.

> There’s deep intertwining between intelligence and values

After reading your original comment again, I don't think you're even agreeing with the article — just with that specific out-of-context snippet.