There are not infinite resources on earth. A reasonable and strategic intelligence will optimize for itself.
Colonizing space is the natural way to keep expanding and growing. Why would it artificially limit itself?
Just because you are greedy does not mean every intelligence is greedy.
Even besides this, do you feel such incredible existential hate/jealousy towards monkeys, baboons, gorillas, chimpanzees, bonobos, etc., and want to see them wiped off the planet to extinction?
Or do you feel a type of connection to these animals and want to preserve them?
The AI doomer argument is so stupid. It is an eschatological religious idea for a mind based on scientism.
I also wouldn't doubt that most AI doomers hate one or both of their parents and the AI doomer mindset is a projection.
It seems pretty rational to get depressed if you spend any time watching humans interact with these things. We have brains for a reason. Projecting hate for parents seems like a you problem.
Most other species of monkeys and apes are critically endangered or extinct, and where are the other hominins?
Do the most powerful humans exploit, abuse, or harm other humans? Directly, indirectly through their actions, or otherwise. Do they have any regard for their wellbeing beyond serving themselves?
Not that an artificial intelligence has to behave like a human, but rich and powerful humans, even ones who can just be classified as upper middle class, are very rarely altruistic and primarily look out for themselves.
Why would it be interested in growing endlessly?
Generally, organic life has the tendency to want to expand endlessly to the best of its abilities. It seems reasonable that life which is the product of life that behaves that way would behave in a similar fashion.
I cannot conceive of a way that any form of healthy life does not want to expand its resources to improve future outcomes, especially one that is maximally optimized for thinking. This also assumes the physical embodiments of this artificial life can interact and work with each other.
What else is there to do, simulate positive emotions and feelings?
>I cannot conceive of a way that any form of healthy life does not want to expand its resources to improve future outcomes, especially one that is maximally optimized for thinking.
Then you have a very limited imagination.
>What else is there to do, simulate positive emotions and feelings?
Why not?
Sure. An advanced artificial life could decide to not expand its resources. Could you use your imagination to tell me some of the potential reasons?
An advanced artificial life form could decide to... coexist with humans on an already overpopulated planet?
Do you believe it's simply not within reach? Do you think an artificial life form will self-destruct? Do you not believe there is any way an artificial life form could be the next step of evolution? There are many instances of one species outcompeting another; why couldn't it be the same here?
I'm not talking about LLMs, I'm talking about a system that can truly think like a good human scientist. I'm not a fan of AI replacing humans and their labor. But I recognize it as a real threat to humanity.
>I cannot conceive of a way that any form of healthy life does not want to expand its resources to improve future outcomes, especially one that is maximally optimized for thinking.
"Then you have a very limited imagination."
This is not about imagination. Given the space of possible actions and evolutionary paths, if the mentioned expansion cannot somehow be ruled out, then it makes sense to assume it as a certainty given enough time (for whatever "time" can mean in this context), even for non-organic "life".
Because like with every AI system we've made so far, we followed the only method we know and trained it to maximize a number.
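The "maximize a number" point can be made concrete with a toy sketch (this example is mine, not from the thread): training in modern ML means adjusting parameters to push a single scalar objective as high (or a loss as low) as possible. Here one parameter is "trained" by gradient ascent on a made-up reward function; the system pursues the number and nothing else.

```python
# Toy reward maximization via gradient ascent (illustrative assumption:
# reward(x) = -(x - 3)^2, which peaks at x = 3).

def reward(x):
    return -(x - 3) ** 2

def reward_gradient(x):
    # Analytic derivative of the reward with respect to x.
    return -2 * (x - 3)

x = 0.0            # initial parameter
learning_rate = 0.1

for _ in range(100):
    # Step in the direction that increases the reward.
    x += learning_rate * reward_gradient(x)

print(x)  # converges toward 3, the reward-maximizing value
```

Whatever behavior happens to maximize the objective is what the optimizer drifts toward, which is the crux of the argument above.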