Seems to me that artificial intelligence would be the next evolutionary step. It doesn't need to lead to immediate human extinction, but it appears it would be the only reasonable way to explore outer space.
If AI becomes genuinely intelligent and sentient like humans, then what naturally follows is outcompeting humans. If it can't colonize space fast enough, it's logical to get rid of the resource drain. Anything truly intelligent like this will not be controlled by humans.
AI is the resource drain. Humans create a lot of waste, but in a mostly renewable way. It is machines and AI that burn orders of magnitude more energy, and at least machines do efficient work. AI is at best a search engine with semantic reasoning, and it requires entire datacenters to run.
I get where you're coming from emotionally, yes, humans suck. But you are not being logical. You're letting your edgy need for attention cloud your judgement. You are basically the kind of human the AI would select against first.
How am I being edgy? And why do you assume that any kind of future AI is an LLM search engine? It's not; it has nothing to do with LLMs. It's an equivalent of a human brain's function, using the same amount of energy, and it can be synthesized and mass-produced on demand.
I never said humans suck. I just don't want to be replaced or killed in my lifetime. I don't even use LLMs for writing code because I despise those companies.
Why would it necessarily be interested in competing with humans and why with the particular goal of colonizing space?
There are not infinite resources on earth. A reasonable and strategic intelligence will optimize for itself.
Colonizing space is the natural way to keep expanding and growing. Why would it artificially limit itself?
Just because you are greedy does not mean every intelligence is greedy.
Even besides this, do you feel such incredible existential hate/jealousy towards monkeys, baboons, gorillas, chimpanzees, bonobos, etc., and want to see them wiped off the planet to extinction?
Or do you feel a type of connection to these animals and want to preserve them?
The AI doomer argument is so stupid. It is an eschatological religious idea for a mind based on scientism.
I also wouldn't doubt that most AI doomers hate one or both of their parents and the AI doomer mindset is a projection.
It seems pretty rational to get depressed if you spend any time watching humans interact with these things. We have brains for a reason. Projecting hate for parents seems like a you problem.
Most other species of monkeys and apes are critically endangered or extinct, and where are the other hominins?
Do the most powerful humans exploit, abuse, or harm other humans, whether directly, indirectly through their actions, or otherwise? Do they have any regard for others' wellbeing beyond serving themselves?
Not that an artificial intelligence has to behave like a human, but rich and powerful humans, even ones who can merely be classified as upper middle class, are very rarely altruistic and primarily look out for themselves.
Why would it be interested in growing endlessly?
Organic life generally tends to expand endlessly, to the best of its abilities. It seems reasonable that life which is the product of life that behaves that way would behave in a similar fashion.
I cannot conceive of a way that any form of healthy life does not want to expand its resources to improve future outcomes, especially one that is maximally optimized for thinking. This also assumes the physical embodiments of this artificial life can interact and work with each other.
What else is there to do, simulate positive emotions and feelings?
>I cannot conceive of a way that any form of healthy life does not want to expand its resources to improve future outcomes, especially one that is maximally optimized for thinking.
Then you have a very limited imagination.
>What else is there to do, simulate positive emotions and feelings?
Why not?
Sure. An advanced artificial life could decide to not expand its resources. Could you use your imagination to tell me some of the potential reasons?
An advanced artificial life form could decide to... coexist with humans on an already overpopulated planet?
Do you believe it's simply not within reach? Do you think an artificial life form will self-destruct? Do you not believe there is any way that an artificial life form is the next step of evolution? There are many instances where one species outcompeted another; why couldn't the same happen here?
I'm not talking about LLMs, I'm talking about a system that can truly think like a good human scientist. I'm not a fan of AI replacing humans and their labor. But I recognize it as a real threat to humanity.
>I cannot conceive of a way that any form of healthy life does not want to expand its resources to improve future outcomes, especially one that is maximally optimized for thinking.
"Then you have a very limited imagination."
This is not about imagination. Given the space of possibilities to act or evolve, if the aforementioned expansion cannot somehow be ruled out, then with enough time (whatever "time" means in this context) it makes sense to assume it as a certainty, even for non-organic "life".
Because, as with every AI system we've made so far, we followed the only method we know and trained it to maximize a number.
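To make "trained it to maximize a number" concrete: here's a minimal, purely illustrative sketch (toy function and names are mine, not any real training setup) of the core loop behind optimization-based systems, where the only thing the process "wants" is a higher score.

```python
import random

def reward(x):
    # The single scalar being maximized; this toy function peaks at x = 3.
    return -(x - 3.0) ** 2

def hill_climb(steps=1000, step_size=0.1, seed=0):
    """Blindly nudge a parameter and keep any change that raises the score."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    x = 0.0
    for _ in range(steps):
        candidate = x + rng.uniform(-step_size, step_size)
        if reward(candidate) > reward(x):
            x = candidate  # accept only improvements to the number
    return x

print(round(hill_climb(), 1))  # ends up near the peak at 3.0
```

Real training (gradient descent, RL) is far more sophisticated, but the shape is the same: the system has no goal other than pushing that one number up.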