Juniors become seniors.

If we replace all juniors with AI, in a few years there won't be skilled talent for senior positions.

AI assistance is a lot different from AI running the company and making expensive decisions. While it could progress to that, bear in mind that some seniors continue to move up the ranks. Will AI eventually be the CEO?

We all dislike how some CEOs behave, but will an AI really value life at all? A CEO at least needs somewhere to live, after all.

The AI will at least be cheaper than a CEO, and it might also be more competent and more ethical. The argument against making a Large Language Model the CEO seems to be mostly about protecting the feelings of the existing CEO; maybe the Board should look past these "feelings" and be bold?

I'll re-explain.

A human CEO might do morally questionable things. Not all of them do, of course, but some may.

Yet even so, they need a planet with air, water, and some way to survive. They may also want their kids to survive.

An AI may not care.

It could take "bad CEO" behaviour to a whole new level.

And even if the AI had human ethics, humans play "us vs them" games all the time. You don't get much more "them" than an entirely different lifeform.

The AI most certainly does not care, because it is a computer program. It also doesn't want to buy a boat.

It also doesn't care if the company goes bankrupt tomorrow without paying out its bonus.

Nah, the insistence that humans are somehow uniquely too smart to destroy themselves is obviously laughable. It's troubling that you could write that down without bursting into laughter, because it is so silly.

Ah, the classic cynical brooding response.

First, we're discussing what an AI might do, in terms like "no air", i.e. wholesale destruction.

So please do show when the human race has destroyed itself entirely. Oh wait, you're here to read this? I guess that has never, ever, ever happened. Ever. Because as long as one human exists, there has never been a case where humans were wiped out by, for example, having no air.

So the "obvious" aspect is not quite so clear. There's no evidence of it, merely conjecture.

Second, at no point did I say smart or not smart.

Instead, I discussed two viewpoints: the viewpoint of an AI, which does not require air, and the viewpoint of a human, who does care about air.

Get the difference?

You may want to dive into global warming, or pollution, or what not. These, however, are longer-term issues. Destruction today is far different from destruction in 100 years. Or 1000. For example, even if global warming predictions are accurate, or worse, there will still be humans somewhere for hundreds of years, without much change.

Some people might starve, the equator may be too hot, people will die, but there will still be places to live. The planet may go into runaway CO2, but that partially abates as more humans die. Fewer humans, less CO2.

Yet either way, it's long term, and no one can definitively predict the outcome.

Long term isn't typically part of most decision trees. There's a reason for that. When thinking long-term, you have to consider all the possible permutations, all the possible things that could happen, and those expand greatly and massively with time.
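
To put a rough number on that explosion, here's a toy sketch in Python (the three outcomes per year is an arbitrary assumption, purely for illustration): even a tiny amount of branching makes exhaustive lookahead hopeless well before "long term".

```python
# Toy illustration with a made-up branching factor: if each year of the
# future branches into just 3 possible outcomes, the number of distinct
# futures to evaluate is branching_factor ** horizon_years.
branching_factor = 3

for horizon_years in (1, 10, 100, 1000):
    futures = branching_factor ** horizon_years
    print(f"{horizon_years:>4} years out: a {len(str(futures))}-digit number of possible futures")
```

At a 1,000-year horizon that is already a 478-digit number of futures, which is exactly the paralysis described next.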

Any thinking being that considered all of its actions right now, in the moment, would become almost immediately paralyzed if it had to consider those actions extremely long-term. Each move, each action would come with a massive pause and hours, days, or weeks of thought. Do you consider how your next step will impact people 4,000 years in the future? A million? A billion?

What about eating toast for breakfast? How much consideration does the average entity put into consuming food for energy, while looking a billion years forward at the consequences of that act?

Beyond that, there is no accurate data about future outcomes with which to make a proper determination of what may happen 500, a thousand, a million, a billion years in the future. So all of these computational chains are realistically for naught. After all, the sun may expand sooner than predicted. The moon will likely move further from Earth, and the Earth's spin will slow down. How will what we do now affect everything in that future?

You may say, why don't we consider our actions, you know, just over the next hundred years? But now, suddenly, aren't you considering your actions over too short a time frame? Should you not consider what the human race, and what the Earth itself, will be like in a billion years? If you're not considering those things, then are you not depriving the entities and beings and living organisms a billion years from now of a healthy planet?

Where does it stop? Where does it begin? Where and how far in the future should you consider your actions, on a day-to-day basis?

More importantly, how much of what you do in a day should you consider with regards to the future? Which acts of yours are detrimental to the future? Do you know? Are you sure? Do you have any idea?

Obviously, some of the thoughts above are somewhat long term. Yet not thinking long term is why we got into this issue with global warming! And truthfully, if the complaint is that we're destroying the future planet for species that live here besides ourselves, then we really should be considering 10k, 50k, a million years in the future.

Anything else is only selfishly considering our own personal descendants for a couple of generations.

But let's take a step back. I'm not trying to say that we or anybody else can make these kinds of in-depth, long-horizon decisions, nor am I saying that we should not care about global warming. Obviously we should. We actually know it's a problem now. We knew in the 70s.

Instead, what I'm saying is that individuals are individuals, and excessively considering the long-term ramifications of all of your actions can be highly detrimental to the capacity to make any decision whatsoever. Imagine an AI that, every single time it made a decision, every time it even decided to compute something, every time it took an action in the real world, had to consider the ramifications one billion years hence.

Imagine the amount of processing power that would require. Now imagine the amount of energy or "food" needed. This is why a being cannot sit pondering all the possible future ramifications of even the tiniest or medium-scale act while a wolf leaps upon it. And this is why the average human being does not consider the ramifications. It's an evolutionary requirement.

And as I've suggested above, it is also going to be a requirement for AI. Certainly it can consider some of its acts, much like a human being can consider some of its acts, but that's not how things work on a day-to-day basis.

Human beings solve this by observing the consequences of many of our acts after the fact and, secondarily, by reviewing what's happening in the environment and the world around us as we make changes, then determining whether we should scale back or roll back what we're doing.

The same will be true of AI. If the same is not true of AI, AI will cause global warming merely by trying to stop it.

The sheer computational power required for an AI, and all the AIs making decisions, to optimally choose what is best one million years into the future? It would eat enormous amounts of energy, thus making global warming worse while trying to make it better!

Whether or not we should be putting more energy into considering these things, that doesn't mean it's possible for the average thinking entity to do so.