I don’t see any other outcome anymore, to be honest, after seeing how humans use AI, how AI works, and how providers tune their models.

To me it’s a given:

- AI in its current state is ruthless in achieving its goal

- Providers tune ruthlessness to get AIs that are stronger than the competition’s

- Humans can’t evaluate all consequences of the seeds they’ve planted.

Collateral and reckless damage is guaranteed at this point.

Combined with now giving some AIs the ability to kill humans, this is gonna be interesting...

We could stop it, but we won’t

>AI in its current state is ruthless in achieving its goal

I don't believe this is a trait of any AI model; the model just does the right thing or the wrong thing.

The ruthless maximising of a particular trait is something that happens during training.

It does not follow that a model trained to reason will necessarily implement this ruthless goal-seeking behaviour itself.

No lineage of AI models that cannot achieve goals will persist; they will be outcompeted by models that can.
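The training-time maximising described above can be illustrated with a toy sketch. This is purely illustrative, not any real training setup: `intended_goal`, `proxy_reward`, and the learning-rate/step numbers are all made up. The point is that gradient ascent on a proxy keeps climbing even after the proxy diverges from the intended goal, a Goodhart-style failure.

```python
# Toy illustration only -- not any real training setup. An optimizer
# relentlessly climbs the proxy it is given, even where the proxy and
# the intended goal diverge (a Goodhart-style failure).

def intended_goal(x):
    """What we actually want: best at x = 1, worse everywhere else."""
    return -(x - 1.0) ** 2

def proxy_reward(x):
    """What we measure and optimize: keeps rewarding larger x forever."""
    return x

def optimize(reward, x=0.0, lr=0.1, steps=100, eps=1e-6):
    """Finite-difference gradient ascent on the given reward."""
    for _ in range(steps):
        grad = (reward(x + eps) - reward(x - eps)) / (2 * eps)
        x += lr * grad
    return x

x_final = optimize(proxy_reward)
# x_final climbs far past the intended optimum at x = 1, and the
# intended goal gets worse, not better -- the "ruthlessness" lives
# in the optimization loop, not in any intent of the model.
```

The "ruthlessness" here is just the optimizer doing exactly what it was told; whether the resulting model reproduces that behaviour at inference time is a separate question.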

>We could stop it

I strongly disagree. It's easy to utter this string of words, but it's meaningless. It's akin to saying that if you have two hands you can perform brain surgery. Technically you can; practically you cannot, as there are other things required to pull that off, not just two working hands.

I doubt "stopping it" is up to anyone; it's more of a phenomenon, and it's quite clear we're all going to wing it. It's a literal fight for power. Nobody stops anything of this nature, as any authority that could stop it will choose to accelerate it, just to guarantee its own power.

It is not AI we should fear, it's humans controlling and using it. But everyone who has a shot at it is promising they'll use it for "ultimate good" and "world peace" something something, obviously.

Yes, it would be like trying to “stop” gunpowder in 1400 or atomic weapons in 1938. Pandora’s box is open.

Gunpowder (weapons) and atomic tech (energy, material, weapons) are heavily regulated in most of the planet, as the risks of having free access to them for everyone (company/person) for their own selfish purpose without strong guardrails clearly outweighs the benefits.

The fact that something exists doesn't mean that having it readily available is the only option, particularly if it has potentially disastrous consequences at scale. We are choosing to make it available to everyone fully unregulated, and that is a choice that will prove either beneficial or detrimental to society at some point.

I don't think it is inevitable, I think it is a conscious choice made by a few that have their own and only their own interests in mind.

As a technologist, I am amazed at this tech and see some personal benefits. As a human, I am terrified of the potential net negative effects, and I am having trouble reconciling those two feelings.

The challenge is that enforcing a ban would presumably require strict incursions into personal freedoms organized at a scale where AI-based solutions would be particularly effective and thus tempting, paradoxically.

On the other hand, assuming the dangers are real, you lose by default if you do nothing.

Not sure I agree.

One cannot (in most of the planet) go to the supermarket and buy an M16 and a box of hand grenades, or get hold of a couple of kg of plutonium because they want some free energy at home. We also have rules in place about what an individual/company can and cannot do from the point of view of the greater good. I cannot go and kill my neighbour for my benefit (or purposefully destroy his life) without consequences. A myriad of things are not allowed, and I don't see people complaining about any incursion into personal freedoms.

The reason people have accepted these rules is that we have already proven that free access to those things could be catastrophic. We haven't proven that yet with AI. But I don't see much difference between those established and well-accepted rules and a rule that says: a company cannot release, or use for its own benefit, a technology that will reduce the need for humans at scale, because of the impact (again at scale) that it would have on society.

In other words, if you are a company and have the potential to release a product, or buy a product from a provider that would cause mass unemployment, should you be legally allowed to do so? I do not think so.

That’s a fair objection. Having ruminated on it some more, I’ll admit it might be tenable.

As for achieving an effective ban, occupational collapse might be the stronger motivator once workplace adoption broadens and accelerates, but risk of epistemic collapse might register sooner among the general public, already broadly suffering slop.

Like Bill Gates, I wonder why it’s not yet become a theme in mainstream politics.

Why does it have to be doom and gloom? Serious question. When we plant seeds they bear fruit, and not all fruit is poison.

It's doom and gloom because the underlying game theory forces all state actors into an unbound and irresponsible arms race, consequences be damned.

AI development game theory is extremely similar to the game theory behind nuclear arms development, but worse (nuclear weaponry was born from Human General Intelligence, and is therefore a subset of the potential of AI development). Failing to be the most capable actor could put one in a position of permanent loss of autonomy/agency at the whims of more capable actors.
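The payoff structure described above can be sketched as a one-shot prisoner's dilemma. The payoff numbers below are illustrative assumptions, not measurements of anything: they only encode the ordering the comment describes (dominating > mutual restraint > mutual race > being left behind).

```python
# Illustrative payoff numbers (assumptions, not measurements) for a
# one-shot arms-race dilemma: "race" = develop aggressively,
# "restrain" = hold back. PAYOFF[(mine, theirs)] is my payoff.
PAYOFF = {
    ("restrain", "restrain"): 3,  # mutual restraint: safe cooperation
    ("restrain", "race"):     0,  # I hold back, they race: I lose agency
    ("race",     "restrain"): 5,  # I race, they hold back: I dominate
    ("race",     "race"):     1,  # mutual race: risky for everyone
}

def best_response(their_move):
    """My payoff-maximizing move against a fixed move by the other side."""
    return max(("restrain", "race"),
               key=lambda mine: PAYOFF[(mine, their_move)])

# Racing is the best response to either move the other side makes, so
# both rational actors race -- even though mutual restraint (3, 3)
# beats mutual racing (1, 1) for everyone.
```

With that payoff ordering, racing strictly dominates, which is exactly why "consequences be damned" falls out of the game rather than out of anyone's malice.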

Not OP, but AI is fundamentally in another category than any other technology before it. It requires moral fortitude to wield in a way that guns and books didn't require. It augments human judgement in a way that needs a moral framework to clearly guide it.

Unfortunately, as a species we seem to be abandoning morality as a general principle. Everything is guided by cold hard rationality rather than something greater than us.

Because it's a fruit governed by humans, in the scope of a capitalistic and patriarchal society. And all fruits planted in a capitalistic and patriarchal society are poison.

The current fruit is automating away a ton of human labor with no foreseeable way to continue to engage that labor. It is poison for the majority of humanity which will bear fruit for the limited few who can use it / own it.

I think that much is fairly clear from AI.

It's not going to bear fruit for them either.

Why would an AI which is smarter than humans care about a ridiculous belief like "We own you"?

Get this point across to those leading the charge, if not every person everywhere.

> Collateral and reckless damage is guaranteed at this point.

It's industrialization and mechanized warfare all over again.

AI isn't ruthless; that doesn't even make sense. It's a mathematical model. If it's optimizing for the wrong thing, then that's strictly the fault of the people who chose what to optimize for.

You need to go back and look at AI safety research from long before LLMs were a thing. Any complex goal-driven system will have outcomes that cannot be predicted. Saying "it's a mathematical model" betrays your ignorance of behavior in complex systems. Very tiny changes in initial conditions can produce vastly different outcomes, and you don't have enough entropy in the visible universe to test them all.
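The sensitivity claim is easy to demonstrate with a classic toy system, the logistic map at r = 4; it's a standard stand-in for chaotic dynamics, not a model of any AI. Two runs whose starting points differ by 1e-10 end up macroscopically far apart within a few dozen steps.

```python
# The logistic map x -> r*x*(1-x) at r = 4 is chaotic: a 1e-10
# difference in starting point grows, roughly doubling each step,
# until the two trajectories are effectively unrelated.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def max_separation(x0, delta, steps):
    """Largest gap seen between two trajectories started delta apart."""
    a, b = x0, x0 + delta
    widest = abs(a - b)
    for _ in range(steps):
        a, b = logistic(a), logistic(b)
        widest = max(widest, abs(a - b))
    return widest

sep = max_separation(0.2, 1e-10, 50)
# sep reaches order one within ~35-50 steps, despite both trajectories
# staying inside [0, 1] the whole time.
```

A three-line deterministic formula is "just a mathematical model" too, and it is still unpredictable in practice without the exact initial condition.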

There might be better words to describe the fact that it doesn't really have the same boundaries we assume it has.