I was the author of the practitioners’ implementation section of the IEEE 7010 standard for assessing the human impact of AI software

https://standards.ieee.org/ieee/7010/7718/

I also worked closely with Jack Clark at OpenAI on all these issues, as a CTO back in 2018, before he disappeared

There are literally zero “AI labs” that have ever cared about “safety”

None of them has ever done anything tangible in any independent, auditable, third-party way: no defined reference baseline for what is safe and what is not, no method for evaluating it, and no practitioners’ guidance for how a designer determines what is and is not safe.

They follow the same rules as every other technology platform: do as much as you can legally get away with, no more, no less

I say this as somebody who’s been actively involved in the AI “safety” debate for a long time now, at least since 2013

The concept itself doesn’t even make sense if you fully understand the intersectional scope of technology and society

Society’s demands are the things that are unsafe, not the technologies themselves

As Bertrand Russell said, “as long as war exists, all technologies will be utilized for it.” You can replace “war” with anything you think is unsafe

Can you elaborate on this part please?

> The concept itself doesn’t even make sense if you fully understand the intersectional scope of technology and society. Society’s demands are the things that are unsafe, not the technologies themselves

Where can I learn more about it?

Go back to the fundamentals and read The Society of Mind by Marvin Minsky, or anything on cybernetics by Norbert Wiener

It would be super helpful if you could give the elevator pitch version of what a safe AI is.

The only “safe AI” is one that comes out of a “safe set of data”

So what would a “safe set of data” actually have to look like?

Well, it would have to not look like the majority of data we produce now, which carries latent embeddings (primarily from the Common Crawl dataset) of racism, lying, competition, destruction, and domination
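
To make that claim concrete: this kind of latent bias in embeddings is measurable, e.g. with the word-embedding association test (WEAT) from Caliskan et al. (2017). Below is a minimal sketch in Python/NumPy. The vectors here are random placeholders and the word sets are only illustrative, not the published test sets; to probe actual Common Crawl bias you would look the words up in embeddings trained on that corpus (e.g. GloVe 840B).

```python
import numpy as np

def cos(u, v):
    # cosine similarity between two embedding vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    # s(w, A, B): how much closer w sits to attribute set A than to set B
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # WEAT effect size: difference of the two target sets' mean
    # associations, normalized by the pooled standard deviation
    s = [association(w, A, B) for w in X + Y]
    return (np.mean(s[:len(X)]) - np.mean(s[len(X):])) / np.std(s)

# Placeholder random vectors stand in for real pretrained embeddings
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in
       ["doctor", "nurse", "he", "him", "she", "her"]}

X = [emb["doctor"]]            # target set 1
Y = [emb["nurse"]]             # target set 2
A = [emb["he"], emb["him"]]    # attribute set 1
B = [emb["she"], emb["her"]]   # attribute set 2
print(weat_effect_size(X, Y, A, B))
```

With real Common Crawl embeddings, a large positive effect size would indicate exactly the kind of latent association being described.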

I don’t believe humans are actually capable of making such data because our entire structure of society is based on racism, competition, and domination

> carries latent embeddings (primarily from the Common Crawl dataset) of racism, lying, competition, destruction, and domination

But safety has a wider scope than "racism, lying, competition, destruction, and domination", like always requiring eye protection when asked about making lemonade.

> I don’t believe humans are actually capable of making such data because our entire structure of society is based on racism, competition, and domination

So this debate that's been going on since 2013 is over, because it's impossible to make an AI safe since the data is unsafe? That would make sense, but if it were a data problem, it seems like that conclusion could have been reached a long time ago.

Indeed, that conclusion was reached a long time ago, but technologists literally don’t care because they’re just trying to get paid

And literally everybody who has tried to warn about it has been beaten down publicly as a radical or whatever