Companies self-regulating around safety is even worse than banks self-regulating before 2008. At least the investment banks at that point were public companies and had to make a ton of disclosures. OpenAI doesn't have to.

If we want AI "safety", whatever that means, we need regulators and enforcement. Without them, I'll assume it's decoration.

The whole hype around AI safety is, to some extent, a shrewd marketing ploy. It's the whole "friends holding back their buddy who's amped up and ready to start throwing punches" act.

That is not to say that Hinton, Sutskever, and others aren't genuinely concerned about AI safety. But I doubt that's why the big names are paying lots of random nobodies to pretend to care about AI safety, because frankly, I don't see how they could produce anything of use in a possible AGI future.

Don’t worry, if Anthropic cracks AGI first, we’ll all be safe, somehow.

You can't have regulators and enforcement until you figure out what it means.

> You can't have regulators and enforcement until you figure out what it means.

This is patently false. To have regulations and regulators, all you need are concrete specifications of which outcomes to avoid and exactly what to check.

For example: are you using personally identifiable information to train your models? Oh, you are? Well, you should not. And you should prove that you aren't by tracking the provenance of your training data.

See? That's verifiable, actionable, and enforceable: exactly the things regulators track.
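
To make "verifiable" concrete: one way a regulator could check this is a signed manifest where every training shard records its source, a content hash, and a PII-scan result, which an auditor can re-verify independently. A minimal sketch in Python (the JSONL manifest format and field names like `pii_scan` are hypothetical, just to show the shape of the check):

    import hashlib
    import json

    def load_manifest(path):
        """Each record describes one training shard: where it came
        from, its content hash, and whether a PII scan passed."""
        with open(path) as f:
            return [json.loads(line) for line in f]

    def verify_shard(record, data_dir):
        """Re-hash the shard on disk and compare it to the manifest,
        so the provenance claim is checkable by a third party."""
        with open(f"{data_dir}/{record['shard']}", "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest == record["sha256"] and record["pii_scan"] == "pass"

    def audit(manifest_path, data_dir):
        records = load_manifest(manifest_path)
        failures = [r["shard"] for r in records
                    if not verify_shard(r, data_dir)]
        if failures:
            raise SystemExit(f"Provenance audit failed for: {failures}")
        print(f"All {len(records)} shards verified: hashes match, PII scans pass.")

    if __name__ == "__main__":
        audit("training_manifest.jsonl", "shards")

The point isn't this particular script; it's that "don't train on PII" reduces to checks a regulator can actually run, unlike "be safe".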

Also quite important: part of a regulator's role is to determine what to regulate and how.

Except that is not AI safety. You are regulating other concerns (privacy, data ownership), which is great! But it's not the topic at hand.

Unless you are, *cough*, partially defining AI safety as privacy and data ownership. Which is my point.

> Except that is not AI safety.

Your assumption is uninformed and patently wrong. One of the primary concerns of AI safety is surveillance.

You absolutely can, because negative effects are already cropping up: services capitulating to AI training scraper bots, children being extorted by schoolmates with AI-"nudified" pictures, lawyers submitting AI-generated filings full of hallucinations... all of which warrants urgent regulatory attention.

Actual human customer support being replaced by AI also warrants at least an investigation - if not to protect one of the last classes of low-skill employment, then because mismanagement of support has been a problem for so long that "submit your complaint on Hacker News" is a meme.

Totally. Please ask Trump to step in; sounds like a wonderful idea.