> You can't have regulators and enforcement until you figure out what it means.
This is patently false. To have regulations and regulators, all you need are concrete specifications of the outcomes you want to avoid and exactly what to check.
For example, are you using personally identifiable information to train your models? Oh, you are? Well, you shouldn't be. And you should prove that you aren't by tracking the provenance of your training data.
See? That's verifiable, actionable, and enforceable. Exactly the kind of thing regulators track.
Also quite important: part of a regulator's role is to work out what to regulate and how.
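To make "verifiable" concrete, here is a minimal sketch of what such a check could look like. The manifest format and field names (`origin`, `license`, `pii_audit_passed`) are assumptions for illustration, not any real compliance schema:

```python
# Hypothetical example: verify that every training-data source in a
# provenance manifest has a documented origin, a license record, and a
# passing PII audit. Field names and file format are illustrative
# assumptions, not any regulator's actual schema.

import json
from pathlib import Path


def audit_provenance(manifest_path: str) -> list[str]:
    """Return human-readable violations found in the provenance manifest."""
    sources = json.loads(Path(manifest_path).read_text())
    violations = []
    for src in sources:
        name = src.get("name", "<unnamed source>")
        if not src.get("origin"):
            violations.append(f"{name}: no documented origin")
        if not src.get("license"):
            violations.append(f"{name}: no license / data-ownership record")
        if not src.get("pii_audit_passed", False):
            violations.append(f"{name}: PII audit missing or failed")
    return violations


if __name__ == "__main__":
    problems = audit_provenance("training_data_manifest.json")
    if problems:
        print("Manifest fails the check:")
        for p in problems:
            print(" -", p)
    else:
        print("All sources have provenance and passed the PII audit.")
```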
Except that is not AI safety. You are regulating other concerns (privacy, data ownership), which is great! But it's not the topic at hand.
Unless you are, *cough*, partially defining AI safety as privacy and data ownership. Which is my point.
> Except that is not AI safety.
Your assumption is uninformed and patently wrong. One of the primary concerns of AI safety is surveillance.