But they can just plug an AI into a verified account.

I am less concerned here. If you plug an AI into your identity, I guess your identity is revoked. I do see the problem, though: once a service notices you're an AI, there is no way to block you, because the service doesn't really know who you are, only that you're human.

So we need a mechanism that makes this identity verifiable: maybe the identity service issues you a unique identifier, so a service can block the account. There is no mechanism to report an account to, say, the identity service ("this is a bot"), so each service manages its own block list.
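A minimal sketch of that self-managed block list, assuming the identity service hands each account an opaque identifier (the `verified_id` values here are hypothetical):

```python
# Hypothetical per-service block list, keyed by the identity service's opaque ID.
# There is no upstream party to report to, so blocking is purely local.
blocked_ids: set[str] = set()

def report_bot(verified_id: str) -> None:
    """Mark an account as a bot on this service only."""
    blocked_ids.add(verified_id)

def is_allowed(verified_id: str) -> bool:
    """Check whether an account may interact with this service."""
    return verified_id not in blocked_ids

report_bot("id-123")
assert not is_allowed("id-123")  # blocked here
assert is_allowed("id-456")      # unaffected elsewhere
```

The point of the opaque identifier is that the block survives across sessions without the service ever learning the real-world identity behind it.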

The risk here is fingerprinting: your ID can be cross-referenced across apps. Maybe this is where you implement a zero-knowledge proof that you are who you say you are.
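Short of a full zero-knowledge proof, one simpler mitigation is pairwise pseudonymous identifiers (as in OpenID Connect's pairwise subject type): the identity service derives a different stable ID per app, so two apps cannot cross-reference the same user. A minimal sketch, assuming an HMAC derivation with a secret held only by the identity service (all names here are illustrative):

```python
import hashlib
import hmac

def pairwise_id(service_secret: bytes, user_id: str, app_id: str) -> str:
    """Derive a per-app identifier: stable within one app, unlinkable across apps."""
    message = f"{user_id}:{app_id}".encode()
    return hmac.new(service_secret, message, hashlib.sha256).hexdigest()

secret = b"identity-service-signing-key"  # hypothetical key, never shared with apps

a = pairwise_id(secret, "user-42", "app-alpha")
b = pairwise_id(secret, "user-42", "app-beta")
assert a != b                                             # apps can't cross-reference
assert a == pairwise_id(secret, "user-42", "app-alpha")   # but stable, so blocking works
```

Each app still gets a blockable identifier, while correlating users across apps would require the identity service's secret.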

I don't love the original idea, because uploading identification is risky. You could still plug an AI into a verified account, but at least the attack vector is a single account instead of an unbounded number of them.

But then, if the AI is detected, that person can be permanently banned. No more AI. No new accounts.

So if someone compromises your identity, they can unperson you? And how will the AI be detected? By another AI?

"So if someone compromises your identity they can unperson you?"

You've identified a problem that unrelated systems also have, like banks and identity theft. This solution isn't responsible for causing it.

"How will the AI be detected? By another AI?"

However a platform likes to. Let the best platform win.