That does not work without treacherous locked-down hardware. The marketing from Google et al. leaves that fact out in order to privacy-wash what is ultimately a push for digital authoritarianism.

Think about it - the claim is that these systems can prove aspects of someone's identity (e.g. age) without the site where the proof is used learning anything about the individual, and without the proof provider knowing where the proof is used. If all of that is true while users are running software they control, then it's trivial for an activist to set up a proxy that takes proof requests from other users and generates proofs based on the activist's own identity - with no downside for the activist, since it can never be traced back to them.

The only thing that could be done is for proof providers to limit the rate of proofs per identity, so that multiple activists would be required to, say, provide Discord access to all the kids who want it.

If I had my 'druthers, there would be a kind of physical vending machine installed at local city hall or whatever, which leverages physical controls and (dis-)economies of scale.

The trusted machine would check your ID (or sometimes accept cash) and dispense single-use tokens to help prove stuff. For example, to prove (A) you are a Real Human, or (B) Real and Over Age X, or (C) you Donated $Y To Some Charity To Show Skin In The Game.

That ATM-esque platform would be open-source and audited to try to limit what data the government could collect, using the same TPM that would make it secure in other ways. For example, perhaps it exposes only the total number of times each ID was used at the machine, and only for the previous month.
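A toy sketch of that audit restriction, under the assumption that the machine's attested software keeps only per-ID usage counts inside a one-month window and discards everything older (all names here are hypothetical, not any real system's API):

```python
# Hypothetical sketch: the machine records only hashed-ID usage counts,
# and only for the currently retained month; older data is dropped.
import datetime
from collections import Counter

class UsageLog:
    def __init__(self):
        self._month = None        # (year, month) currently retained
        self._counts = Counter()  # hashed ID -> uses this month

    def record(self, id_hash, when=None):
        when = when or datetime.date.today()
        month = (when.year, when.month)
        if month != self._month:  # month rolled over: discard old counts
            self._month = month
            self._counts = Counter()
        self._counts[id_hash] += 1

    def audit(self):
        """All an auditor can see: counts for the retained month only."""
        return dict(self._counts)

log = UsageLog()
log.record("id-1", datetime.date(2024, 1, 5))
log.record("id-1", datetime.date(2024, 1, 20))
assert log.audit() == {"id-1": 2}
log.record("id-1", datetime.date(2024, 2, 1))  # new month wipes January
assert log.audit() == {"id-1": 1}
```

The point of the design is that raw timestamps and per-use records never exist outside the TPM-attested code path, so there is nothing richer than monthly counts for the government to demand.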

The black-market in resold tokens would be impaired (not wholly prevented, that's impossible) by factors like:

1. The difficulty of scaling the physical portion of the work of acquiring the tokens.

2. Suspicion, if someone is using the machine dozens of times per month—who needs that many social-media signups or whatever?

3. There's no way to test whether a token has already been used except to spend it. Because reseller fraud is so easy, the black market is harder to sustain - unless a seller also builds a durable (investigate-able) reputation. I suppose people could watch the vending machine being used, but that adds another hard-to-scale physical requirement.

> 2. Suspicion, if someone is using the machine dozens of times per month—who needs that many social-media signups or whatever?

Anyone who visits pornhub and doesn't want to open an account?

Yeah, introducing real world friction is seemingly one of the only ways of actually solving the problems of frictionless digital systems (apart from computational disenfranchisement, of course).

It might be a better idea to frame your idea in terms of online interactive proofs rather than offline bearer tokens. It's of course a lot less private/convenient to have to bring a phone or other cell-modem enabled device to the vending machine, especially for the average person who won't exercise good digital hygiene. Still, some sort of high-latency challenge-proof protocol is likely the way to go, because bearer tokens still seem too frictionless.

For example, (3) could be mitigated by an intermediary marketplace that facilitates transactions with escrow. If tokens were worth, say, $2, then getting even just 10 at a time to sell could be worth it for the right kind of person. And personally, I'd just get 10 tokens myself simply to avoid having to go back to the machine as often. In fact, the optimal strategy for regular power users might be to get as many tokens as you think you might need (even if you have to pay for them), and then sell them as they near expiration to recoup your time/cost/whatever.

My concern with a "bring your phone and use it immediately" scheme is that someone could pierce the privacy by correlating the time an account was made, or a pattern of network traffic, with the time someone was using or near the vending machine.

Adding large and unpredictable amounts of latency makes that kind of correlation weaker and hopefully impractical.

>Think about it - the claim is that those systems can prove aspects of someone's identity (eg age), without the site where the proof is used obtaining any knowledge about the individual and without the proof provider knowing where the proof is used.

That is not necessarily true. There are ZK setups where you can tell when a witness is reused, such as linkable ring signatures.

Another simple example is blind signatures: you know each unblinded signature corresponds to a unique blind signature, without knowing who blinded it.
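That property is easy to demonstrate with a Chaum-style RSA blind signature. The sketch below uses tiny demo primes and a made-up message, so it is illustrative only, not secure: the signer can verify the unblinded signature is one of its own, but cannot match it to the blinded request it actually signed.

```python
# Toy RSA blind signature. Demo-sized primes -- NOT secure.
import hashlib
import math
import random

# Signer's RSA key
p, q = 10007, 10009
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def H(msg: bytes) -> int:
    """Hash a message into Z_n."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# --- User: blind the message hash with a random factor r ---
msg = b"age-over-18-token"
while True:
    r = random.randrange(2, n)
    if math.gcd(r, n) == 1:
        break
blinded = (H(msg) * pow(r, e, n)) % n

# --- Signer: signs the blinded value without learning H(msg) ---
blind_sig = pow(blinded, d, n)

# --- User: unblind to obtain an ordinary signature on H(msg) ---
sig = (blind_sig * pow(r, -1, n)) % n

# Anyone (including the signer) can verify with the public key (n, e),
# but the signer cannot link `sig` back to `blinded` or `blind_sig`.
assert pow(sig, e, n) == H(msg)
```

The linkability the comment mentions is exactly the signer's ability to check `pow(sig, e, n)` against its own key: it can count how many of its signatures exist in the wild, just not whose they are.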

The easy solution is the best one. Just don't collect the info. Any problems resulting from that need to be handled differently.

It's proven to work, and we wouldn't be dependent on untrustworthy identity providers.

I agree. It is possible, but that does not mean it should be done.

The thing is, with such a ZK system you are still collecting and compiling all this data; it's just done by some sort of (government?) notary, with a layer of anonymity between the notary and the verifier (which the two can cooperate to undo).

The real political problem is the concentration of personal information in one place. The ZK system just allows that place (notary) to be separate from the verifier.

Sure, but making use of that introduces new problems.

Fundamentally it limits a person to one account/nym per site. This itself removes privacy. An individual should be able to have multiple Discord nyms, right?

Then if someone gets their one-account-per-site taken/used by someone else, now administrative processes are required to undo/override that.

Then furthermore it still doesn't prevent someone from selling access to all the sites they don't care about. A higher bar than an activist simply giving it away for free, but still.

>An individual should be able to have multiple Discord nyms, right?

Yeah, I think so. I mean this is like my 20th hacker news account. I am using my 5th discord account right now.

But at the same time, it would be interesting to see how anonymous yet sybil-proof social media would work out.

I get the feeling that it's already pretty easy to buy and sell fake IDs, so I don't think it would pan out in practice. I also had the same idea as you: if such a system were to exist, you could sell proofs for all the services you don't use.

Usually, these zero-knowledge proofs are backed by some sort of financial cost, not the bureaucratic cost of acquiring an ID. All of these "linkable" ZK proofs are aimed at money systems or voting systems.

In blind-signature-based money systems, a big problem used to be dealing with change: you had to go back and spend your unblinded signature at the signer to get a new one. In a similar fashion, maybe you could make it so that users produce a new ZK proof by invalidating an old one? So you could retire an old nym if you get banned and create a new nym, but only have one at a time. IDK if that is a reasonable tradeoff.
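The retire-a-nym-to-mint-a-new-one idea might look something like the toy notary below. Everything here is hypothetical: a real design would use a ZK or blind protocol so the notary never sees the real-ID-to-nym link, which this plain version deliberately omits in order to show only the one-active-nym-at-a-time rule.

```python
# Hypothetical sketch: one active nym per real identity; minting a new
# nym requires surrendering (invalidating) the old one.
import hashlib
import secrets

class NymNotary:
    def __init__(self):
        self._active = {}      # real_id -> currently active nym
        self._revoked = set()  # retired nyms, kept to reject reuse

    def mint(self, real_id, surrender=None):
        """Issue a fresh nym; the active one (if any) must be surrendered."""
        current = self._active.get(real_id)
        if current is not None:
            if surrender != current:
                raise ValueError("must surrender the active nym first")
            self._revoked.add(current)
        nym = hashlib.sha256(secrets.token_bytes(32)).hexdigest()
        self._active[real_id] = nym
        return nym

    def is_valid(self, nym):
        return nym not in self._revoked and nym in self._active.values()

notary = NymNotary()
n1 = notary.mint("alice")                # first nym
n2 = notary.mint("alice", surrender=n1)  # rotate after, e.g., a ban
assert not notary.is_valid(n1) and notary.is_valid(n2)
```

This keeps the sybil bound (one live nym per identity) while still letting a banned user start over, which is the tradeoff the comment is weighing.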