> Much like age verification

Age verification as a technical concept can be done in a privacy-preserving manner! Whether or not we want age verification is another debate, but let's stop making wrong technical claims about that: it doesn't help.

Really, how?

At some point someone will need to issue a key, which at some point will need to be verified against known good signatures.

These signatures will also need to be kept in case of lawsuits/enforcement, so if somebody gets access to them they will know you visited that site.

The trick is to define "privacy-preserving age verification" in an extremely narrow way that ignores any other privacy concerns.

For example, imagine you put the same private key into the 'secure element' of every single iphone. You use code signing so that key is only unlocked when the phone is running unmodified iOS with all security updates. You use encryption and remote attestation for the front-facing camera and face id depth sensor. You use NFC to read government-authenticated age and appearance data from biometric passport chips (or digital ID cards) and you store it on-device.

For example, imagine you put the same private key into the 'secure element' of every single iPhone. You use code signing so that key is only unlocked when the phone is running unmodified iOS with all security updates. You use encryption and remote attestation for the front-facing camera and Face ID depth sensor. You use NFC to read government-authenticated age and appearance data from biometric passport chips (or digital ID cards) and you store it on-device.

Pornhub gets an Apple-signed attestation of age - but because every phone signs challenges with the same private key, Pornhub can't link it to a particular phone or identity document.
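To make the flow concrete, here's a toy sketch of that challenge/response step (textbook RSA with tiny primes, purely illustrative - a real deployment would use a hardware-backed key and a modern signature scheme, and the challenge format is made up). The point is that every device signs with the same key, so the site can verify the attestation without learning which device produced it:

```python
import hashlib

# Toy textbook-RSA parameters (tiny primes, illustrative only -- never use in production).
P, Q = 61, 53
N = P * Q                           # public modulus, known to everyone
E = 17                              # public exponent
D = pow(E, -1, (P - 1) * (Q - 1))   # private exponent, baked into every device

def sign_challenge(challenge: bytes) -> int:
    """Device side: hash the site's challenge and sign with the shared private key."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(h, D, N)

def verify_attestation(challenge: bytes, sig: int) -> bool:
    """Site side: check the signature against the single well-known public key."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(sig, E, N) == h

# Hypothetical challenge: site domain, date, and a nonce.
challenge = b"example.org|2025-01-01|nonce42"
sig = sign_challenge(challenge)
print(verify_attestation(challenge, sig))  # True: age attested, device unidentifiable
```

Because the signature is identical no matter which device produced it, the site learns only "some compliant device with a verified adult passed the check" - which is exactly the narrow sense in which this is privacy-preserving.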

So in a very narrow sense, privacy is preserved.

You can't use someone else's ID, as it checks your face every time. You can't fool it with a photo of the person because of the depth sensor. You can't MITM/replay the camera/depth data because the link is encrypted. You can't substitute software that skips the check with a rooted phone because of the code signing. Security holes can be closed by just pushing a mandatory OS update.

Sure, it doesn't work on PCs. Doesn't work on Linux, or on unlocked/rooted phones. It hands users' government ID documents over to Google and Apple. It requires people to carry foreign-made, battery powered, network connected GPS trackers (with cameras, microphones and speech recognition) with them. And there are non-negotiable terms of service everyone must agree to. But if you define "privacy-preserving" to ignore all that stuff and only consider whether Pornhub learns your identity, it's privacy-preserving.

That key will get leaked. A key that has to go into every phone, even if done at the manufacturer and onto the TPM chip, will get out.

Also even if it doesn't get leaked directly, the security of TPM chips is not absolute. Secrets from them can theoretically be extracted given an attacker with sufficient means and motivation. Normally nothing that's on a typical TPM chip would warrant a project of that magnitude, but a widely used private key can change that equation.

Plus a TPM chip doesn't really have any way to tell whether the phone is lying to it. You could swap out the actual phone camera hardware and sensors for a custom board that feeds the entire phone camera data of your choosing and it would be none-the-wiser.

> That key will get leaked.

Maybe? But biometric passports, chip-and-pin payment cards and SIM cards seem to do reasonably well. And Apple can always push out a mandatory software update that rotates the key, if they need to.

> You could swap out the actual phone camera hardware and sensors for a custom board that feeds the entire phone camera data of your choosing and it would be none-the-wiser.

Apple's 'TrueDepth' cameras are serialised and paired with the rest of the device. The touch ID sensors were before that too.

I don't know the precise details, but reports from people trying to repair devices independently of Apple are that the phone is very much the wiser.

e.g. https://support.apple.com/en-gb/120567 https://www.reddit.com/r/iphonehelp/comments/1dl38kq/iphone_...

> Apple's 'TrueDepth' cameras are serialised and paired with the rest of the device. The touch ID sensors were before that too.

That prevents trying to swap the module, but doesn't prevent swapping out the sensor on the module itself.

All so kids can't access PornHub?

Jesus Christ.

14 year old me ran into porn on the internet all the time. It didn't turn me into a serial killer.

Meanwhile we let kids have exposure to algorithms that pervert their sense of self worth, get them addicted to dopamine and gambling, and make them feel inferior to their peers.

We have the wrong priorities as a society.

And this bullshit is going to turn us into a completely tracked, monitored, controlled bunch of cattle.

We're building 1984 and we're happy about it.

Parental controls on device are a better solution: they work today and don't carry a risk of data breach.

They would be a solution if almost all parents used them, but parents don't want to socially isolate their kids since a lot of "social" activity is now on social media. It's kind of a prisoner's dilemma.

They're not necessarily wrong. Despite the vapid and damaging nature of most popular online media, isolating a child from it might have even worse social consequences when their real-life peer groups discover that they're not on social media or that their parents have neutered their phone. Some kids would turn out fine after that. Others would be socially destroyed for life (maybe with the right therapy they could become well-adjusted, but high quality therapy is rare).

> They would be a solution if almost all parents used them

No, they are a solution for parents who want to use them, and that's all they should be. Their existence demonstrates that it's possible to handle this without regulation, other than the desire of some people to inflict their preferences onto other people's kids.

I should not have to surrender my anonymity because parents are too lazy to set up parental controls.

Parental controls can set browsers in "child mode" where the browser sends an "I am a child" header to the server and social networks etc. need to honour it. This has existed for twelve years already: https://blog.mozilla.org/netpolicy/2014/07/22/prefersafe-mak... . It can probably be amended with a more granular set of levels, but that would be the best way forward.
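The Mozilla proposal linked above works via the `Prefer: safe` request header, so honouring it server-side is only a few lines. A minimal sketch (the header parsing is simplified; the "granular levels" extension mentioned above is hypothetical and not shown):

```python
def wants_safe_mode(headers: dict) -> bool:
    """Check for the 'Prefer: safe' header from the 2014 Mozilla proposal.
    The Prefer header may carry several comma-separated preferences."""
    prefer = headers.get("Prefer", "")
    return "safe" in (p.strip().lower() for p in prefer.split(","))

print(wants_safe_mode({"Prefer": "safe"}))                 # True
print(wants_safe_mode({"Prefer": "respond-async, safe"}))  # True
print(wants_safe_mode({}))                                 # False
```

The enforcement burden then sits where the law already puts it: on the service honouring the flag, not on every adult proving their identity.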

The problem of "parents are negligent" is also solved by existing laws which have fines for parents who are negligent towards their children, and governments absolutely love collecting fines, so all the incentives are properly aligned.

Parental controls are intentionally gimped. They do the bare minimum while leaving more than enough wiggle room for a tech-savvy teenager. To implement a robust parental-control scheme you need network-level filtering, which isn't something the average parent will know anything about.

Are they a better solution? Yes

Do they work currently? Not really

Are they too complex for the avg Joe to work out? Unfortunately, yes. (Something about the smartest bears and the dumbest humans.)

Joe can walk into an Apple store (or wherever they purchased the device) and ask them to enable parental controls on it. We have people whose job it is to service computers and phones, they have been around for more than half a century. I am pretty sure most Joes don't service their cars either, yet they keep them road legal by visiting trained mechanics.

As long as Joe has the right to vote, which is something more important and more complex, we cannot complain that parental control is too complex.

Ring signatures do this - given a set of public keys and one corresponding private key, you can produce a signature attesting that one of the keys signed it, but not which one. This lets both Google and you generate a signature and say "this is attested", without the person verifying it knowing _who_ signed it.

You likely need one more step beyond a plain ring signature, often called a linkable ring signature. With only a plain ring signature I could get one authenticated key and set up a site that gives away an unlimited number of access tokens with it, and you couldn't identify which key is doing so in order to kick it out.

A linkable ring signature lets you correlate multiple usage but only if they share a common 'context value'. Intelligent selection of the context value results in abusive use inevitably sharing a context so you can exclude or rate limit it, but honest use tends to not share a context so the privacy is preserved.
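A full linkable ring signature is too involved to sketch here, but the linking-tag idea on its own is simple: every signature carries a tag derived deterministically from the private key and the context, so two uses under the same context collide while uses under different contexts stay unlinkable. A toy illustration (a plain hash stands in for the real group operation, so this shows only the linkability property, not the signature itself):

```python
import hashlib

def linking_tag(private_key: bytes, context: str) -> str:
    """Deterministic per (key, context): reuse within one context is detectable,
    but the tag reveals neither the key nor any cross-context linkage."""
    return hashlib.sha256(private_key + b"|" + context.encode()).hexdigest()

sk = b"device-private-key"

# Same site, same day -> same context -> tags collide, so an oracle handing out
# unlimited tokens can be rate-limited or excluded:
print(linking_tag(sk, "example.org|2025-01-01") ==
      linking_tag(sk, "example.org|2025-01-01"))   # True

# Different site or different day -> different context -> no linkage:
print(linking_tag(sk, "example.org|2025-01-02") ==
      linking_tag(sk, "other.net|2025-01-01"))     # False
```

"Intelligent selection of the context value" above just means choosing what goes into that context string so that abuse necessarily collides and honest use necessarily doesn't.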

It should be possible with zero knowledge proofs.

The problem is that while you might be able to trust the crypto, the government won't trust you to do the crypto entirely by yourself. And this introduces avenues for deanonymisation. Moreover, collusion between the government and the entity making the age check can also theoretically deanonymise you.

It's a complicated problem.

We continue to seek a technological solution to a parenting problem.

With cryptography. Look at e.g. Privacy Pass; there is an RFC about it.

Blind signatures would work, with a bit of effort.

You can prove your signature comes from a key that is a member of an acceptable set without revealing which one. These schemes can also prevent excessive reuse, e.g. by additionally proving that some linked value is a hash-like function of your private key, the date, and the domain: if you sign multiple times for the same site on the same day your uses are linked, so someone can't just put up an oracle that gives out endless authentications.

Such systems are deployed in production by privacy-preserving cryptocurrencies, as it's the same problem: prove you're spending a coin that exists without revealing information about which one, and prove that you're not spending it multiple times.

Less private but easier to implement is just simple blind signing. Site asks you to give them a signature of their domain name, your account name, and date. You blind the data using a random number, go to google and identify yourself (e.g. solve a CAPTCHA, check your mobile device, age verify, whatever) and ask them to sign the blinded value-- they rate limit you and give you a signature. You unblind and provide to the site. Now the site knows you passed the google rate limit but nothing else, but google never learns what site you authenticated to.
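The four steps above map directly onto textbook RSA blinding. A toy walkthrough (tiny unpadded parameters, illustrative only - real deployments use a hardened variant such as the RSA blind signatures of RFC 9474; the message format is made up):

```python
import hashlib
from math import gcd

# Toy signer key (Google's role) -- tiny textbook RSA, never use in production.
P, Q = 1009, 1013
N, E = P * Q, 65537
D = pow(E, -1, (P - 1) * (Q - 1))

def h(msg: bytes) -> int:
    """Hash the message into the RSA group."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

# 1. User blinds the value (site domain, account name, date) with a random r.
msg = b"example.org|alice|2025-01-01"
r = 7777                     # must be coprime to N and secret to the signer
assert gcd(r, N) == 1
blinded = (h(msg) * pow(r, E, N)) % N

# 2. Signer identifies/rate-limits the user, then signs the blinded value.
#    It never sees msg, so it never learns which site is involved.
blind_sig = pow(blinded, D, N)

# 3. User unblinds: (h^D * r) * r^-1 = h^D mod N, a valid signature on h(msg).
sig = (blind_sig * pow(r, -1, N)) % N

# 4. Site verifies against the signer's public key (N, E).
print(pow(sig, E, N) == h(msg))  # True
```

The unblinding works because the blinding factor r^E passes through the signing exponent as r, which the user can divide back out - so the signer's view (the blinded value) and the site's view (the unblinded signature) are unlinkable.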

The blind-signing approach is kinda lame because it requires active communication with a third party that learns you're online and authenticating to stuff. So I think it's generally less preferred, but the cryptography is hardly any more complicated than an ordinary digital signature.

All states/governments have basic records on their citizens and residents, including at least a name, date of birth, and address, at least for a passport or driver's license, if not an actual ID card. Let's assume this is acceptable.

Then it's technically possible (and really not that difficult) for states to provide a service that issues zero-knowledge proofs of facts like "age > X".
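A real implementation would use a zero-knowledge proof system over the signed record; as a much weaker stand-in, the minimal-disclosure idea can be illustrated with a state-issued token that attests only the predicate's truth value, never the record itself (all names and the token format here are hypothetical, and the MAC stands in for a public-key signature):

```python
import hashlib
import hmac
import json

STATE_KEY = b"state-issuer-secret"   # hypothetical signing key held by the state

def eval_predicate(record: dict, predicate: str) -> bool:
    """Evaluate a simple 'field > value' predicate against the citizen record."""
    field, op, value = predicate.split()
    assert op == ">"
    return record[field] > int(value)

def issue_predicate_token(citizen_record: dict, predicate: str) -> dict:
    """State side: evaluate the predicate on the full record, then attest
    only its truth value -- the underlying record is never disclosed."""
    claim = json.dumps({"predicate": predicate,
                        "holds": eval_predicate(citizen_record, predicate)})
    mac = hmac.new(STATE_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "mac": mac}

def verify_token(token: dict) -> bool:
    """Verifier side: check the state's MAC on the claim."""
    mac = hmac.new(STATE_KEY, token["claim"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, token["mac"])

token = issue_predicate_token({"name": "alice", "age": 34}, "age > 18")
print(verify_token(token), json.loads(token["claim"])["holds"])  # True True
```

Note the two gaps a real ZK scheme closes: verification here needs the issuer's secret (a signature fixes that), and the issuer still learns when tokens are requested, which is exactly the deanonymisation avenue raised below.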

> Let's assume this is acceptable.

(partly off-topic rant) One can argue this is a false-premise fallacy. For most of history states did not have this information about their citizens and the world progressed quite nicely. The only argument for knowing this about citizens who don't drive (increasing numbers) nor travel abroad (a different problem altogether) is to tax them?

One of the foundational differences between humans and cattle was that you cannot brand (https://en.wikipedia.org/wiki/Livestock_branding) humans. Physically, that is, because we now do it digitally, and I see a slippery slope.

https://ageverification.dev/

> Unlinkability is achieved by design through Zero-Knowledge Proof cryptography see the "Privacy by design" section below.

Divorcing technical detail from how it is used does little good for humanity.