No, they couldn't. Homomorphic encryption makes it possible for whoever holds the keys to the data to get certain kinds of processing done on it by someone who doesn't know what the data represents, and who won't know what the results represent.
It is very carefully constructed exactly to prevent what you're talking about: leaking any kind of information about the data to someone who doesn't already know what the data is.
The problem is that nobody outside of the people enforcing this would know what that "processing" is looking for either. Is it going to look for illegal content, political activists, or women seeking an abortion?
You can design a system where FHE does the analysis, and then the result is available to the 3rd party as well. Nothing in FHE prevents you from doing that.
Do you mean because you can make the result a yes/no, and then brute-force it with a plaintext attack (encrypting "yes", encrypting "no", and seeing which it is)? Or is there some technique that'd scale to larger output sizes?
Sure, if you have the private keys you can publish the result to whomever you want. But you don't need and wouldn't benefit from FHE in any way in this case.
You would benefit from FHE: the users would know that data never leaves the device, the inference is done locally, and only the result is shared.
I mean, I do not have a link to a paper with a system like that, but I think a combination of FHE and an enclave of sorts could be good for such a purpose (leaving aside potential performance issues with FHE).
If the data is encrypted with my key, no one else can access it or do anything else with it. Period - there is nothing more to talk about (assuming that the encryption scheme is secure, of course). No one can extract anything from this data unless they have my private key.
FHE, formally, is simply a scheme with the following property: for any supported function f and message m, Dec(Eval(f, Enc(m))) = f(m), while the ciphertexts reveal nothing about m to anyone without the secret key.
FHE allows me to securely use someone else's hardware to run my inference on my data and be confident that I am the only one who knows the result. If the data is on my hardware, and I don't want it to leave my hardware, then FHE is completely useless for me.

What you actually want is something like trusted computing. The government decides what analysis to run, it sends it to my hardware, my hardware runs that analysis on my decrypted data, and sends the result to the government, in such a way that the government can be certain that the algorithm was followed exactly. Of course, you need some assurances even here, such that the government doesn't just ask for the plaintext data itself - there have to be some limits to what they can run.
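A rough sketch of that flow, purely as a mock-up to make the trust boundary concrete - every name below (the attestation key, approved_check, run_attested) is hypothetical, and a real system would use a TEE with remote attestation rather than this toy:

    import hashlib
    import hmac

    # Mock of the "trusted computing" flow described above. Nothing here is a
    # real TEE API; the names and the attestation key are made up.
    DEVICE_ATTESTATION_KEY = b"stand-in-for-a-hardware-key"

    def approved_check(message: str) -> bool:
        # The analysis the authority asked the device to run. The device would
        # only agree to run code whose hash appears on a published allow-list.
        return "i am planning an attack" in message.lower()

    def run_attested(message: str) -> dict:
        # Runs on the device, on plaintext the device already has.
        verdict = approved_check(message)
        code_hash = hashlib.sha256(approved_check.__code__.co_code).hexdigest()
        proof = hmac.new(DEVICE_ATTESTATION_KEY,
                         f"{code_hash}:{verdict}".encode(),
                         hashlib.sha256).hexdigest()
        # Only the verdict, the code hash and the signature leave the device;
        # the message itself never does.
        return {"verdict": verdict, "code_hash": code_hash, "proof": proof}

    print(run_attested("dinner at seven?"))
    print(run_attested("I am planning an attack"))

The limits mentioned above would live in the allow-list step: the device refuses anything it can't verify, and only ever emits the verdict.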
I'm not an expert at all on cryptography, so I can't comment on that. However, when looking for info about Thorn I found an FTM page where a university researcher acknowledges it's not possible to do it yet. It should be either this https://www.ftm.eu/articles/ashton-kutchers-non-profit-start... or this one, I can't remember at the moment: https://www.ftm.eu/articles/ashton-kutcher-s-anti-childabuse...
Edit "possible" as in very computationally expensive to do it on a mass scale
Not possible to do what? Homomorphic encryption?
The links you provided are paywalled.
Yep. Sorry for the paywall.
http://web.archive.org/web/20241210080253/https://www.ftm.eu...
Yeah, I think their design won’t work, of course. It doesn’t mean that the technology cannot be applied.
Learn what, exactly? Homomorphic encryption allows mathematical operations on the encrypted data: x+1 can be applied to the ciphertext, but that still won't let you know whether x was 1, 2, 3 or any other value.
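As a toy illustration (this is Paillier, which is only additively homomorphic rather than full FHE, with deliberately tiny, insecure parameters), the party doing the computation can turn an encryption of x into an encryption of x+1 without ever learning x:

    import math
    import random

    # Toy Paillier setup - additively homomorphic only, toy-sized primes.
    p, q = 499, 547
    n = p * q
    n2 = n * n
    g = n + 1
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)              # valid because g = n + 1

    def encrypt(m):
        r = random.randrange(2, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(2, n)
        # Randomised: encrypting the same m twice gives different ciphertexts,
        # so you can't just encrypt 1, 2, 3 yourself and compare.
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return (((pow(c, lam, n2) - 1) // n) * mu) % n

    def add_plain(c, k):
        # Homomorphic x + k: computed entirely on the ciphertext.
        return (c * pow(g, k, n2)) % n2

    c = encrypt(2)                    # the other party only ever sees c
    print(decrypt(add_plain(c, 1)))   # key holder decrypts: 3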
Despite all this, fuck the EU for consistently trying to undermine data privacy and introducing Kim Jong Un style mass surveillance. None of that shit protects privacy, as they claim.
That's interesting - is there anything relevant they could do under homomorphic encryption? For example, let's say that the government wants to only flag content with the substring "I am planning an attack" - is there any way to do that while keeping encryption intact?
The government alone couldn't do it. The system has to be on device, otherwise the key is exposed, rendering the whole thing moot.
Alternatively, service providers like Meta can do it. We trust them with end-to-end encryption anyway.
> We trust them with end-to-end encryption anyway.
No we don't.
Well, maybe you do not, but the general public does.