No, they couldn't. Homomorphic encryption makes it possible for whoever holds the keys to the data to get certain kinds of processing done on it by someone who doesn't know what the data represents, and who won't know what the results represent.
It is very carefully constructed exactly to prevent what you're talking about: leaking any kind of information about the data to someone who doesn't already know what the data is.
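To make that concrete, here's a small sketch using the additively homomorphic Paillier scheme via the python-paillier package (pip install phe). It's only partially homomorphic (additions, plus multiplication by plaintext constants), not full FHE, but it shows the core property: whoever does the arithmetic never needs the private key and never sees the values.

```python
# pip install phe  -- python-paillier, an additively homomorphic scheme.
# Partial homomorphism (addition only), not full FHE, but the point is the same:
# the party doing the computation never sees the plaintexts or the result.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Key holder encrypts the data and hands over only ciphertexts + public key.
enc_a = public_key.encrypt(3)
enc_b = public_key.encrypt(4)

# "Server" side: computes on ciphertexts, learns nothing about 3, 4, or the sum.
enc_sum = enc_a + enc_b
enc_scaled = enc_sum * 10          # ciphertext times plaintext scalar is allowed

# Only the private-key holder can read the results.
print(private_key.decrypt(enc_sum))     # 7
print(private_key.decrypt(enc_scaled))  # 70
```

Full FHE schemes (BGV, CKKS, TFHE and friends) extend this to arbitrary computations, but the trust model is the same: the evaluator sees only ciphertexts.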
The problem is that nobody outside of the people enforcing this would know what that "processing" is looking for either. Is it going to look for illegal content, political activists, or women seeking an abortion?
You can design a system where FHE does the analysis, and then the result is available to the 3rd party as well. Nothing in FHE prevents you from doing that.
Do you mean because you can make the result a yes/no, and then brute-force it with a plaintext attack (encrypting "yes", encrypting "no", and seeing which it is)? Or is there some technique that'd scale to larger output sizes?
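Worth noting: that encrypt-and-compare trick only works if encryption is deterministic. A toy sketch with textbook RSA (tiny parameters, purely illustrative):

```python
# Toy illustration of the "encrypt and compare" attack from the comment above.
# Textbook RSA (deterministic, no padding) with small primes -- insecure and
# purely illustrative. Because encryption is deterministic, an attacker can
# re-encrypt each candidate plaintext and compare against the ciphertext.
n = 104729 * 104723          # toy RSA modulus
e = 65537                    # public exponent

def enc(msg: str) -> int:
    m = int.from_bytes(msg.encode(), "big")
    return pow(m, e, n)      # same message always yields the same ciphertext

intercepted = enc("yes")     # ciphertext of the unknown result
guess = next(c for c in ("yes", "no") if enc(c) == intercepted)
print(guess)                 # "yes" -- the dictionary attack recovers it
```

Practical FHE schemes are randomized (semantically secure), so encrypting "yes" yourself produces a different ciphertext every time and the direct comparison fails; without the private key you can't recover even a one-bit result this way.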
Sure, if you have the private keys you can publish the result to whomever you want. But you don't need and wouldn't benefit from FHE in any way in this case.
You would benefit from FHE: the users would know that the data never leaves the device, that the inference is done locally, and that only the result is shared.
I mean, I do not have a link to a paper with a system like that, but I think a combination of FHE and an enclave of sorts could work for such a purpose (leaving aside the potential performance issues with FHE).
If the data is encrypted with my key, no one else can access it or do anything else with it. Period - there is nothing more to talk about (assuming that the encryption scheme is secure, of course). No one can extract anything from this data unless they have my private key.
FHE, formally, is simply a scheme with the following property: given ciphertexts of some data x encrypted under my public key, anyone can compute a ciphertext of f(x) for essentially any function f, without learning anything about x or about f(x) - decrypting the result still requires my private key.
FHE allows me to securely use someone else's hardware to run my inference on my data and be confident that I am the only one who knows the result. If the data is on my hardware, and I don't want it to leave my hardware, then FHE is completely useless for me.

What you actually want is something like trusted computing. The government decides what analysis to run, it sends it to my hardware, my hardware runs that analysis on my decrypted data, and sends the result to the government, in such a way that the government can be certain that the algorithm was followed exactly. Of course, you need some assurances even here, such that the government doesn't just ask for the plaintext data itself - there have to be some limits to what they can run.
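A rough, self-contained sketch of that flow, with every name made up for illustration (a real system would replace the stubs with an enclave runtime and remote attestation, the kind of guarantees SGX or TrustZone aim to provide); it just makes the trust relationships explicit:

```python
# Hypothetical sketch of the trusted-computing flow described above.
# The key, the allowlist, and the attestation scheme are all stand-ins.
import hashlib
import hmac

ATTESTATION_KEY = b"device-attestation-key"      # hypothetical hardware-held key

def count_matches(plaintext: bytes) -> int:
    """Stand-in for a narrowly scoped, mandated analysis."""
    return plaintext.count(b"forbidden")

ALLOWLIST = {count_matches}                       # the limits on what may be run

def attest(result: bytes, analysis_name: str) -> bytes:
    """Stand-in for hardware attestation: proves which code produced the result."""
    return hmac.new(ATTESTATION_KEY, analysis_name.encode() + result,
                    hashlib.sha256).digest()

def run_mandated_analysis(analysis, plaintext: bytes):
    # Refuse anything outside the allowlist -- otherwise the "analysis"
    # could simply be "return the plaintext".
    if analysis not in ALLOWLIST:
        raise PermissionError("analysis not permitted")
    # Inside the enclave: the plaintext is available, but only the result
    # and an attestation over it ever leave the device.
    result = str(analysis(plaintext)).encode()
    return result, attest(result, analysis.__name__)

result, proof = run_mandated_analysis(count_matches, b"nothing forbidden here")
print(result, proof.hex()[:16])
```

The hard part is exactly the allowlist step: deciding, and enforcing in hardware, which analyses the other party is allowed to demand.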