Everything about this in my head screams "bad idea".
If you need to trust the encryption and trust the hardware itself, it may not be suitable for your environment/threat model.
It is a bad idea, but not in the way you think. FHE hardware doesn't decrypt data on-chip. It's like using the Diffie-Hellman key exchange for general computation. The data and operations stay encrypted at any given moment while outside your client device.
The textbook example application of FHE is phone book search. The server "multiplies" the whole phonebook database file with your encrypted query, and sends back the whole database file to you every time, regardless of the query. When you decrypt the file with the key used to encrypt the query, the database is all corrupt and garbled except for the rows matching the query, so the search has effectively taken place. The only information that exists in the clear is the query's existence and the size of the entire database.
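To make the shape of that concrete, here's a toy sketch of an encrypted lookup using a Paillier-style additively homomorphic scheme (a simplified variant where the server folds the selection into one response ciphertext rather than returning the whole garbled file). The primes are tiny and insecure, and real FHE/PIR systems use lattice-based schemes, so treat this purely as an illustration of "server computes on ciphertexts it cannot read":

```python
import math
import random

# Toy Paillier setup -- insecure tiny primes, illustration only.
p, q = 1009, 1013
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1  # standard choice of generator

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def enc(m):
    """Encrypt m under the public key (n, g); only the client can decrypt."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Client: encrypt a one-hot query selecting row 2. The server sees only
# ciphertexts; every entry looks like random noise to it.
db = [111, 222, 333, 444]
query = [enc(1 if i == 2 else 0) for i in range(len(db))]

# Server: c_i ** db_i multiplies the hidden plaintext by db_i; the product
# of ciphertexts sums the hidden plaintexts. Result: Enc(sum(q_i * db_i)).
resp = 1
for c, row in zip(query, db):
    resp = (resp * pow(c, row, n2)) % n2

# Client: decrypt the response -- only the selected row survives.
assert dec(resp) == 333
```

Note the server touched every row to answer a one-row query, which is exactly the efficiency complaint the comment above is making.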
Sounds fantastically energy-efficient, no? That's the problem with FHE, not the risk of backdooring.
In FHE the hardware running it doesn't know the secrets. That's the point.
First you encrypt the data. Then you send it to hardware to compute, get result back and decrypt it.
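That encrypt → compute → decrypt loop can be sketched with a toy additively homomorphic scheme (Paillier-style, tiny insecure primes, purely illustrative). The "server" only ever sees and multiplies ciphertexts; multiplying two ciphertexts adds the hidden plaintexts:

```python
import math
import random

# Toy Paillier parameters -- insecure tiny primes, illustration only.
p, q = 1009, 1013
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
# With g = n + 1, L(g^lam mod n^2) = lam mod n, so mu = lam^-1 mod n.
mu = pow(lam, -1, n)

def enc(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Client encrypts its inputs and ships only ciphertexts.
a, b = enc(20), enc(22)

# "Server" computes on ciphertexts: ciphertext product = plaintext sum.
total = (a * b) % n2

# Client decrypts the result; the server never held the secret key.
assert dec(total) == 42
```

Paillier only gives you addition under encryption; fully homomorphic schemes support both addition and multiplication, which is what makes general computation (and the hardware in the article) possible.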
But you leak all kinds of information, and retrieval either leaks even more data, or you end up transferring god knows how much data, or your encryption is trivially broken, or you spend days/months/years decrypting.
I don't know how you got these ideas but when you crack it, do make sure to write a post about it. Can't wait for that writeup.
LWE estimator isn't a proxy for this?
Math literacy needs to become standard for computer scientists. These takes are so bad.
Or reading papers on the subjects, and playing with implementing FHE search.
>If you need to trust the encryption and trust the hardware itself, it may not be suitable for your environment/ threat model.
Are we reading the same article? It's talking about homomorphic encryption, i.e. doing mathematical operations on already-encrypted data, without being aware of its cleartext contents. It's not related to SGX or other trusted computing technologies.
In theory you only need to trust the hardware to be correct, since it doesn't have the decryption key the worst it can do is give you a wrong answer. In theory.
But can you trust the hardware encryption to not be backdoored, by design?
That's my point, this sounds like a way to create a backdoor for at-rest data.
By design, you don't trust it. You never hand out the keys, so there's no secret to backdoor. The data is never unencrypted, at rest or otherwise.
You can if the manufacturer has a track record that refutes the notion, and especially if they have verifiable hardware matching publicly disclosed circuit designs. But this is Intel; given their track record, I wouldn't trust it even if the schematics were public. Intel ME not being disable-able by consumers, while being entirely omitted for certain classes of government buyers, tells me everything I need to know.
> That's my point, this sounds like a way to create a backdoor for at-rest data.
Honestly, I get the feeling it would be more expensive and more effort to backdoor it.
Well yeah... You do the initial encryption yourself, by whatever means you trust.