I'd really like to see improvements like these:
- Some technical proof that data is never read by OpenAI.
- Proof that no logs of my data or derived data are saved.
etc...
I don't think this is technically possible without something like homomorphic encryption, which imposes too large a runtime cost to be practical for LLM inference.
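To make the idea concrete, here is a minimal toy sketch of Paillier encryption, an *additively* homomorphic scheme: the server can add two encrypted numbers without ever being able to read them. This is only an illustration of the concept (tiny, insecure parameters chosen for the example, and far weaker than the fully homomorphic schemes an LLM would actually require), not anything production-grade:

```python
import random
from math import gcd

# Toy Paillier keypair with deliberately tiny, insecure primes (illustration only).
p, q = 1789, 1861
n = p * q
n2 = n * n
g = n + 1                      # standard simple choice of generator
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse (Python 3.8+)

def encrypt(m):
    # Ciphertext c = g^m * r^n mod n^2 for a random r coprime to n.
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts,
# so a server holding only c1 and c2 can compute an encrypted sum blindly.
c1, c2 = encrypt(12), encrypt(30)
print(decrypt((c1 * c2) % n2))  # -> 42
```

Even this toy scheme only supports addition; evaluating a full transformer under encryption needs fully homomorphic schemes (e.g. the CKKS/BFV family), which currently cost orders of magnitude more compute than plaintext inference, which is the runtime-cost objection above.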
They don't even try to prove it another way.