Don't be idiots. The FBI may say they can't get in regardless of whether they actually can:

1. If they admit they can get in, people - including high-value targets like journalists - will stop using the security they now know is bad.

2. If the FBI (or another agency) has an unknown capability, the FBI must either say they can't get in or reveal that capability to every adversary, including even higher-profile targets such as counter-intelligence targets. Saying nothing also risks revealing the capability.

3. Similarly, if Apple helped them, Apple might insist that the help not be revealed. The same applies to any third party with the capability. (Also, less significantly, saying they can't get in puts more pressure on Apple and strengthens the case for creating backdoors, even if HN readers will see it the other way.)

Also, the target might think they are safe, which could be a tactical advantage. Covertly recovered data may also fall outside the rules for handling evidence, even if it's unusable in court. And at best "can't get in" only means they haven't gotten in yet - an exploit for this OS version may surface someday, and the FBI can try again then.

I would not recommend trusting a secure enclave with full disk encryption (FDE). That is what you are doing whenever your password/PIN/fingerprint doesn't contain enough entropy to derive a secure encryption key on its own.

The problem with low-entropy security measures is that the low-entropy secret is only used to instruct the secure enclave (TEE) to release or use the actual high-entropy key. That key must therefore be stored physically (e.g. as voltage levels) somewhere in the device.
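
To put rough numbers on the entropy gap (plain Python; assuming a 6-digit PIN and an AES-256 volume key):

```python
import math

# A 6-digit numeric PIN is one of 10^6 equally likely values.
pin_entropy_bits = math.log2(10 ** 6)   # ~19.9 bits

# The actual disk-encryption key (e.g. AES-256) needs 256 bits.
key_entropy_bits = 256

print(f"PIN entropy: {pin_entropy_bits:.1f} bits")
print(f"Shortfall:   {key_entropy_bits - pin_entropy_bits:.1f} bits")
# The PIN can never *be* the key; it can only authorize the enclave
# to use a 256-bit key stored physically on the device.
```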

It's a similar story when the device is locked: on most computers the RAM isn't even encrypted, so a locked computer is no major obstacle to an adversary. On devices where RAM is encrypted, the encryption key is likewise stored somewhere - if only while the device is powered on.

RAM encryption doesn't prevent DMA attacks, and performing a DMA attack is fairly trivial as long as the machine is running. Secure enclaves do prevent those, and they're a good solution; implemented correctly, they have no downsides. I'm not referring to TPMs, given their inherent flaws; I'm talking about SoC crypto engines like those found in Apple's M series or Intel's latest Panther Lake lineup. They prevent DMA attacks and side-channel vulnerabilities. True, I wouldn't trust any secure enclave never to be breached - that's an impossible promise to make, even though a breach would require a nation-state-level attack - but even this concern can easily be addressed by making the final encryption key depend on both software key derivation and the secret stored within the enclave.
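
If it helps, here's a minimal sketch of that two-factor construction (Python with the `cryptography` package; the enclave secret is faked as a constant here, since by definition the real one never leaves the hardware):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Placeholder only: on real hardware this secret never leaves the enclave.
ENCLAVE_SECRET = bytes.fromhex("aa" * 32)  # hypothetical 256-bit device secret

def derive_volume_key(password: bytes, salt: bytes) -> bytes:
    # 1. Software side: stretch the user password with a memory-hard KDF.
    stretched = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(password)
    # 2. Mix in the enclave-held secret, so that neither extracting the
    #    enclave secret nor guessing the password alone yields the key.
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"volume-key-v1",  # hypothetical domain-separation label
    ).derive(stretched + ENCLAVE_SECRET)

key = derive_volume_key(b"correct horse battery staple", os.urandom(16))
```

With this shape, an attacker needs both a physical extraction from the enclave and an offline brute-force of a high-entropy password.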

I recommend reading the AES-XTS spec, in particular the “tweak”. Or for AES-GCM look at how IV works.
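
Concretely, the tweak is what makes the same plaintext encrypt differently at different disk sectors. A quick sketch (assuming Python with the `cryptography` package):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)  # AES-256-XTS takes a double-length (512-bit) key
sector_data = b"identical sector payload".ljust(32, b"\0")

def encrypt_sector(sector_number: int, data: bytes) -> bytes:
    # The XTS "tweak" is derived from the sector number.
    tweak = sector_number.to_bytes(16, "little")
    enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return enc.update(data) + enc.finalize()

# Same key, same plaintext, different sectors -> different ciphertext,
# so repeated content doesn't show up as repeated blocks on disk.
assert encrypt_sector(0, sector_data) != encrypt_sector(1, sector_data)
```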

I also recommend looking up PUFs (physically unclonable functions) and how modern systems use them in conjunction with user-provided secrets to derive keys - a password or fingerprint is just one of many inputs into a KDF that produces the final keys.

The high-level idea is that the key that's being used for encryption is derived from a very well randomized and protected device-unique secret set up at manufacturing time. Your password/fingerprint/whatever are just adding a little extra entropy to that already cryptographically sound seed.

Tl;dr: this is a well-solved problem in modern security designs.

> I recommend reading the AES-XTS spec, in particular the “tweak”. Or for AES-GCM look at how IV works.

What does this have to do with anything? Tweakable block ciphers - or XTS, which converts a block cipher into a tweakable one - operate with an already-actualized key; the entropy has long since been turned into a key.

> Your password/fingerprint/whatever are just adding a little extra entropy to that already cryptographically sound seed.

Correct. The "cryptographically sound seed", however, is stored inside the secure enclave, available to anyone with the capability to extract it. Which is the issue I referenced.

And if what you add to the KDF is just a minuscule amount of entropy, you may as well have added nothing at all - they perform the addition for the subset of users who actually use high-entropy passwords, and because it can't hurt. I don't think anyone adds fingerprint entropy, though.
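
For scale: a 4-digit PIN adds only ~13 bits, which is nothing once the seed is out. A toy illustration (plain Python; SHA-256 stands in for a real KDF, and the "extracted" seed is hypothetical):

```python
import hashlib

# Premise: the attacker has already extracted the device seed.
extracted_seed = b"\xaa" * 32  # hypothetical enclave secret

# The victim's real key was derived from seed + a 4-digit PIN (here: 4821).
target_key = hashlib.sha256(extracted_seed + b"4821").digest()

# 10,000 candidates: milliseconds with a fast hash, and still only
# ~17 minutes even with a slow KDF costing 100 ms per guess.
for pin in range(10_000):
    guess = hashlib.sha256(extracted_seed + str(pin).zfill(4).encode()).digest()
    if guess == target_key:
        print(f"PIN recovered: {pin:04d}")
        break
```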

> The "cryptographically sound seed", however, is stored inside the secure enclave, available to anyone with the capability to extract it.

Sorry, I'm not sure I follow here. Is anyone believed to have the capability to extract keys from the SE?

The secure enclave (or any root of trust) does not allow direct access to keys; it keeps the keys locked away internally and uses them at your request to perform crypto operations. You never get direct access to the keys. The keys in use are protected by taking IVs, tweaks, or similar as inputs during cryptographic operations, so the root keys cannot be derived from the ciphertext even if an attacker controls the plaintext and has access to both the plaintext and the ciphertext.
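
To make that interface shape concrete, a toy model (pure Python with the `cryptography` package; a real enclave enforces this boundary in hardware, and the class name and methods here are invented for illustration):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class ToyEnclave:
    """Conceptual sketch: keys live inside; callers only get opaque handles."""

    def __init__(self):
        self._keys: dict[int, bytes] = {}

    def generate_key(self) -> int:
        handle = len(self._keys)
        self._keys[handle] = AESGCM.generate_key(bit_length=256)
        return handle  # the caller never sees the key bytes

    def seal(self, handle: int, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)  # fresh IV per operation, as described above
        return nonce + AESGCM(self._keys[handle]).encrypt(nonce, plaintext, None)

    def unseal(self, handle: int, blob: bytes) -> bytes:
        return AESGCM(self._keys[handle]).decrypt(blob[:12], blob[12:], None)

enclave = ToyEnclave()
h = enclave.generate_key()
sealed = enclave.seal(h, b"disk sector contents")
assert enclave.unseal(h, sealed) == b"disk sector contents"
```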

Is your concern that the secure enclave in an iPhone is defeatable, and in such a way as to allow key extraction of the device-unique seeds it protects?

Do you have any literature or references where this is known to have occurred?

Tone is sometimes hard in text, so I want to be clear: I'm legit asking this, not trying to argue. If there are any known attacks against Apple's SE that allow key extraction, I'd love to read up on them.

> Is your concern that the secure enclave in an iPhone is defeatable, and in such a way as to allow key extraction of the device-unique seeds it protects?

This is a safe assumption to make, as the secret bits sit in a static location known to anyone with the design documents. Actually getting to them may of course be very challenging.

> Do you have any literature or references where this is known to have occurred?

I'm not aware of any, which isn't surprising given the enormous resources Apple has spent on this technology. Random researchers aren't very likely to succeed.
