You still go to prison for not showing it. So until devices have multiple PINs for plausible deniability, we are still screwed.

What’s so hard about making 2-3 PINs, each accessing a different set of logged-in apps and files?

If Apple/Android were serious about it, they would implement it, but from my research it seems someone is against it, because it’s too good.

I don’t want to remove my banking apps when I travel or go to “dangerous” places. If you’re kidnapped, you will be forced to send out all your money.

Absolutely every aspect of it?

What’s so hard about adding a feature that effectively makes a single-user device multi-user? Which needs the ability to have plausible deniability for the existence of those other users? Which means that significant amounts of otherwise usable space need to be inaccessibly set aside for those other users on every device, to retain plausible deniability, despite an insignificant fraction of customers using such a feature?

What could be hard about that?

> despite an insignificant fraction of customers using such a feature?

Isn't that the exact same argument against Lockdown mode? The point isn't that the number of users is small; it's that it can significantly help that small set of users, something Apple clearly does care about.

Lockdown mode costs ~nothing for devices that don't have it enabled. GP is pointing out that the straightforward way to implement this feature would not have that same property.

Lockdown mode doesn’t require everyone else to lose large amounts of usable space on their own devices in order for you to have plausible deniability.

Now I want to know what dirty laundry their upper management is hiding on their devices...

The 'extra users' method may not work in the face of a network investigation or typical file forensics.

Where CAs are concerned, not having the phone image 'cracked' still does not make it safe to use.

Android phones are multi-user, so if they can do it then Apple should be able to.

And how do you explain your 1TB phone that has 2GB of data, but only 700GB free?

[deleted]

The "fake" user/profile should work like a duress PIN with the addition of deniability. As soon as you log in to the second profile, all the space becomes free. Just by logging in, you would delete the encryption key of the other profile. The metadata showing what is free or not was encrypted in the locked profile. Now it's gone.

Good idea, but this is why you image devices.

Sorry I explained it poorly and emphasized the wrong thing.

The way it would work is not active destruction of data, just a different view of the data that doesn’t include any metadata encrypted in the second profile.

Data would get overwritten only if you actually start using the fallback profile and populating the "free" space, because to that profile those data blocks are simply unreserved and look like random data.

The profiles basically overlap on the device. Trying to use them concurrently would be catastrophic, but that is intended: you know not to use the fallback profile, and that knowledge exists only in your head, so it doesn’t get left on the device to be discovered by forensic analysis.

Your main profile knows to avoid overwriting the fallback profile’s data but not the other way around.

But the point is also that you can actually log in to the duress profile and use it normally, and it wouldn’t look like destruction of evidence, which is what GrapheneOS’s current duress PIN does.

The main point is that logging in to the fake profile does not do anything different from logging in to the main profile. Even if you image the whole thing, somehow completely bypass the secure enclave (while still assuming you can’t feasibly brute-force the PIN), enter the duress PIN in a controlled environment, and watch what reads/writes it does and where, you would not be able to tell you are in the fake profile. Nothing gets deleted eagerly; only the act of using a profile is destructive to overlapping profiles. The one thing the main profile does differently is that it knows which blocks belong to the fallback profile and will never allocate anything there. And since it’s possible to set up the device without a fallback profile, you can’t tell whether you’re in the fallback profile or just on a device without one set up.
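If it helps, here is a toy Python sketch of that allocation asymmetry. Everything in it (the Profile class, the block layout) is invented for illustration; it is not any real filesystem or OS API:

```python
# Toy sketch of the overlapping-profiles idea described above. All names
# here are hypothetical; this is not a real filesystem or OS interface.

import secrets

BLOCK_SIZE = 4096
NUM_BLOCKS = 64  # toy device

# A fresh device is filled with random bytes, so a block holding encrypted
# data is indistinguishable from a block that was never written at all.
device = [secrets.token_bytes(BLOCK_SIZE) for _ in range(NUM_BLOCKS)]

class Profile:
    """Each profile's allocation map lives inside its own encrypted metadata."""

    def __init__(self, name, own_blocks=(), reserved_for_other=()):
        self.name = name
        self.allocated = set(own_blocks)
        # The main profile lists the fallback's blocks here; the fallback
        # profile has an empty reservation set and no idea main exists.
        self.reserved = set(reserved_for_other)

    def allocate(self):
        """Hand out the first block that looks free from this profile's view."""
        for i in range(NUM_BLOCKS):
            if i not in self.allocated and i not in self.reserved:
                self.allocated.add(i)
                return i
        raise RuntimeError("device full (from this profile's point of view)")

# The fallback (decoy) profile already owns blocks 0-15.
fallback = Profile("fallback", own_blocks=range(16))
# The main profile knows never to touch those blocks.
main = Profile("main", reserved_for_other=range(16))

print(main.allocate())      # -> 16: main skips past the fallback's blocks
print(fallback.allocate())  # -> 16 as well: the fallback sees main's data
                            #    as free space, so using it is destructive
```

The asymmetry is the whole trick: the main profile's reservation set exists only inside its own encrypted metadata, so nothing the fallback profile can read reveals that the "free" blocks are actually in use.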

Hopefully I explained it clearly. I haven't seen this idea anywhere else so I would be curious if someone smarter actually tried something like that already.

What you say makes sense, just like the TrueCrypt/VeraCrypt hidden-volume theory. I can't find the head post to my "that's why you image" post, but what concerns me is that differing profiles may have different network fingerprints. You may need to keep Signal and BitLocker on both; every time my desktop boots, a cloud provider is contacted. It's not very sanitary.

It's a hard problem to properly set up even on the user end, let alone the developer/engineer side, but thank you.

The same way when you buy a brand new phone with 200GB of storage that only has 50GB free on it haha

System files, officer ;)

"Idunno copper, I'm a journalist not a geek"

That is about one fiftieth of the work that needs to go into the feature the OP casually “why can’t they just”-ed.

This is called whataboutism. This particular feature aside, sometimes there are very good reasons not to throw the kitchen sink of features at users.

TrueCrypt had that a decade+ ago.

Not sure if you know the history behind it, but look up Paul Le Roux

I'd also recommend the book The Mastermind by Evan Ratliff.

imo Paul Le Roux has nothing to do with TrueCrypt

He wrote the code base it's based on, in combination with code he stole. The name is also derived from an early name he chose for the software.

Whether he was involved in the organization and participated in it is certainly up for debate, but it's not like he would admit it.

https://en.wikipedia.org/wiki/E4M

Maybe one PIN could cause the device to crash. Devices crash all the time. Maybe the storage is corrupted. It might have even been damaged when it was taken.

This could even be a developer feature accidentally left enabled.

It doesn't seem fundamentally different from a PC having multiple logins that are accessed from different passwords. Hasn't this been a solved problem for decades?

Apple's hardware business model incentivizes only supporting one user per device.

Android has supported multiple users per device for years now.

You can have a multiuser system but that doesn't solve this particular issue. If they log in to what you claim to be your primary account and see browser history that shows you went to msn.com 3 months ago, they aren't going to believe it's the primary account.

My browser history is cleared every time I close it.

It's actually annoying, because every site wants to "remember" the browser information, and so I end up with hundreds of browsers "logged in". Or maybe my account was hacked and that's why there are hundreds of browsers logged in.

Multi-user has been solved for decades.

Multi-user that plausibly looks like single-user to three letter agencies?

Not even close.

Doesn't having standard multi-user functionality automatically create the plausible deniability? Trying hard to create artificial plausible deniability would be more suspicious than normal functionality that just gets used sometimes.

What needs to be plausibly denied is the existence of a second user account, because you're not going to be able to plausibly deny that the account belongs to you when it resides on the phone found in your pocket.

Android has work profiles, so that could be done in Android. iPhone still does not.

Police ask: give me the passcode for the work profile. If you don’t: prison.

> Android has work profiles

Never ever use your personal phone for work things, and vice versa. It's bad for you and bad for the company you work for in dozens of ways.

Even when I owned my own company, I had separate phones. There's just too much legal liability and chances for things to go wrong when you do that. I'm surprised any company with more than five employees would even allow it.

What's the risk? On Android, the company can remotely nuke the work profile. The work profile has its own file system and apps. You can turn it off when you don't want work notifications.

You're surprised corporations are cheap?

iPhone and macOS are basically the same product technically. The reason iPhone is a single user product is UX decisions and business/product philosophy, not technical reasons.

While plausible deniability may be hard to develop, it’s not some particularly arcane thing. The primary reason against it is the political balancing act Apple has to perform (remember San Bernardino and the trouble the US government tried to create for Apple?). Secondary reasons are cost to develop vs. addressable market, but they did introduce Lockdown mode, so it’s not unprecedented for them to improve security for those particularly sensitive to such issues.

> iPhone and macOS are basically the same product technically

This seems hard to justify. They share a lot of code yes, but many many things are different (meaningfully so, from the perspective of both app developers and users)

You think iPhones aren’t multi-user for technical reasons? You sure it’s not to sell more phones and iPads? Should we ask Tim “buy your mom an iPhone” Cook?

> You still go to prison for not showing it. So until devices have multiple PINs for plausible deniability, we are still screwed.

> What’s so hard about making 2-3 PINs, each accessing a different set of logged-in apps and files?

Besides the technical challenges, I think there's a pretty killer human challenge: it's going to be really hard for the user to create an alternate account that looks real to someone who's paying attention. Sure, you can probably fool some bored agent in customs line who knows nothing about you, but not a trained investigator who's focused on you and knows a lot about you.

But at that point it turns from "the person refused to unlock the device" to "we think the person has unlocked the device into a fake account".

That’s what plausible deniability is. How can you even tell?

Doesn’t matter if the agent believes you. Only matters if the court jails you on a contempt charge.

Background agent in the decoy identity that periodically browses the web, retrieves email from a banal account etc.?

Even more complications for a “why can’t they just…”. It’s almost as if this kind of thing is difficult to do in practice.

> Background agent in the decoy identity that periodically browses the web, retrieves email from a banal account etc.?

No. Think about it for a second: you're a journalist being investigated to find your sources, and your phone says you mainly check sports scores and send innocuous emails to "grandma" in LLM-speak? It's not going to fool someone who's actually thinking.

Just use an account for “regular” stuff. And only use the “secret” account as needed.

It's more a policy problem than a phone problem. Apple could add as many PINs as they want, but until there are proper legal privacy protections, law enforcement will still just be like "well, how do we know you don't have a secret PIN that unlocks 40TB of illegal content? Better disappear you just to be sure".

For as long as law enforcement treats protection of privacy as implicit guilt, the best a phone can really do is lock down and hope for the best.

Even if there were a phone that perfectly protected your privacy and was impossible to crack, or was easy to spoof content on, law enforcement would just move the goalposts of guilt so that owning the phone itself is incriminating.

Edit: I wanna be clear that I'm not saying any phone based privacy protections are a waste of time. They're important. I'm saying that there is no perfect solution with the existing policy being enforced, which is "guilty until proven dead"

Hannah Natanson is not in prison though.

How does "go to prison for not showing" work when a lot of constitutions have a clause for a suspect not needing to participate in their own conviction / right to remain silent?

A detective can have a warrant to search someone's home or car, but that doesn't mean the owner needs to give them the key as far as I know.

It does mean that. You can't be forced to divulge information in your head, as that would be testimonial. But if there are papers, records, or other evidentiary materials that are, e.g., locked in a safe, you can be compelled to open it under a warrant, and refusal would be contempt.

They need to prove that those materials exist on the device first. You can't be held in contempt for a fishing expedition.

You need "probable cause to believe" which is not as strong as "prove" but yes, it can't be a pure fishing expedition.

FaceID and TouchID aren’t protected by that as I understand it.

That's correct, they are not. A complete failing of legislation and blatant disregard of the spirit of the 5th Amendment.

So do not have biometrics as device unlock if you are a journalist protecting sources.

They are considered to be more like keys to a safe than private knowledge. They also can't be changed if compromised. A sufficiently unguessable PIN or passphrase is better than biometrics.

I know it seems like an incredibly dubious claim but the "I forgot" defense actually works here.

It's not really that useful for a safe since they aren't _that_ difficult to open and, if you haven't committed a crime, it's probably better to open your safe for them than have them destroy it so you need a new one. For a mathematically impossible to break cipher though, very useful.

Assuming the rule of law is still functioning, there are multiple protections for journalists who refuse to divulge passwords in the USA. A journalist can challenge any such order in court and usually won't be detained during the process as long as they show up in court when required and haven't tried to destroy evidence.

Deceiving investigators by using an alternate password, or destroying evidence by using a duress code on the other hand is almost always a felony. It's a very bad idea for a journalist to do that, as long as the rule of law is intact.

I think it's pretty clear at this point that rule of law isn't functioning. Perhaps it never was. It was just rule of law theater.

They are willing to kill people and then justify it by calling them terrorists. Plausible deniability is pointless.

Uh, that escalated quickly.

Actually it's been escalating pretty steadily for 250 years

Fourth and Fifth amendments disagree

Sure, but in the real world it can take months or years; Francis Rawls spent 4 years in jail because he didn't want to unlock hard drives.

I don't think we're doing amendments any more

And if we are it will be a new one with a high number and it will be pure insanity

People are jailed for contempt of court for failing to provide passwords.

https://reason.com/2017/05/31/florida-man-jailed-180-days-fo...

Wow, so US judges are just making it up as they go along, huh? It's like every case is a different judgement with no consistent criterion.

> Doe vs. U.S. That case centered around whether the feds could force a suspect to sign consent forms permitting foreign banks to produce any account records that he may have. In Doe, the justices ruled that the government did have that power, since the forms did not require the defendant to confirm or deny the presence of the records.

Well, what if the defendant was innocent of that charge but guilty of or involved in an unrelated matter for which there was evidence in the account records?

There is no plausible deniability here; that's only relevant in a rule-of-law type of situation, but then you wouldn't need it, as you can't be legally compelled to do that anyway. "We don't see any secret source communication on your work device = you entered the wrong PIN = go think about your behavior in jail."

Even if this worked (which would be massively expensive to implement) the misconfiguration possibilities are endless. It wouldn't be customer-centric to actually release this capability.

Better for the foreseeable future to have separate devices and separate accounts (i.e. not in the same iCloud family for instance)

“Plausible deniability” is a public relations concept. It doesn’t confer any actual legal protection.

It absolutely offers some legal protection. If it is implemented correctly, no legal framework for it is required. The government forces you to enter your password. You comply and enter "a" password. The device shows contents. You did what you were asked to do. If there is no way for the government to prove that you entered a decoy password that shows decoy contents, you are in the clear. Done correctly (in the device and in OPSEC), the government can't prove you entered your decoy password, so you can't be held in contempt. And that is the entire point. It is not about asking the government to grant you "plausible deniability" rights. It is about not incriminating yourself against people who abuse the system to force you to incriminate yourself.
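To make that concrete, here is a toy sketch of a two-password scheme; TrueCrypt/VeraCrypt-style hidden volumes work along these lines. Everything here (derive_key, try_unlock, the header layout) is invented for illustration, and the XOR "cipher" is a stand-in, not anything secure:

```python
# Toy decoy-password scheme: two headers on disk, each decryptable under
# exactly one password, both indistinguishable from random bytes otherwise.

import hashlib
import secrets

def derive_key(password: str, salt: bytes) -> bytes:
    # Memory-hard KDF so the password can't be brute-forced cheaply.
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Hash-counter keystream; symmetric, so it both encrypts and decrypts.
    # Illustration only -- NOT a secure cipher.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

salt = secrets.token_bytes(16)
real_key = derive_key("real password", salt)
decoy_key = derive_key("decoy password", salt)

# Each header decrypts to a recognizable magic value under its own key;
# under any other key it is indistinguishable from noise.
real_header = xor_stream(real_key, b"REAL" + b"\x00" * 28)
decoy_header = xor_stream(decoy_key, b"DECOY" + b"\x00" * 27)

def try_unlock(password: str):
    key = derive_key(password, salt)
    for header in (real_header, decoy_header):
        plain = xor_stream(key, header)
        if plain.startswith((b"REAL", b"DECOY")):
            return plain.rstrip(b"\x00")
    return None  # wrong password: neither header yields a valid magic

print(try_unlock("decoy password"))  # b'DECOY' -- plausible contents
print(try_unlock("real password"))   # b'REAL'
```

The point of the construction is that nothing on disk proves a second header exists: without the right password, both slots look like random bytes, so handing over the decoy password satisfies the demand while revealing nothing.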

> You comply and enter "a" password. The device shows contents. You did what you were asked to do.

No, you did something fake to avoid doing what you were asked to do.

> If there is no way for the government to prove that you entered a decoy password that shows decoy contents, you are in the clear.

But there are very effective ways to find hidden encrypted volumes on devices. And then you’ll be asked to decrypt those too, and then what?

This sort of thing is already table stakes for CSAM prosecutions, for example. Law enforcement can read the same blog posts and know as much about technology as you do. Especially if we are hypothesizing an advertised feature of a commercial OS!

> No, you did something fake to avoid doing what you were asked to do.

Yes, that is what plausible deniability is.

> But there are very effective ways to find hidden encrypted volumes on devices. And then you’ll be asked to decrypt those too, and then what?

I emphasized "done right". If the existence of hidden encryption can be proven, then you don't have plausible deniability; something has gone wrong.

My point was: OP claimed plausible deniability does not apply in legal cases, which is a weird take. If you can have plausible deniability, then it can save you legally. This does not only apply to tech, of course, but encryption was the subject here. In all cases, though, if your situation is not "plausible" (due to broken tech, backdoors, poor OPSEC, and/or damning evidence in other respects) then you don't have plausible deniability by definition.

Having ways of definitively detecting hidden encrypted volumes might be the norm today and impossible tomorrow. Then you will have plausible deniability, and it will work legally as far as that piece of "evidence" is concerned.

[deleted]

Yep, you need an emergency mode that completely resets the phone to factory settings, maybe triggered with a decoy PIN. Or a mode that physically destroys the chip storing the keys.

I always wondered if this was the feature of TrueCrypt that made it such a big target. LUKS is fine, I guess, but TrueCrypt felt like actual secrecy.

You do not. We have this thing in our Constitution called the 5th Amendment. You cannot be forced to divulge the contents of your mind, including your PIN or passwords. Case law supports this. For US citizens, at least. Hopefully the Constitution is still worth something.

https://www.bleepingcomputer.com/news/legal/man-who-refused-...

It took 4 years. What is your point?

That you don't want to be in jail for 4 years for not providing the key?

I personally don't want to be saying "oh, but my liberty" in a jail cell. Whatever floats your boat, though.

It is salient to point out that the individual almost certainly had incriminating evidence on that system. There aren't really any cases of a person being /randomly/ detained and held in contempt, especially not in a fishing expedition by the DOJ against a journalist. And if you know the act of giving away the password trades some upper limit on contempt jailing for assured evidence against you in some much-higher-consequence felony, wouldn't you just stay quiet? No amendment or right is absolute, but this one is rather strong, especially if you haven't, you know, been doing any crimes.

That's the fantasy world of constitutional maximalists. In the real world it doesn't work like that, and you might still lose money/time/your sanity fighting a system that cares less and less about your rights.

The case law on this specific topic is convincing. If you are ever in that situation, it is usually going to be worth your time and money to assert the right and see it through. The general maximum "penalty" is being held in contempt of court. And if the government is wrongly persecuting you, it is lose/lose if you divulge.

Do you think this is for fighting parking tickets? It is for journalists not revealing their sources, who might be at risk of severe consequences, including death.

That's a whole lot more to lose than your money and time.

That's not what we're discussing here: you can't just say "I plead the Fifth" and walk away if the people in charge have decided you won't walk away, no matter what's right or "legal".

Francis Rawls stayed 4 years in jail despite pleading the fifth all day long

That case also established 18 months as an upper limit. If you are in that situation, it is usually better to simply not divulge, especially if there is incriminating evidence, or if you are a journalist being harassed by the DOJ. Divulging can only bring you more pain. They will always find something.

Yeah, well, that's what I'm saying... "just plead the Fifth" is nice on paper; in practice you're going to suffer for a long time.

> You cannot be forced to divulge the contents of your mind, including your pin or passwords.

Biometric data doesn’t need the password.

And good luck depending on the US constitution.

You're forgetting about the Constitution-Free Zone within 100 miles of all points of entry including international airports that covers essentially all of the 48.

This is a misunderstanding. That's the area in which the Border Patrol has jurisdiction to conduct very limited searches of vehicles and operate checkpoints without individualized suspicion in order to enforce immigration law. It does not allow searches of electronic devices.

There is a separate border search exception at the point a person actually enters the country which does allow searches of electronic devices. US citizens entering the country may refuse to provide access without consequences beyond seizure of the device; non-citizens could face adverse immigration actions.

To be clear, I do think all detentions and searches without individualized suspicion should be considered violations of the 4th amendment, but the phrase "constitution-free zone" is so broad as to be misleading.

With ICE on the prowl, I’d have thought ‘Constitution Free Zone’ a fitting description of how they operate.

I am not. You can still assert your rights at border points. It is very inconvenient. I have done it. If you are returning from international travel there is little they can do. If you are trying to leave the country they can make that difficult to impossible. Otherwise your rights still apply.

Completely separate decision with a higher legal bar for doing that.

It's one thing to allow police to search a phone. Another to compel someone to unlock the device.

We live in a world of grays and nuance and an "all or nothing" outlook on security discourages people from taking meaningful steps to protect themselves.

Why are you on a website for programmers and software developers if you aren't a software developer and know nothing of the subject?

[deleted]

> What’s so hard about making 2-3 PINs, each accessing a different set of logged-in apps and files?

I've been advocating for this under-duress-PIN feature for years, as evidenced by this HN comment I made about 9 years ago: https://news.ycombinator.com/item?id=13631653

Maybe someday.