There's no such thing as "legitimacy of the bootloader, OS" that can be verified by someone who isn't the device's user. The bootloader that booted the phone I type this on is patched by me, which makes it more "legitimate" than any other bootloader that could be placed there.
The reason (or, depending on your inclinations, the excuse) for trusted computing to exist is not to guarantee that I didn’t patch the bootloader of the phone on which I type my comment; it’s to guarantee I didn’t patch the bootloader of the phone on which your grandma logs in to her bank without her knowledge.
No, the reason is to let application providers decide which platforms you can run their software on. The reasons why they need that are diverse: DRM, preventing reverse engineering, shifting liability, "cheating" prevention - to name a few, but ultimately they're all about asserting control over the user, just motivated differently in various use cases. "Think of the grandmas".
What's the problem with the current status quo, or the status quo 5 or 10 years ago? 20 years ago there was basically no cheat prevention, but nobody cared. We just didn't play with cheaters. There are still cheaters in all games. No matter what kind of DRM streaming platforms use, their movies show up on torrents immediately. The only difference compared to 5-20 years ago is that the user experience is worse. I need to install a lot of intrusive bullshit, and I cannot watch movies at proper resolution. For literally nothing.
It's not just that "user experience is worse", it's an existential threat to Free Software.
In the past, when you had a proprietary tool you needed to use to do something, people could analyze and reimplement it. The reasons to do that varied - someone needed "muh freedomz", someone else wanted to do the thing on an unsupported platform, someone else wanted to change something in the way the tool worked (perhaps annoyed by paper jams)... Eventually you could end up with an interoperable FLOSS reimplementation. This has happened with lots of various things - IMs, network service clients, appliance drivers, even operating systems, and this is how people like me could switch away from Windows and have their computers (and later phones) remain fully functional in the society around us, perhaps with minor annoyances, but without real showstoppers.
Remote attestation changes this dynamic drastically. Gaim (later Pidgin) or Kadu couldn't have been made if service providers like AIM, ICQ, Gadu-Gadu etc. could determine whether you're using the Official App™ from the Official Store™ on the Official OS™ and just refuse to handle requests from your reimplementation. They could still try to be hostile to you without it, and often did, but it wasn't an uneven fight. Currently we're still in the early days and you can still get by in society by defaulting to services on the Web, using a plastic card instead of a phone for payments, etc., but this is already changing. And it's not just a matter of networked services either - I bet we're going to see peripheral devices refusing to be driven by non-attested implementations too.
Secure boot chains have some value and are worth having, but not when they don't let the user be in charge (or let the user delegate that to someone else) and when they prioritize the security of "apps" rather than users. The ability for us as users to lie to the apps is actually essential to preserving our agency. Without that we're screwed, as now to connect ourselves to the fabric of the society we'll need to find and exploit vulnerabilities that are going to be patched as soon as they become public.
> The ability for us as users to lie to the apps is actually essential to preserving our agency. Without that we're screwed, as now to connect ourselves to the fabric of the society we'll need to find and exploit vulnerabilities that are going to be patched as soon as they become public.
The same freedom is being abused by malicious actors. Even on Windows (like BlackLotus), but also on pre-infected phones emptying people's bank accounts. This is an incredibly unfortunate outcome, but what's the solution?
I see no other potential outcome than that free computing and trusted computing are going to be totally separate. Possibly even on the same device, but not in a way that lets anyone tamper with it.
A lot of other freedoms are being abused and always have been, but somehow we don't go and ban kitchen knives, as having them around is valuable. This is a false dichotomy. Systems can be secure and trusted by the user without having to cede control, and some risks are just not worth eliminating.
Most importantly - it's the user who needs to know whether their system has been tampered with, not apps.
> somehow we don't go and ban kitchen knives
False analogy. You can’t have your kitchen knife exploited by a hacker team in North Korea, who shotgun attacks half of the public Internet infrastructure and uses the proceeds to fund the national nuclear program, can you? (I somewhat exaggerate, but you get the idea.)
> Systems can be secure and trusted by the user without having to cede control
In an ideal world where users have infinite information and infinite capability to process and internalize it to become an infosec expert, sure. I don’t know about you, but most of us don’t live in that world.
I agree it’s not perfect. Having to use liquid glass and being unable to install custom watch faces is ridiculous. There’s probably an opportunity for a hardened OS which can be trusted by interested parties to not be maliciously altered, and also not force so many constraints onto users like current walled gardens do. But a fully open OS, plus an ordinary user who has no time or willingness to casually become a tptacek on the side, in addition to completely unrelated full-time job that’s getting more competitive due to LLMs and whatnot, seems more like a disaster than utopia.
> You can’t have your kitchen knife exploited by a hacker team in North Korea, who shotgun attacks half of the public Internet infrastructure and uses the proceeds to fund the national nuclear program, can you? (I somewhat exaggerate, but you get the idea.)
Isn’t the status quo that you need to intentionally choose to allow this?
Yes (well, kinda - attested systems can be and are vulnerable too), and remote attestation is completely orthogonal to that threat anyway. Securing the boot chain does not involve letting apps verify the environment they run in, it's an extra (anti-)feature that's built on top of secure boot chains.
It's also really incredible how people can see "user being in control" and just immediately jump to "user having to be an infosec expert", as if one implied the other. You can't really discuss things in good faith in such a climate :(
Bootloader patching is just what you chose to use in your original false analogy. Letting apps verify the environment they run in is just as critical for the purposes of guaranteeing the digital identity. It’s all pieces of the puzzle.
It's not. I can guarantee my identity by e.g. scanning my ID card on a system with absolutely no secure boot chain. I can also guarantee a secure boot chain with my patched bootloader. Neither of these things require apps to verify the environment they run in.
> I can guarantee my identity by e.g. scanning my ID card on a system with absolutely no secure boot chain.
Your ID card is on your phone. Go ahead, guarantee you’re not using a duplicate of someone else’s ID card, that no one could duplicate your card, with a mainstream widely available consumer phone.
> I can also guarantee a secure boot chain with my patched bootloader.
Go ahead, show how your grandma automatically guarantees to interested parties that I or whoever else didn’t patch her bootloader to run a backdoored OS, while using a mainstream widely available consumer phone.
> Neither of these things require apps to verify the environment they run in.
Demonstrate a mainstream, widely available consumer phone that does these things without requiring apps to verify the environment they run in.
We can continue this infinitely, but if you keep making sweeping contrarian statements without contributing the proof required then it’s just not worth it.
> Your ID card is on your phone.
No, it's not. It lies on the desk next to me right now. I can communicate with it over NFC and I can't duplicate it. There's a debit card next to it and the same applies there - though the debit card can also be read with a smartcard reader, which can't be done with my ID.
> guarantees to interested parties
The only interested party is my grandma, and she'll come to me to help her because her phone will stop working when the boot chain gets compromised (as it should).
> Demonstrate a mainstream, widely available consumer phone that does these things without requiring apps to verify the environment they run in.
Pretty much all of them today? Letting apps verify the environment is an extra feature built on top of secure boot chains, not the other way around. We're only having this discussion because having secure boot chains enables app attestation to work in the first place, and letting the user patch things is just a matter of key management policies. If you think these are "sweeping contrarian statements", you may want to spend some time learning how these things work.
This is not a technical problem, technical aspects have been already solved a long time ago. This is a social/political problem of who holds power over whom.
On iOS, the worst you can do is not update your OS and thus be vulnerable to exploits. There is no setting that a casual user could be socially engineered into enabling that would allow the OS to be patched.
> but somehow we don't go and ban kitchen knives, as having them around is valuable
Some countries do :) Though I think physical analogies are misleading in a lot of ways here.
> Systems can be secure and trusted by the user without having to cede control, and some risks are just not worth eliminating.
Secure, yes, trustworthy to a random developer looking at your device, no. They're entirely separate concepts.
> Most importantly - it's the user who needs to know whether their system has been tampered with, not apps.
Expecting users to know things does a lot of heavy lifting here.
I never mentioned users having to know things (what you quoted was about the user getting informed whether their system is compromised, which is the job of a secure boot chain). The user being in control means that the user can decide who to trust. The user may end up choosing Google, Apple, Microsoft etc. and it's fine as long as they have a choice. Most users won't even be bothered to choose and that's fine too, but with remote attestation, it's not the user who decides even if they want to. And we don't need random developers looking at our devices to consider them trustworthy, it's none of their business and it's a big mistake to let them.
> what you quoted was about the user getting informed whether their system is compromised, which is the job of a secure boot chain
The user being informed means they have to know what a compromised system would entail. That alone is a huge and frankly impossible thing to expect from regular people.
> Most users won't even be bothered to choose and that's fine too, but with remote attestation, it's not the user who decides even if they want to.
> And we don't need random developers looking at our devices to consider them trustworthy, it's none of their business and it's a big mistake to let them.
Then you can't demand those developers trust your device.
> That alone is a huge and frankly impossible thing to expect from regular people.
The systems used by regular people could just refuse to boot further when detecting a compromise, so I'm not sure where this comes from. We have prior art for that too. This is still orthogonal to letting users who want to patch things patch them, and not letting the apps verify what environment they run in. It's all compatible with each other, and with both regular and power users.
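To make that concrete, here's a toy sketch of the distinction (all stage names, image contents and hashes are made up): each boot stage is checked against hashes the device's owner enrolled, and on mismatch the device simply refuses to continue - at no point is anything reported to apps or remote services.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical boot stage images; real ones would be firmware binaries.
boot_images = {
    "bootloader": b"bootloader-v1",
    "kernel": b"kernel-v1",
    "os": b"os-image-v1",
}

# Hashes enrolled as trusted by the device's owner (or by a vendor
# the owner chose to delegate to).
trusted_hashes = {name: sha256_hex(img) for name, img in boot_images.items()}

def verified_boot(images: dict, trusted: dict) -> bool:
    """Check each stage in order; halt on the first mismatch.

    The result stays on the device - nothing here attests anything
    to apps or remote parties.
    """
    for stage in ("bootloader", "kernel", "os"):
        if sha256_hex(images[stage]) != trusted[stage]:
            print(f"halt: {stage} hash mismatch, refusing to boot further")
            return False
    return True

assert verified_boot(boot_images, trusted_hashes)
tampered = dict(boot_images, kernel=b"kernel-v1-BACKDOORED")
assert not verified_boot(tampered, trusted_hashes)
```

Remote attestation would be an extra layer on top of this, sending a signed copy of those measurements off the device - which is exactly the part that's separable.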
> Then you can't demand those developers trust your device.
Somehow we could for decades. Whether we'll still be able to in the future depends only on how much noise and friction we'll make about it now.
> This is still orthogonal to letting users who want to patch things patch them, and not letting the apps verify what environment they run in. It's all compatible with each other, and with both regular and power users.
No, they're fundamentally opposed to each other. The entire point is that developers don't want their apps patched by just anyone, especially not malicious actors. A small minority of power users will inevitably get caught in the crossfire.
> Somehow we could for decades. Whether we'll still be able to in the future depends only on how much noise and friction we'll make about it now.
No, you really couldn't. Past lack of technical means doesn't mean anyone trusted your device nor that we had use-cases where this was important. (It was also usually solved with external hardware, physical dongles and whatnot.)
> The entire point is that developers don't want their apps patched
That's exactly what I'm trying to say. The entire point is not to secure the user, it's to secure the apps. It's working against the user's interest, as letting the user lie to apps is essential to the user's agency. The technical means used to achieve this could also be used to work for the user and ensure their security without compromising their agency, but that's not what happens on mainstream platforms.
> No, you really couldn't.
Yes, you could. Exactly how you describe, so it was used only where it mattered, and in other cases they just had no choice. Today the friction is so low that even McDonald's app will refuse to work on a device it considers untrustworthy. The user does not benefit from that at all.
> as letting the user lie to apps is essential to user's agency.
You do understand that in this case the user's agency has a very clear line?
Tampering with electronic identity software is not a fundamental right, the same way that tampering with your ID card or passport isn't.
> [...] and in other cases they just had no choice.
QED. Not that they wouldn't or didn't want to.
App attestation does not stop at legally binding identity software, and legally binding identity software can be serviced without app attestation. I accept not being able to tamper with my ID card, I may say it's "mine" but it ultimately belongs to the government; I don't accept not being able to tamper with my computers, they wouldn't belong to me anymore if that was the case.
> Not that they wouldn't or didn't want to.
Of course, but my devices' purpose isn't to grant wishes to corporations. In the ideal world they would still have no other choice. Unfortunately the more people use platforms that let them attest the execution environment the less leverage we have against them.
> I accept not being able to tamper with my ID card, I may say it's "mine" but it ultimately belongs to the government; I don't accept not being able to tamper with my computers, they wouldn't belong to me anymore if that was the case.
So where does a digital ID card fit in your model? It's the government's but on your computer.
I have a digital ID card on my desk right now. It does not need to be stored on the phone, which has all the means necessary to communicate with the card. In fact, if it were in a slightly different form factor, I could even put it physically into my phone, as it happens to have a built-in smartcard reader. That would still be a more reasonable solution than apps: the card wouldn't be strongly coupled to a complex device that can break or be compromised in various ways (some of which can't be solved with attestation), and it would maintain a clear separation between what's mine and what's the government's. What exactly would I, as a user, gain by muddling that distinction?
How large is this pre-infected phones problem? Is it large enough to justify sacrificing freedom?
We have had major discoveries of pre-installed malware every year for the past decade. Seems like a fairly big problem.
And how exactly did attestation help there?
Securing apps from the user does not secure the user from malware.
Now you can't bundle malware deep within the system "ROM" unless you want to break SafetyNet's attestation. It's a big change in that aspect.
Custom ROMs tell you that this is not true at all.
Custom ROMs no longer pass SafetyNet attestation, which apps such as banking ones (or streaming service ones) check.
I hope you mean Play Integrity, since there is no SafetyNet attestation anymore. And for that: https://github.com/osm0sis/PlayIntegrityFork
But there were similar workarounds for SafetyNet attestation while it still existed.
Product rebrandings are kinda irrelevant.
Your link nicely says "as a general rule you can't use values from recent devices due to them only being allowed with full hardware backed attestation". These attestation workarounds have been rendered increasingly obsolete.
You can bicker about the words all day long. Legitimacy, or perhaps better: authenticity, in this context, would be a bootloader or OS that doesn't allow tampering with the execution of an app.
Any bootloader or OS that doesn't allow the user to tamper with it or the other tools they're using on it is obviously illegitimate malware.
It's a funny comment, because actual malware, very much loves to tamper with the bootloader and OS.
Which was the motivation for cryptographically attesting the boot process and OS, and in part paved the way for app attestation.
There are alternatives though: The Android Hardware Attestation API enables attestation on custom ROMs, but the attestation verifier needs a list of hashes for all "acceptable" ROMs. GrapheneOS publishes these but there's nobody, to my knowledge, maintaining a community list.
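The verifier side of that alternative is conceptually just an allowlist lookup - here's a hedged sketch (the build names and hashes are hypothetical, and real verification of the hardware-backed signature over the reported hash is omitted):

```python
import hashlib

# Hypothetical allowlist of OS build hashes the verifier accepts; a
# community-maintained list could add entries for custom ROM releases.
acceptable_os_hashes = {
    hashlib.sha256(b"stock-os-2024.06").hexdigest(),
    hashlib.sha256(b"grapheneos-2024.06").hexdigest(),
}

def verify_attested_os(reported_hash: str, allowlist: set) -> bool:
    # A real verifier would first check that reported_hash arrived in a
    # statement signed by trusted attestation hardware; only the
    # allowlist lookup is modeled here.
    return reported_hash in allowlist

custom_rom = hashlib.sha256(b"grapheneos-2024.06").hexdigest()
unknown_rom = hashlib.sha256(b"unknown-backdoored-build").hexdigest()
assert verify_attested_os(custom_rom, acceptable_os_hashes)
assert not verify_attested_os(unknown_rom, acceptable_os_hashes)
```

The whole policy question lives in who curates `acceptable_os_hashes` - which is why a community-maintained list would matter.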
Nothing funny in it, I'm afraid. Socially accepted malware is still malware. Caffeine is a stimulant, alcohol is a drug, and a piece of software that works against the user is malware.
Cryptographic attestation is not a problem in itself, the problem is exactly what you already somewhat hinted at: it's who and how decides who to trust and who gets to make (or delegate) the choices. You can make a secure system that lets the user be in charge, but these systems we're discussing here don't (and that's by design; they're made to protect "apps", not users).
Sorry, but this is nonsense - most users, even the Linux-toting power users, don't have the time, ability or knowledge to verify the contents of their OS in a way that would catch issues prevented by attestation.
The problem with modified phones containing malware is very real, and unless you want a full-on Apple "you're not allowed to touch the OS" model, you need some kind of audited OS verification that you as a user, or security-sensitive software, can depend on.
No, what you're saying is nonsense. I can burn a key into the efuses of this phone to make it only boot things signed by me, make the whole boot path verified, the OS image immutable, etc., and all of this can provide me some value - but it's absolutely not in my interest to let applications be picky about what can or can't happen in the OS (even if they would accept my key being there rather than Google's, which they won't). The only thing that manages to do is prevent me from using the device the way I want or need to use it.
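A toy sketch of the "my key instead of Google's" part (everything here is made up for illustration; real verified boot uses asymmetric signatures, but an HMAC stands in so the example stays stdlib-only):

```python
import hmac
import hashlib

# Stand-in for a key the owner burned into efuses. A real scheme would
# burn a public key and verify asymmetric signatures instead.
owner_key = b"key-burned-into-efuses"

def sign_image(image: bytes, key: bytes) -> str:
    return hmac.new(key, image, hashlib.sha256).hexdigest()

def boot_accepts(image: bytes, signature: str, enrolled_key: bytes) -> bool:
    """The boot ROM only runs images signed with the enrolled key."""
    expected = sign_image(image, enrolled_key)
    return hmac.compare_digest(expected, signature)

my_bootloader = b"my-patched-bootloader"
sig = sign_image(my_bootloader, owner_key)
assert boot_accepts(my_bootloader, sig, owner_key)
assert not boot_accepts(b"someone-elses-image", sig, owner_key)
```

Note that nothing in this chain exposes a "whose key is enrolled" bit to applications - that exposure is the separate, added feature being argued about.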
I agree about the part where apps shouldn't be able to see whether the OS is trusted.
But to remove that incentive you first need to stop punishing app companies for compromised user OSes from legal perspective.
Are you willing to absolve Google, Apple and Deutsche Bank from responsibility of damage that happens on compromised user OSes?
The attested systems have vulnerabilities too, so how do they deal with that responsibility?
There's also a problem with unmodified phones containing malware, namely an operating system made by an advertising company, which is designed to collect as much information about you as possible.
And this malware is largely based on open source code (Linux) that was originally developed on open, documented hardware, where the firmware bootloader did nothing more than load the first 512 bytes of your hard disk to address 0x7c00 and transfer complete control to it.
Yes, there were viruses that exploited this openness, but imagine if Linus Torvalds would have needed a cryptographic certificate from IBM or Microsoft to be allowed to run his own code! This is basically the situation we have today, and if you don't see how dystopian this is, I don't know what more to say.
I will never understand why such an overwhelming majority of people seem to just accept this. When frigging barcodes were introduced, there were widespread conspiracy theories about them being the Mark of the Beast - ridiculous, of course, but look at where we are now: in some places you literally can't buy or sell without carrying around a device that is hostile to your interests. And soon it will be mandated by the state for everyone.
Google must be destroyed.
Yeah, randomly calling software that you don't like "malware" isn't making a strong case you think it does. Or helps in this discussion.
It's doing things that are against the interest of the user. But obviously, that's no longer an acceptable definition! According to our benevolent overlords, Android is definitely not malware, while yt-dlp is.