I don't know about executable signing, but in the embedded world SecureBoot is also used to serve the customer; id est provide guarantees to the customer that the firmware of the device they receive has not been tampered with at some point in the supply chain.
Computers should abide by their owners. Any computer not doing that is broken.
It's a simple solution to enable in law: force manufacturers to allow the owner of a computer to put any signing key in the BIOS.
We need this law. Once we have it, consumers can get the maximum benefit of secure boot without losing control.
Most embedded processors sadly don't have a BIOS, and the signing key is permanently burned into the processor via eFUSEs.
But that's how it already works.
If you install Windows first, Microsoft takes control (but it graciously allows Linux distros to use their key). If you install Linux first, you take control.
It's perfectly possible for you to maintain your own fully secure trust chain, including a TPM setup which, e.g., lets you keep a 4-digit PIN while keeping your system secure against brute-force attacks. You can't do that with the 1990s "encryption is all you need" style of system security.
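(A back-of-the-envelope sketch of why the short PIN works; both rates below are illustrative assumptions I'm making, not figures from any TPM spec:)

    # Why a 4-digit PIN can be safe behind a TPM but hopeless on its own.
    # Both rates are assumptions for illustration only.
    pin_space = 10 ** 4                # 4-digit PIN
    tpm_tries_per_hour = 10            # assumed: TPM dictionary-attack lockout throttling
    offline_tries_per_sec = 10 ** 9    # assumed: offline GPU attack on a bare passphrase

    print(pin_space / tpm_tries_per_hour, "hours to exhaust the PIN space via the TPM")
    print(pin_space / offline_tries_per_sec, "seconds to exhaust it offline")

And real TPMs typically lock out entirely after a few dozen failures, so in practice it's even worse for the attacker.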
> It's a simple solution to enable in law: force manufacturers to allow the owner of a computer to put any signing key in the BIOS.
...it's already allowed. The problem is that this isn't the default; it's opt-in, and you need quite a lot of knowledge to set it up.
I make the analogy with a company because, on that front, ownership seems to matter a lot in the Western world. It's as if a company had to accept unfaithful management, appointed by another company it's a customer of, as a condition of using that company's products. Worse, said provider is also a provider for every other business, and its products are not interoperable. How long before courts jump in to prevent this and give control back to the business owner?
This gets tricky. If I click on a link intending to view a picture of a cat, but instead it installs ransomware, is that abiding by its owner or not? It did what I told it to do, but not at all what I wanted.
If you connect your computer to the Internet, it can get hacked. If you leave it logged in unattended or don't use authentication, someone else can use it without your permission.
This isn't rocket science and it has nothing to do with artificially locking down a computer to serve the vendor instead of the owner.
Edit: I'd like to add that no amount of extra warranty from the vendors is going to cover the risk of a malware infection.
We don't need to get philosophical here. You (the admin) can require you (the user) to input a password to signal to you (the admin) to install the ransomware when a link is clicked. That way no control is lost.
What if the cat pictures are an app too? The computer can't require a password specifically for ransomware, just for software in general. The UI flow for cat-picture apps and for ransomware will be identical.
A computer that can run arbitrary programs can necessarily run malicious ones. Useful operations are often dangerous, and a completely safe computer isn't very useful.
Some sandboxing and a little friction to reduce mistakes is usually wise, but a general-purpose computer that can't be broken through sufficiently determined misuse by its owner is broken as designed.
And what if that customer wants to run their own firmware, e.g. after the manufacturer goes out of business? "Security" in this case conveniently prevents that.
you click the box to turn off secure boot
And how do you do that on some locked down embedded device? Say, a thermostat for instance.
...and then some essential software you need to run detects that and refuses to run. See where the problem is here?
It does no such thing if you enrol your own keys using the extremely well documented process to do that.
Where is this "extremely well documented process" to enroll new signing keys on an embedded device? I don't see one for any of these embedded processors with secure boot.
https://pip-assets.raspberrypi.com/categories/1214-rp2350/do...
https://documentation.espressif.com/esp32_technical_referenc...
https://docs.amd.com/v/u/en-US/ug1085-zynq-ultrascale-trm
It's fair to think of secure boot only in the PC context, but the model very much extends to phones. It seems ridiculous to me that to use a coupon for a Big Mac I have to compromise on what features my phone can run (either by turning on secure boot and limiting myself to the stock OS, or limiting myself to the features and pricing of the one or two phones that allow re-locking).
Tradeoffs. Which is more likely here?
1. A customer wants to run their own firmware, or
2. Someone malicious close to the customer, an angry ex, tampers with their device, and uses the lack of Secure Boot to modify the OS to hide all trace of a tracker's existence, or
3. A malicious piece of firmware uses the lack of Secure Boot to modify the boot partition so the malware loads before the OS, thereby permanently disabling the system's ability to repair itself from within
Apple uses #2 and #3 in their own arguments. If your Mac gets hacked, that's bad. If your iPhone gets hacked, that's your life, and your precise location, at all times.
1. P(someone wants to run their own firmware)
2. P(someone wants to run their own firmware) * P(this person is malicious) * P(this person implants this firmware on someone else’s computer)
3. The firmware doesn’t install itself
Yeah, I think 2 and 3 are vastly less likely than, and strictly lower than, 1.
As an embedded programmer in my former life, the number of customers that had the capability of running their own firmware, let alone the number that actually would, rapidly approaches zero. Like it or not, what customers bought was an appliance, not a general purpose computer.
(Even if, in some cases, it was just a custom-built SBC running BusyBox, customers still aren't going to go digging through a custom network stack.)
This guy thinks that if you rephrase an argument but put some symbols around it you’ve refuted it statistically.
P(robably not)
The argument is that P(customer wants to run their own firmware) cancels out, and 2 and 3 are just the raw probability of you being on the receiving end of an evil-maid attack. If you think this is a high probability, a locked bootloader won't save you.
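Spelling the cancellation out (my own notation, not the parent's; W = "wants to run custom firmware", M = "is malicious", I = "implants it on someone else's device"):

    P(1) = P(W)
    P(2) = P(W) * P(M | W) * P(I | W, M)
    P(2) / P(1) = P(M | W) * P(I | W, M) <= 1

Whatever P(W) is, it divides out, so under this reading 2 can never exceed 1.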
Very neat, but 1) is not really P(customer wants to run their own firmware), but P(customer wants to run their own firmware on their own device).
So the first terms in 1) and 2) are NOT the same, and it is quite conceivable that the probability of 2) is indeed higher than that of 1) (which your pseudo-statistical argument aimed, unsuccessfully, to refute).
I encourage you to re-evaluate this. How many devices do you own (or have you owned) which have a microcontroller? (This includes all your appliances, your clocks, and many other things you own that use electricity.) How many of them have you reflashed with custom firmware?
Imagine any of your friends, family, or colleagues. (Including some non-programmers/hackers/embedded-engineers) What would their answers be?
As if the monetary gain of 2 and 3 never entered the picture. Malicious actors want 2 and 3 to make money off you! No one can make reasonable amounts of money off 1.
Clearly you've never met my exes (or a past employer). Not even being sarcastic this time.
You expect that stuff to happen with three-letter agencies.
Sorry, I have no idea what you are trying to say.
On Android, according to the Coalition Against Stalkerware, over 1 million people every year fall victim to spyware deliberately placed on an unlocked device by a malicious person close to them.
#2 is WAY more likely than #1. And that's on Android which still has some protections even with a sideloaded APK (deeply nested, but still detectable if you look at the right settings panels).
As for #3; the point is that it's a virus. You start with a webkit bug, you get into kernel from there (sometimes happens); but this time, instead of a software update fixing it, your device is owned forever. Literally cannot be trusted again without a full DFU wipe.
And where are the stats for people running their own firmware who are not running stalkerware, for comparison? You don't need firmware access to install malware on Android, so how many of those stalkerware victims would actually have been saved by a locked bootloader?
The entirety of GrapheneOS is about 200K downloads per update. Malicious use is therefore roughly 5:1.
> You don't need firmware access to install malware on Android, so how many of those stalkerware victims would actually have been saved by a locked bootloader?
With a locked bootloader, the underlying OS is intact, meaning that the privileges of the spyware (if you look in the right settings panel) can easily be detected, revoked, and removed. If the OS could be tampered with, you bet your wallet the spyware would immediately patch the settings system, and the OS as a whole, to hide all traces.
Assuming we accept your premise that the most popular custom firmware for Android is stalkerware (I don't): this is, of course, firmware-level malware, which acts as a rootkit and is fully undetectable. How did the Coalition Against Stalkerware, pray tell, manage to detect such an undetectable firmware-level rootkit on over 1 million Android devices?
This assumes a high level of technical skill and effort on the part of the stalkerware author, and ignores the unlocked bootloader scare screen most devices display.
If someone brought me a device they suspected was compromised, and it had an unlocked bootloader, and they didn't know what an unlocked bootloader, custom ROM, or root was, I'd assume a high probability that the OS is malicious.
LineageOS alone has around 4 million active users. So malicious use is at most 1:4, not 5:1.
#2 and #3 are fearmongering arguments and total horseshit, excuse the strong language.
Should either of those things happen, the bootloader puts up a big, bright, flashing yellow warning screen saying "Someone hacked your device!"
I use a Pixel device and run GrapheneOS, the bootloader always pauses for ~5 seconds to warn me that the OS is not official.
Yes. They're making the point that your flashing yellow warning is a good thing, and that it's helpful to the customer that a mechanism is in place to prevent it from being disabled by an attacker.
No, they've presented a nonsense argument which Apple uses to ban all unofficial software and firmware as if it had some merit.
Then that customer shouldn't buy a device that doesn't allow for their use case. Exercise some personal agency. Sheesh.
What happens when there are no more devices that allow for that use case? This is already pretty much the case for phones, it's only a matter of time until Microsoft catches up.
There are still phones not obeying the megacorps. Sent from my Librem 5.
Does your Librem 5 run banking apps, though?
I don't know about executable signing, but in the embedded world SecureBoot is also used to serve the PRODUCER; id est provide guarantees to the PRODUCER that the firmware of the device they SELL has not been tampered with at some point in the PROFIT chain.
> id est provide guarantees to the customer that the firmware of the device they receive has not been tampered with
The firmware of the device being a binary blob for the most part... Not like I trust it to begin with.
Whereas my open-source Linux distribution requires me to disable SecureBoot.
What a world.
You can set up custom SecureBoot keys on your firmware and configure Linux to boot using it.
There's also plenty of folks combining this with TPM and boot measurements.
The ugly part of SecureBoot is that all hardware comes with MS's keys, and lots of software assume that you'll want MS in charge of your hardware security, but SecureBoot _can_ be used to serve the user.
Obviously there's hardware that's the exception to this, and I totally share your dislike of it.
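For the curious, a minimal sketch of that flow using the sbctl tool (assuming it's installed, you're root, and the firmware is in setup mode; the kernel path is distro-specific, so treat this as an outline, not a recipe):

    #!/usr/bin/env python3
    # Sketch: enrolling custom Secure Boot keys with sbctl.
    # Assumes root, firmware in setup mode; kernel path varies by distro.
    import subprocess

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run("sbctl", "status")                      # confirm setup mode / current state
    run("sbctl", "create-keys")                 # generate PK/KEK/db keypairs
    run("sbctl", "enroll-keys", "--microsoft")  # enroll ours, keep MS keys for option ROMs
    run("sbctl", "sign", "-s", "/boot/vmlinuz-linux")  # sign the kernel; path varies

The --microsoft flag is the pragmatic compromise: your keys own the platform, but MS-signed option ROMs and a dual-boot Windows still verify.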
> You can set up custom SecureBoot keys on your firmware and configure Linux to boot using it.
Right, but as engineers, we should resist the temptation to equate _possible_ with _practical_.
The mere fact that even the most business-oriented Linux distributions have issues playing along with SecureBoot is worrying. Essentially, SB has become a Windows-only technology.
The promise of what SB could be useful for is even muddier. I would argue that the chances of being a victim of firmware tampering are pretty slim compared to other attack vectors, yet somehow we all ended up with SB, and its most significant achievement is training people that disabling it is totally fine.
+1
An unsigned hash is plenty to guard against tampering. The supply chain and whatever secret sauce went into that firmware come down to trust: trust that the blob is well intentioned, trust that you downloaded it from the right URL and checked the right SHA, trust that the organization running the URL is sanctioned to do so by Microsoft...
Once all of that trust for every piece of software is concentrated in one organization (Microsoft, Apple, or Google), it has become totally meaningless.
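(The mechanical check itself is a few lines; the URL and digest below are hypothetical placeholders. It's everything around the check that is trust:)

    # Verify a downloaded firmware blob against a published SHA-256.
    # URL and expected digest are hypothetical placeholders.
    import hashlib
    import urllib.request

    URL = "https://example.com/firmware.bin"
    EXPECTED_SHA256 = "0" * 64   # the digest the vendor publishes

    blob = urllib.request.urlopen(URL).read()
    digest = hashlib.sha256(blob).hexdigest()
    print("OK" if digest == EXPECTED_SHA256 else "MISMATCH", digest)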
It's to serve the regulators. The Radio Equipment Directive essentially requires the use of secure boot for new devices.
I happen to like knowing that my mobile device did not have a ring 0 backdoor installed before it left the factory in Asia. SecureBoot gives me that confidence.
No it doesn't? The factory programs in the secure boot public keys.
The public keys are provided by the developer. Google, or Apple, for example. It's how they know that nothing was tampered with before it left the factory.
"Nothing has been tampered with" doesn't mean there's no factory backdoor; it only means "same as the factory shipped it", nothing more.
Apple or Google know what the cryptographic signature of the boot should be. They provide the keys. It's how they know that "factory reset" does not include covert code installed by the factory. That's what we're talking about.
well, unless the govt tells MS to tamper with it