> I agree that mandatory developer registration feels too heavy handed, but I think the community needs a better response to this problem than "nuh uh, everything's fine as it is."
Why would the community give a different response? Everything is fine as it is. Life is not safe, nor can it be made safe without taking away freedom. That is a fundamental truth of the world. At some point you need to treat people as adults, which includes letting them make very bad decisions if they insist on doing so.
Someone being gullible and willing to do things that a scammer tells them to do over the phone is not an "attack vector". It is people making a bad decision with their freedom. And that is not sufficient reason to disallow installing applications on the devices they own, any more than it would be acceptable for a bank to tell an alcoholic "we aren't going to let you withdraw your money because we know you're just spending it at the liquor store".
You say that until it happens to your mother/father/bf/gf/grandparent/…
Then we will see how you will react.
Cars worked fine without seatbelts too. Just because the world goes on doesn't mean we can't do better.
Taking a step back though, I suspect there are cultural differences in approach here. Growing up in Europe, the idea of a regulation to make everyone safer is perfectly acceptable to me, whereas I get the impression that many folks who grew up in the US would feel differently. That's fine! But we also have to recognise these differences and recognise that the platforms in question here are global platforms with global impact and reach.
OTOH the controlling way modern software behaves is a US artifact, so the differences are not necessarily as clear-cut as this.
I grew up and live in Europe. I support the general idea of "regulation to make everyone safer" being an acceptable choice. At the same time, I vehemently oppose third-party interests reaching into my computing device and dictating what I can vs. cannot do with it.
But as you say, "global platforms with global impact and reach" - and so I can't set up my phone to conditionally read out text and voice messages aloud, because somewhere on the other side of the world, someone might get scammed into installing malware, therefore let's lock everything down and add remote attestation on top.
Unfortunately, the problem is political, not technological, and this here is but one facet of it. Ultimately, what SaaS does is give away all leverage: as users, it doesn't matter if we fully own the endpoints, or have a user-friendly vendor: any SaaS can ultimately decide not to serve a client that doesn't give the service a user-proof beachhead.
the problem is that in developing countries smartphones are a massive technology jump for people who lack the education to even have a clue what's going on. treating people as adults does not work if they don't have the education needed for that.
these people aren't gullible. they are ignorant (in the uneducated sense). they are not making bad decisions. they are not even aware that there is a decision to be made.
and worst of all, this problem affects the majority of those populations. if more than half of our population was alcoholic then we absolutely would restrict the access to alcohol through whatever means possible.
it's a pandemic. and we all know what restrictions that required.
To add to that, I think it's important to point out that the problem of people not understanding how to safely use their devices is in big part caused by technology companies racing to get the widest adoption everywhere, both in terms of location and in terms of industries. I'm not against "intuitive UX design" in general, but at its extreme, it just fuels incompetence. We shouldn't now let them pick the most convenient option, the option that just happens to also increase their power over the users, as a way to "fix" the problem.
What if we asked users if they want extra protection? I think that would be nice.
This is the status quo. APK installation is disabled by default, and there is a warning when you go to enable it.
The point is "a warning" is not enough to communicate to people the gravity of what they are doing.
It is not enough to write "be careful" on a bag you get from a pharmacy... certain medications require you to both have a prescription, and also to have a conversation with a pharmacist because of how dangerous the decisions the consumer makes can be.
Normal human beings can be very dumb. It's entirely reasonable to expect society to try to protect them at some level.
OK so make the warning more annoying. Have a security quiz. Cooldown period of one day to enable. Require unlock via adb connected to laptop.
There are alternative solutions if the true goal is maintaining user freedom while protecting dumb users. But that is not the true goal of the upcoming changes.
> Require unlock via adb connected to laptop.
Fine, just:
- Don't reset it every 5 days / 5 hours / 5 dBm blip in Wi-Fi strength, because this pretty much defeats end-user automation, whether persistent or event-driven. This is the current situation with "Wireless Debugging", otherwise a cool trick for "rootless root", if only it didn't require being connected to Wi-Fi (and not just any Wi-Fi, but the same AP, breaking when the device roams in multi-AP networks).
- Don't announce the fact that this is on to everyone. Many commercial vendors, including those who shouldn't and those who have no business caring, are very interested in knowing whether your device is running with debugging features enabled, and if so, deny service.
Unfortunately, in a SaaS world it's the service providers that have all the leverage - if they don't like your device, they can always refuse service. Increasingly many do.
You can add 5 layers of "are you sure you want to do this unsafe thing" and it just adds 5 easy steps to the scam where they say "agree to the annoying popup"
You could even make this an installation-time option. If you want to enable the switch afterwards, you have to do a factory reset. Then, the attackers convincing the victims would get nothing.
Or make sideloading available only after 24 hours since enabling it. I would enable it on my new devices and wait 24 hours before installing F-Droid and other apps. Not a problem. Scammers might wait one day too but it decreases the chances of success because friends and family members can interfere.
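The cooldown proposal is simple enough to sketch. Here is a minimal, purely illustrative Python version of the gate; the function name, constant, and timestamp plumbing are all invented for the example and are not any real Android API:

```python
import time
from typing import Optional

# Hypothetical 24-hour waiting period between flipping the switch
# and sideloading actually becoming usable.
SIDELOAD_COOLDOWN_SECONDS = 24 * 60 * 60


def sideloading_allowed(enabled_at: Optional[float],
                        now: Optional[float] = None) -> bool:
    """Return True only once the cooldown since enabling has elapsed.

    enabled_at: Unix timestamp when the user enabled sideloading,
                or None if the switch was never turned on.
    now:        current time (defaults to the real clock).
    """
    if enabled_at is None:
        return False  # switch never enabled
    if now is None:
        now = time.time()
    return now - enabled_at >= SIDELOAD_COOLDOWN_SECONDS
```

The point of the delay is social, not cryptographic: a scammer keeping a victim on the phone for an hour is plausible, but keeping them primed for a full day gives friends and family a window to interfere.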
But I'm afraid that this is security theater and the true goal is to protect revenues by making it hard or impossible to install apps that impact Alphabet's bottom line (e.g. third-party YouTube clients).
> But I'm afraid that this is security theater and the true goal is to protect revenues by making it hard or impossible to install apps that impact Alphabet's bottom line (e.g. third-party YouTube clients).
It's not just them. Every other SaaS, from banks to media providers to E2EE[0] chat clients to random apps whose makers feel insecure, or are obsessed with security [theater] best practices, just salivate at the thought of being able to check if you're a deviant running with root or debugging privileges, all because ${complex web of excuses that often sound plausible if you don't look too closely}. There's a huge demand for device attestation, remote or otherwise.
--
[0] - End-to-end Enshittified.
And now if I want to send a .apk to someone, they have to wipe their entire phone to install it? No thanks.
That's... brilliant. Enough work that it can't be talked through over the phone to someone non-technical. A sane default for people who don't know about security. And a simple enough procedure for the technically minded and brave.
It solves the 'smartest bear / dumbest human' overlap design concern in this situation.
then make the unlock cost money
relatively easy for devs, but hard to scale for scammers
It's either that or as suggested, hard require developer validation for specific API permissions.
If those bad decisions have a lot of higher order effects and they turn out to be very costly for society, then limiting freedom seems worth it.
And it seems Google thinks society is beginning to unravel in SEA due to scammers. Trust breaks down, people stop using phones to do important things, GDP can shrink, banks go back to cheques, trees will be cut down!!
It's bad to let people go and catch the zombie virus and then come back and spread it, right?
...
I don't like it, but the obvious decision is to set up a parallel authority that can issue certificates to developers (for side loading), so we don't have to trust Google. Let the developer community manage this. And if we can't then Google can revoke the intermediary CA. And of course Google and other manufacturers could sell development devices that are unlocked, etc.
The reality in South East Asia doesn't support that. You're assuming that the potential victims are able to use an Android alternative, or that they are willing and able to educate themselves about scams. In these countries, neither is the case in practice. Daily lives depend a lot on smartphones, which play a big role in cashless financial transactions. Network effects matter a lot here, and Android devices are the only category that is both widely available and affordable.
Education is also not that effective. Spreading warnings about scams is hard and warnings don't reach many people for a whole laundry list of reasons.
The status quo is decidedly not fine. Society must act to protect those that can't protect themselves. The only remaining question is the how.
Google has an approach that would work, but at a high cost. Is there an alternative change that has the same effects on scammers, but with fewer issues for other scenarios?
The status quo may not be perfect but it is the best we can do. We try to educate people about scams. We give them warnings that what they are doing can be dangerous if misused. If they choose to ignore those things and proceed anyway, the only further step society could take is to take away the person's freedom to choose. And that is an unacceptable solution.
Society takes away individuals' freedom to choose all the time. You can't choose not to pay your taxes. You can't choose to board a passenger plane without passing a security check. You can't just get a loan without any guarantees to the bank, etc.
Education isn't really working at this global scale. It doesn't reach people the way you seem to believe it does. Many, if not most, people are generally disinterested in learning new things, and this gets amplified when it involves technology.
> The status quo may not be perfect but it is the best we can do.
Nope. We could, for example, ask developers to register with their legal identity to release apps.
That would be worse than the status quo.
the open source community should ask for their own install key and that's it
Play store can be fast and verification based and the F/OSS stores can be slower, reputation and review based.
...
But fundamentally the easiest thing is to ask people to pay to unlock the phone's security barriers, this makes it harder and costlier for scammers.
This is a terrible response as a Software Developer by the way. You can just use this to ignore any security concern.
It signals that you don't care much about security, and that you don't care about non-technical users, and don't even have the capacity to see how they view a system.
Sure, you can analyze domain names effectively, you can distinguish between an organic post and an ad, you know the difference between Read and Write permissions to system files, etc...
But can you put yourself in the shoes of a user that doesn't? If not, you are rightfully not in a position to be a steward of such users, and Google is.
> Life is not safe, nor can it be made safe without taking away freedom.
So... no food and safety regulations, because life is not safe, and people should have the freedom to poison food with cheaper, lethal ingredients because their freedom matters more?
You're right that things can't be made more safe without taking away the freedom to harm people. Which is why even the most freedom-loving countries on earth strike a balance. They actually have tons and tons of safety regulations that save tons and tons of lives, even if from your point of view that means not "treating people as adults". You have to wear a seatbelt, even if you feel like you're not being treated like an adult. Because it's also not just your own life you're putting at risk, but your passengers' as well.
You're taking the most extreme libertarian stance possible. Thank goodness that's an extremely minority view, and that the vast, vast majority of voters do actually think safety is important.
Thank goodness there are FOSS options, even for mobile phones, and none of us are required to accept proprietary junk.
If they make FOSS illegal, guess I’ll be a criminal. Come and take it.
Your post is addressing a strawman, not what I said. But to answer the words you so ungraciously put in my mouth:
> So... no food and safety regulations, because life is not safe, and people should have the freedom to poison food with cheaper, lethal ingredients because their freedom matters more?
This is harm to others and is very obviously something we should enforce. There are unreasonable laws about food (banning the sale of raw milk cheese for example, which most of the world enjoys with perfect safety), but by and large they are unobjectionable.
> You're right that things can't be made more safe without taking away the freedom to harm people. Which is why even the most freedom-loving countries on earth strike a balance.
I never said I was opposed to striking a balance. Of course we can strike a balance. Indeed we already have when it comes to installing apps on Android. But these measures are being advanced as if safety were the only consideration, which it isn't.
> You're taking the most extreme libertarian stance possible.
No, that is what you have projected onto me. That's not actually what my stance is.
Your analogy is terrible because it doesn't do a proper accounting of "harm" and "risk."
Food and seatbelts, that's literal health and life-and-death; very immediate and visible.
"Cybersecurity" rarely is; and even when it is, the problem is that the centralized established authorities (like google) aren't at all provably good at this.
No, this is a terrible take. People’s entire financial future is at stake, including in the third world.
> At some point you need to treat people as adults, which includes letting them make very bad decisions if they insist on doing so.
That's right, it's your decision to use Android. If you choose to do so, that's on you.
It's not like there's much of an alternative, but that's irrelevant anyway. Android is becoming more like an iPhone, and as long as the OS is able and willing to reliably report to anyone asking just how tightly it is locked down, we have zero choice in the matter, because increasingly many important apps (like bank and government apps) plainly refuse to work if the device is locked down any less than it could be.
You're right, all Android users who are upset about this change are free to switch to iOS.
Right like someone who can only afford a $100 phone can buy the cheapest iPhone which is 5x more expensive.
This is about like the geeks who hate the idea of ad supported services and think that everyone should just pay for every service they use.
FWIW: I do exclusively buy Apple devices, pay for streaming services ad free tier, the Stratechery podcast bundle, ATP and the Downstream podcasts and Slate. I also pay for ChatGPT and refuse to use any ad supported app or game.
I think that OP's point was that the alternative is even more locked down. There is no option for people who don't want to be nannied.
If only there were a choice of a non-walled garden. It has been taken away; how can you bank without one of the two?
> At some point you need to treat people as adults, which includes letting them make very bad decisions if they insist on doing so.
The world does not consist of all rational actors, and this opens the door to all kinds of exploitation. The attacks today are very sophisticated, and I don't trust my 80-yr old dad to be able to detect them, nor many of my non-tech-savvy friends.
> any more than it would be acceptable for a bank to tell an alcoholic "we aren't going to let you withdraw your money because we know you're just spending it at the liquor store".
This is a false equivalence.
It's not a false equivalence at all. Both situations are taking away someone's control of something that they own, borne from a paternalistic desire to protect that person from themselves. If one is acceptable, the other should be. Conversely if one is unacceptable, the other should be unacceptable as well. Either paternalistic refusal to let people do as they wish is ok, or it isn't.
Maybe not, but I think that overextending any idea like that in the direction opposite to whatever point you are trying to make devolves into a "slippery slope" argument. For instance, is your point that all security on phones that impedes the freedom of the user (for instance, HTTPS, a forced password on initial startup, not allowing apps to access certain parts of the phone without user permission, verifying boot image signatures) should be removed as well?
No, that's not my point at all. Measures such as that are a tool which is in the hands of the user. There is a default restriction which is good enough for most cases, but the user has the ability to open things up further if he needs. What Google is proposing takes control out of the user's hands and makes Google the sole arbiter of what is and is not allowed on the device.
None of the measures I mentioned are changeable by the user, except possibly sideloading an HTTPS certificate. That's the only way any of those measures even work; if they weren't set as invariants by the OS, they would be bypassable.
>There is a default restriction which is good enough for most cases, but the user has the ability to open things up further if he needs.
But this is what the other guy's point is. You are defining "good enough for most cases" in a way that he is not, then making the argument that what he says is equivalent to not allowing an alcoholic to buy beer. Why can you set what level is an acceptable amount of restriction, but he can't?
But it's not a slippery slope, because it's not taking it to the next level. It's the same level, just a different thing.
The alcoholic knows the bad outcomes and chooses to ignore them. The hapless Android user does not understand the negative consequences of sideloading. I think this makes for a substantial difference between those two.
Protecting from scams isn't protection from the victim themselves. That should be obvious from the fact that very intelligent and technologically literate people too can fall for phishing attacks. Tell me for example, how many people in your life know how a bank would ACTUALLY contact you about a suspected hijacking and what the process should look like? And how about any of the dozens of other cover stories used? Not to mention the situations where the scammers can use literally the same method of first contact as the real thing (eg. spoofed). ...And the fact that for example email clients do their best to help them by obscuring the email address and only showing the display name, because that's obviously a good idea.
> Protecting from scams isn't protection from the victim themselves.
That is where we differ. It is, ultimately, the victim of a scam who makes the choice of "yes, this person is trustworthy and I will do what they say". The only way to prevent that is to block the user from having the power to make that decision, which is to say protecting them from themselves.
But the proposal here, requiring developers to register their identities, doesn't actually impact consumers at all. They still have the ability to make the decision about whether or not to trust someone.
None of these things requires "locking down phones." Every single thing you've mentioned can be done in a smarter way that doesn't involve "individuals aren't allowed to modify the devices they purchase."
You can't make a statement like that and provide no examples. What are some of your ideas for doing that?
There is some world where somebody scammed through sideloading loses their life savings, and every country is politically fine with the customer, not the bank, taking the losses.
But for regular people, that is not really the world they want. If the bank app wrongly shows they’re paying a legitimate payee, such as the bank, themselves or the tax authority, people politically want the bank to reimburse.
Then the question becomes not if the user trusts the phone’s software, but if the bank trusts the software on the user’s phone. Should the bank not be able to trust the environment that can approve transfers, then the bank would be in the right to no longer offer such transfers.
If the actual bank app does that, or is even easy to fool into doing that, then the bank should be responsible. That's the world "regular people" want and it's the world as it should be.
If random malware the user chose to install does that, then that is not the bank's fault. The bank is no more involved than anybody else. And no, I don't think "regular people" want to make that the bank's fault.
The legal infrastructure for banking and securities ownership has long had defaults for liability assignment.
For securities, if I own stock outright, the company has to indemnify if they do a transfer for somebody else or if I lack legal capacity. So transfer agents require Medallion Signature Guarantees from a bank or broker. MSGs thereby require a lengthy banking relationship and probably showing up in person.
For broker to broker transfers, there is ACATS. The receiving broker is in fact liable in a strict, no-fault way.
As far as I know, these liabilities are never waived. Basically for the sizable transfers, there is relatively little faith in the user’s computers (including phones). To the extent there is faith, it has total liability on some capitalized party for fraud.
These defaults are probably unknown for most people, even those with large amounts of securities. The system is expected to work since it has been set up this way.
Clearly a large number of programmers have a bent to go the complete opposite direction from MSGs, where everything is private keys or caveat emptor no matter the technical sophistication of the customer. I, well, disagree with that sentiment. The regime where it’s possible for no capitalized entity to be liable for wrongful transfers (defined as when the customer believes they are transferring to a different human-readable payee than actually receiving funds) should not be the default.
> Basically for the sizable transfers, there is relatively little faith in the user’s computers (including phones). To the extent there is faith, it has total liability on some capitalized party for fraud.
But that is expensive, so my impression is that for non-sizeable transfers, and beyond banking, for basically anything dealing with lots of regular people doing regular-people-sized operations, the default in the industry is to try and outsource as much liability as possible onto end-users. So instead of treating users' computers as untrusted and making the system secure on the back end, the trend is to treat them as trusted, and then deal with the increased risk by a) legal means that make end-users liable in practice (keeping users uninformed about their rights helps), and b) technical means that make end-user devices less untrusted.
b) is how we end up with developer registries and remote attestation. And the sad thing is, it scales well - if device and OS vendors cooperate (like they do today), they can enable "endpoint security" for everyone who seeks to externalize liability.
Keeeep going.
Are banks POWERFUL? Do they have lots of money and/or connections to those who do? Do they have a vested interest in getting transactions right?
Absolutely!
Now, with all that money and power -- they -- whoever THEY are, need to come up with smart ways to verify transactions that don't involve me giving them all the keys to all my devices.
We have protections like this elsewhere - even when they have some "ownership." The bank kinda owns my house, but they still can't come in whenever they want.
Why do banks go through all the know-your-customer (KYC) process if not to identify the beneficial owner of every account? If they receive a transfer via fraud, then they either get it clawed back, have to pay it back, and/or get identified to law enforcement. If the last bank in the chain doesn't want to play by the rules, then other banks shouldn't transfer into them, or that bank itself should be held liable.
This is more or less how people expect things to work today ....
In the case of a money mule, knowing or willfully blind, in the chain or at the end of it, the intermediary or final banks may not be at fault. The bank could have followed KYC procedures, in that somebody with that name actually existed and controlled the account.
The money mule themselves is almost certainly insolvent to pay the damages. The money mule can also convert the funds (either to a different fiat currency or to crypto), putting the ultimate link completely out of reach of the originating country.
If intermediary banks are deputized and become liable in a no-fault sense, then legitimate transfers out become very difficult. How does a bank prove a negative for where the funds come from? De-banking has already been a problem for a process-based AML regime.
I'm a "regular" person, as are all the signatories, and you don't speak for us.