I don’t plan on using the feature, and I don’t plan on using Windows much longer in the first place, but I find that going beyond the ragebait headlines to the actual offering, its privacy policy, and its security documentation makes it look a lot more reasonable.
Microsoft is very explicit that the data stays on device and goes to great lengths to explain exactly how it keeps that data private. There are also a lot of sensible exceptions (e.g., it’s disabled for incognito web browsing sessions) and a high degree of control (users can disable it per app).
On top of all this it’s 100% optional and all of Microsoft’s AI features have global on/off switches.
Until those switches end up in the crosshairs of someone’s KPIs, and then magically they get flipped in whichever direction makes the engagement line go up. Unfortunately, we live in a world where all of these companies have done this exact thing, over and over again. These headlines aren’t ragebait; they’re prescient.
Well, now you’re just doing the same exact thing I described. You’re basically making up hypothetical things that could happen in the future.
I’ll agree with you the moment Microsoft does that. But they haven’t done it. And again, I’m not their champion, I’m actively migrating away from Microsoft products. I just don’t think this type of philosophy is helpful. It’s basically cynicism for cynicism’s sake.
Here are the settlements from Apple and Google regarding “how phones totally aren’t listening to you and selling the data to advertisers”:
https://www.cbsnews.com/news/google-voice-assistant-lawsuit-...
https://www.cbsnews.com/news/lopez-voice-assistant-payout-se...
1. Not related to the issue at hand; a completely different system implemented in a completely different way.
2. Settlements are just that: settlements. You can be sued frivolously and still decide to settle because it’s cheaper/less risky.
1. Any kernel-level vulnerability nullifies any formal protections Microsoft guarantees as the first party:
https://www.bcs.org/articles-opinion-and-research/crowdstrik...
2. Settlements also avoid discovery, because the actual impact is likely way worse than, *checks notes*, less than one day of profits per company, respectively.
1. More irrelevant stuff. A kernel-level vulnerability can nullify all sorts of good-faith security design.
2. I could sue you today for, well, pretty much anything. I don’t have a good case, but I can file that lawsuit right now. Would you rather take my settlement offer of $50, or pay a lawyer to go to trial and potentially spend the next months or years of your life in court? You can’t make a blanket claim that every company that decides to settle has something to hide, any more than you can claim that everyone who exercises their 4th Amendment rights has something to hide. I’ll also point out that companies that make lots of money are huge lawsuit targets; patent trolls sue large corporations all the time, for example.
Don’t forget we’re talking about a fully optional feature that isn’t even turned on by default. I’m not telling you to love Windows Recall; turn it off or switch to Linux if you don’t love it. My only point is that it’s gotten a lot of news and social media coverage that is factually untrue, designed to get clicks and reinforce existing feelings.
1. Most people don’t realize kernel hacks undermine their entire mental model of security. Tbh, only after CrowdStrike did I learn it was possible for a security vendor to mass blue-screen a population.
2. I’m very much already on Linux; most of my threat model is “if it’s technically possible, it’s probable,” and I adjust my technology choices accordingly.
I’m just saying a max cap of $60 for Apple’s settlement sets a precedent for future mass-surveillance wrist slaps, and maybe it would have been worth the discovery process to uncover the actual global impact.