If someone puts a camera and a microphone on devices that don't need them, then a) it pushes up the price of goods for everyone with features that mainly serve corporate, b) it ignores that there are bad actors out there even if you think corporate is a good one, and c) there's no reason to think corporate is a good actor in the first place.

Shipping an AI model with a browser is starting to look like sticking cameras on ALL glasses, not just smart glasses, regardless of whether anyone wants that. Saying this is fine and not unusual is clearly motivated reasoning, and it just normalizes the surveillance state. It's very obvious how this ends: browser-based models will eventually be using your computer at the edge to save corporate money in the cloud, while they do ever more expensive and invasive stuff to profile you.

Shipping the model with the browser is exactly the opposite of what you are claiming.

The alternative is sending the data to Google.

Back to the assumptions.

If the onboard LLM means no data leaves your machine, and you get your own little service wholly subservient to you like a good little program, that's nice!

If the onboard LLM means better data filtering, possibly even exploration of the local system, sending distilled information to Google while trimming their datacentre bills for running LLM services, that seems a little underhanded to just bake into things without notification.

Pick your assumption, you get your outcome. What are your assumptions?

Why assume? It should be observable. You can check the code and the data traffic to see how it is used.
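For the traffic side, one way to observe this is to route the browser through an intercepting proxy and read what it actually sends. A minimal sketch, assuming mitmproxy is installed and a Chromium-based browser; the script only prints the two commands to run (in separate terminals), since the capture itself is interactive:

```shell
#!/bin/sh
# Sketch: watch a browser's outbound requests through an intercepting proxy.
# Assumes mitmproxy (mitmdump) is installed; "chromium" is a placeholder
# for whatever browser binary you are inspecting.

PROXY_PORT=8080

# 1) Start the intercepting proxy (run this in one terminal):
echo "mitmdump -p ${PROXY_PORT}"

# 2) Launch the browser pointed at the proxy (in another terminal):
echo "chromium --proxy-server=http://127.0.0.1:${PROXY_PORT}"

# Every request the browser makes now shows up in the mitmdump log.
# A genuinely local-only model should generate no inference-related
# traffic while you use its features. Note: to read HTTPS bodies you
# would also need to trust mitmproxy's CA certificate in the browser.
```

This only shows what leaves the machine today, of course; it says nothing about what an update ships tomorrow.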

You can't observe the future, so learn from the past, or use common sense. You don't react to a stranger or a mysterious camera in your household by saying, "wow, ok, let's see if anything bad happens."