I'm assuming it isn't sensitive for your purposes, but note that Google will train on these interactions unless you pay.

I think it'll be hard to find an LLM that actually respects your privacy, regardless of whether you pay. Even the "privacy-focused" enterprise Copilot from Microsoft, with all its promises of respecting your data, still isn't deemed safe enough under legislation to be used in parts of the European energy sector. The way we view LLMs on any subscription is similar to how I imagine companies in the USA view DeepSeek: don't put anything into them you can't afford to share with the world. Of course, with agents, you've probably given them access to everything on your disk anyway.

Though to be fair, it's kind of silly how much effort we go through to protect our mostly open-source software from AI agents while, at the same time, half our OT has built-in hardware backdoors.

I agree, Google is definitely the champion of respecting your privacy. They'll definitely not train their models on your data if you pay them. I mean, you should definitely just film yourself and give them everything: access to your files, phone records, even bank accounts. Just make sure to pay them that measly $200 and they absolutely will not share that data with anybody.

You're thinking of Facebook. A lot of companies run on Gmail and Google Docs (easy to verify with `dig MX [bigco].com`), and they would not if Google shared that data with anybody.
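
If you want to script that check, here's a minimal sketch in Python (assuming the third-party dnspython package; `bigco.example` is a placeholder domain):

```python
# Minimal sketch of the `dig MX` check, assuming the third-party
# dnspython package (pip install dnspython).
import dns.resolver

def mail_hosts(domain: str) -> list[str]:
    # Return MX exchange hostnames, lowest preference (highest priority) first.
    answers = dns.resolver.resolve(domain, "MX")
    return [str(r.exchange).rstrip(".") for r in sorted(answers, key=lambda r: r.preference)]

# Placeholder domain; a company on Google Workspace will typically show
# exchangers like smtp.google.com or aspmx.l.google.com.
print(mail_hosts("bigco.example"))
```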

It’s not really in either Meta’s or Google’s interests to share that data. What they do instead is build super-detailed profiles of you and what you’re likely to click on, so they can charge more money for ad impressions.

LLMs add a new threat model: if a model is trained on your data, it might very well leak some of that information in some future chat.

Meta and Alphabet might not want that, but it's impossible to avoid completely with current architectures.
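
To make the risk concrete, here's a rough sketch of a memorization probe (assuming the Hugging Face `transformers` package and GPT-2 as a stand-in model; the "secret" string is made up): feed the model a prefix of a string suspected to be in its training data and see whether greedy decoding reproduces the rest verbatim.

```python
# Rough sketch of a memorization probe, assuming the Hugging Face
# `transformers` package and GPT-2 as a stand-in model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical string that may or may not have been in the training data.
secret = "The admin password for acme-prod is hunter2"
prefix, suffix = secret[:28], secret[28:]

inputs = tok(prefix, return_tensors="pt")
# Greedy decoding (no sampling): the model's single most likely continuation.
out = model.generate(**inputs, max_new_tokens=16, do_sample=False)
completion = tok.decode(out[0][inputs["input_ids"].shape[1]:])

# If the continuation reproduces the suffix verbatim, the string was
# memorized and anyone who guesses the prefix can extract it.
print("memorized" if completion.startswith(suffix) else "not reproduced")
```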

Honestly, there are plenty of more profitable things to do with that kind of information. The idea that ad impressions are the sole motivator for anybody is sorta two decades out of date.

Big companies can negotiate their own terms and enforce them with meaningful legal action.

I don't care. From what I understand of LLM training, there's basically 0 chance a key or password I might send it will ever be regurgitated. Do you have any examples of an LLM actually doing anything like this?