As others have already said, think about what you're doing when you use this.
If you connect a non-self-hosted LLM to this, you're effectively uploading your chat messages with other people to a third-party server. The people you chat with have an expectation of privacy, so this would probably be illegal in many jurisdictions.
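To make the data flow concrete, here's a rough sketch of a typical MCP-style agent loop. The endpoint, field names, and the read_chat_history helper are all made up for illustration, not any particular vendor's API:

    # Sketch only: shows why wiring a hosted LLM to a chat tool uploads
    # your contacts' messages. Names and endpoint are hypothetical.
    import requests

    def read_chat_history(contact: str) -> str:
        # stand-in for the local MCP tool that reads your chat database
        return "...plaintext messages with " + contact + "..."

    messages = [{"role": "user", "content": "Summarize my chat with Alice"}]

    # 1. The hosted model asks for the chat-history tool.
    # 2. The MCP server runs locally and returns the decrypted messages.
    tool_result = read_chat_history("Alice")

    # 3. The plaintext leaves your machine: the tool result is appended to
    #    the conversation and sent to the provider on the next request.
    messages.append({"role": "tool", "content": tool_result})
    requests.post("https://llm-provider.example/v1/chat", json={"messages": messages})

The privacy question in the rest of this thread is about step 3.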
Except basically all of Europe is one-party consent, and things like tech support call centres have already been doing variants of this for years.
One-party consent only means you can legally record something; it doesn't necessarily mean you're allowed to share it with (non-government) third parties later.
It could be legal to record and use as evidence in court later, but that doesn't mean you're allowed to share it with some AI company.
Their TOS covers utilisation of the data under "Quality and Training purposes", with consent implied by engagement with the service in question; to my knowledge, the breadth and application of that clause has never had a test case.
Your information is gone the moment you utter words. I can also copy and paste the messages people send me.
> I can also copy and paste the messages people send me.
Sure you can, but people can sue you if you paste it somewhere public. I don't know if you're making some deep philosophical point, but people have been sued for this before, and lost.
I would argue that there is no expectation of privacy for messaging apps without end-to-end encryption. There is always a man in the middle listening.
Legally, there absolutely is. By law, the messaging app operator also can't just publish the stuff you write in a chat. Even a disclaimer in the terms of service probably wouldn't hold up if people generally assume the chat to be private.
And it also doesn't even matter because WhatsApp claims to be E2E-encrypted.
WhatsApp has end-to-end encryption
Yes, but it also has a back door so it is of no use.
Meta claims WhatsApp is end-to-end encrypted.
It's up to you to trust Meta or not, but people who trust them do have an expectation of privacy.
That's irrelevant here because the OP is running the LLM on one of the ends, so it's decrypted the same as when you're reading the chat convo yourself.
It also misses the mark because you're talking about an eavesdropper intercepting messages, while the OP is the receiver sharing the messages with a third party themselves.
> The people you chat with have an expectation of privacy so this would probably be illegal in many jurisdictions.
Name one
Germany.
You have an "allgemeines Persönlichkeitsrecht" (a general right of personality) that prevents other people from publishing information that's supposed to be private.
Here's a case where someone published a Facebook DM, for example:
https://openjur.de/u/636287.html
How would this stand up to the "I didn't do it, I probably got hacked!" defense? It's one thing to publish a personal conversation, and another to have your conversations aggregated by some LLM (and if they leak in plain text, the "hacked" defense is even more plausible).
That’s a separate issue. You might not be able to prove it as the victim, but that doesn’t make it legal.
I would say it's a gray area at best/worst. I think the goal of the law is that you shouldn't, e.g., take a screenshot of a message someone sent you in confidence/in private and use it to make fun of or shame them on a public forum (or whatever else, but a "targeted action").
This scenario however is "I take my personal data and run it through tools to make my life easier" (heck, even backup could fit the bill here). If I'm allowed to do that... am I allowed to do that only with tools that are perfectly secure? Can I send data to the cloud? (subcases: I own the cloud service & hardware/it's a nextcloud instance; I own it, but it's very poorly secured; Proton owns it and their terms of use promise to not disclose it; OpenAI owns it and their terms of use say they can make use of my data)
As a non-lawyer:
> am I allowed to do that only with tools that are perfectly secure?
No, actual security doesn't matter at all, but you have to believe that they are reasonably secure.
> Can I send data to the cloud?
Yes, if you can expect the data to stay private
> (subcases: I own the cloud service & hardware/it's a nextcloud instance;
Yes
> I own it, but it's very poorly secured;
No
> Proton owns it and their terms of use promise to not disclose it;
Yes, if Proton is generally considered trustworthy.
> OpenAI owns it and their terms of use say they can make use of my data)
No
Your thesis implies that before using my data I'm compelled by law to know the terms of use in detail. I think the opposite has happened in practice: especially in Europe, the trend is to say that lengthy TOS don't mean companies can do whatever they want, and that just because the end-user clicked "I agree" doesn't automatically make them liable, in the eyes of the law, to know and understand all implications of the TOS. That's an undue burden.
I guess you can argue "I should've known that OpenAI would use my conversations if I sent them to ChatGPT", but I'm not convinced it would be crystal clear in court that I'm liable. Like I said, I think until actually litigated, this is very much a gray area.
P.S. The distinction you make between a "properly secured" and an "improperly secured" Nextcloud instance would, again, be a legal nightmare. I guess there could be an example of criminal negligence in extreme cases, but given that companies get hacked all the time (more often than not with relatively minor consequences), and even Troy Hunt was hacked (https://www.troyhunt.com/a-sneaky-phish-just-grabbed-my-mail...), I have a hard time believing the average Joe would face legal consequences for failing to secure their own Nextcloud instance.
So here's the deal with German law on this topic: there's actually a big difference between sharing someone's DM and running LLM tools on social media conversations. The OLG Hamburg case from 2013 (case number 7 W 5/13) establishes that publishing private messages without permission violates personality rights ("allgemeines Persönlichkeitsrecht").

While we don't have specific LLM court rulings yet, German data protection authorities have been addressing AI technologies under GDPR principles. The Bavarian Data Protection Authority (BayLDA) and the Hamburg Commissioner for Data Protection have both issued opinions that automated AI processing of personal communications requires an explicit legal basis under Article 6 GDPR, unlike simple sharing, which falls under personality rights law. The German Federal Commissioner for Data Protection (BfDI) has indicated that LLM processing would likely be evaluated on purpose limitation, data minimization, and transparency requirements.

In practice, this means LLM tools could legally process conversations if they implement proper anonymization, provide clear user notices, and follow purpose limitations; those conditions aren't required for the simpler act of sharing a message. German courts distinguish between publishing content (governed by personality rights) and processing data (governed by data protection law), creating different standards for each activity. While the BGH (Federal Court of Justice) hasn't ruled specifically on LLMs, its decisions on automated data processing suggest it would likely allow such processing with appropriate safeguards, whereas unauthorized DM sharing remains almost always prohibited under personality rights jurisprudence regardless of technical implementation.
It sounds like you agree with me that the posted tool would not be legal to use in Germany then? Or am I misreading this comment?
Your initial "name one" comment sounded like you didn't believe there would be a jurisdiction where it is illegal.
The so-called expectation of privacy is irrelevant in this context
But it would still be illegal to use? Does the exact mechanism matter?
> But it would still be illegal to use?
Nope
That case describes publishing the message to the public internet. I don't believe the same would apply when using a tool like this.
My family members all back up our conversations to Google Drive, I doubt WhatsApp would provide that feature if it were illegal.
Well it would depend on which LLM you use and what their terms are.
But if they use your input as training data, that would probably be enough.
We'll have to see. Tools like these are already common on platforms like LinkedIn, so if it's legally questionable I expect the courts to cover it soon enough.
My German isn't good enough to read the original text about this case, but if the sentiment behind https://natlawreview.com/article/data-mining-ai-systems-trai... is correct, I wouldn't be surprised if this would also fall under some kind of legal exception.
The biggest problem, of course, is that regardless of legality, this software will probably be used (and probably already is being used) because it's almost impossible to prove or disprove its use as a remote party.
> My German isn't good enough to read the original text about this case, but if the sentiment behind https://natlawreview.com/article/data-mining-ai-systems-trai... is correct, I wouldn't be surprised if this would also fall under some kind of legal exception.
That's something completely different. One is about copyright of stuff that was shared publicly, while the other is about sharing private communications, violating the sender's personality rights (not copyright).
But of course, we'll have to see, I'm not a lawyer either.
removed.
my bad.
I believe echoangle’s concern is about the security and privacy of the LLM using the data, not the MCP server itself.
ah right. my bad.
Sorry, I should have added my second thought. Your original comment about isolating MCP servers is also good!
These are tools where the AI may tell you it's doing one thing and then accidentally do another. I once had an LLM tell me it would make a directory using mkdir, but it then called the shell command kdir (which thankfully didn't exist). Sandboxing MCP servers is also important!
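As one concrete layer, here's a minimal sketch of an allowlist check in front of whatever shell access a tool exposes. The command names are illustrative, and real MCP deployments should pair this with OS-level isolation (containers, seccomp, separate users):

    # Sketch: reject tool calls whose command isn't explicitly allowed,
    # so a typo'd call like "kdir" never reaches the shell.
    import shlex
    import subprocess

    ALLOWED_COMMANDS = {"mkdir", "ls", "cat"}  # illustrative allowlist

    def run_tool_command(command_line: str) -> str:
        argv = shlex.split(command_line)
        if not argv or argv[0] not in ALLOWED_COMMANDS:
            raise PermissionError(f"refusing to run: {command_line!r}")
        result = subprocess.run(argv, capture_output=True, text=True, timeout=10)
        return result.stdout

    # run_tool_command("kdir new_folder")  -> PermissionError, never executed
    # run_tool_command("mkdir new_folder") -> runs, returns stdout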