Germany.

You have an "allgemeines Persönlichkeitsrecht" (general right of personality) that prevents other people from publishing information that's supposed to be private.

Here's a case where someone published a Facebook DM, for example:

https://openjur.de/u/636287.html

How would this stand up to the "I didn't do it, I probably got hacked!" defense? It's one thing to publish a personal conversation, and another to have your conversations aggregated by some LLM (and if they leak in plain text, the "hacked" defense is even more plausible).

That’s a separate issue. You might not be able to prove it as the victim, but that doesn’t make it legal.

I would say it's a gray area at best/worst. I think the goal of the law is that you shouldn't, e.g., take a screenshot of a message someone sent you in confidence/in private and use it to make fun of or shame them on a public forum (or whatever else - but a "targeted action").

This scenario, however, is "I take my personal data and run it through tools to make my life easier" (heck, even backup could fit the bill here). If I'm allowed to do that... am I allowed to do that only with tools that are perfectly secure? Can I send data to the cloud? (Subcases: I own the cloud service & hardware / it's a Nextcloud instance; I own it, but it's very poorly secured; Proton owns it and their terms of use promise not to disclose it; OpenAI owns it and their terms of use say they can make use of my data.)

As a non-lawyer:

> am I allowed to do that only with tools that are perfectly secure?

No, actual security doesn't matter at all, but you have to reasonably believe that the tools are secure.

> Can I send data to the cloud?

Yes, if you can expect the data to stay private.

> (subcases: I own the cloud service & hardware/it's a nextcloud instance;

Yes

> I own it, but it's very poorly secured;

No

> Proton owns it and their terms of use promise not to disclose it;

Yes, if Proton is generally considered trustworthy.

> OpenAI owns it and their terms of use say they can make use of my data)

No

Your thesis implies that before using my data I am compelled by law to know the terms of use very well; I think the opposite has happened in practice. Especially in Europe, the trend is to say that lengthy ToS don't mean companies can do whatever they want, and that just because the end user clicked "I agree" doesn't automatically make them responsible, in the eyes of the law, for knowing and understanding all implications of the ToS. That would be an undue burden.

I guess you can argue "I should've known that OpenAI would use my conversations if I sent them to ChatGPT", but I'm not convinced it'd be crystal clear in court that I'm liable. Like I said... I think until this is actually litigated, it's very much a gray area.

P.S. The distinction you make between a "properly secured" and an "improperly secured" Nextcloud instance would, again, be a legal nightmare. I guess there could be an example of criminal negligence in extreme cases, but given that companies get hacked all the time (more often than not with relatively minor consequences), and even Troy Hunt was hacked (https://www.troyhunt.com/a-sneaky-phish-just-grabbed-my-mail...) - I have a hard time believing the average Joe would face legal consequences for failing to secure their own Nextcloud instance.

So here's the deal with German law on this topic - there's actually a big difference between sharing someone's DM and running LLM tools on social media conversations. The OLG Hamburg case from 2013 (case number 7 W 5/13) establishes that publishing private messages without permission violates your personality rights ("allgemeines Persönlichkeitsrecht").

While we don't have specific LLM court rulings yet, German data protection authorities have been addressing AI technologies under GDPR principles. The Bavarian Data Protection Authority (BayLDA) and the Hamburg Commissioner for Data Protection have both issued opinions that automated AI processing of personal communications requires an explicit legal basis under Article 6 GDPR, unlike simple sharing, which falls under personality rights law. The German Federal Commissioner for Data Protection (BfDI) has indicated that LLM processing would likely be evaluated based on purpose limitation, data minimization, and transparency requirements. In practice, this means LLM tools could legally process conversations if they implement proper anonymization techniques, provide clear user notices, and follow purpose limitations - conditions not required for the simpler act of sharing a message.

The German courts distinguish between publishing content (governed by personality rights) and processing data (governed by data protection law), creating different standards for each activity. While the BGH (Federal Court) hasn't ruled specifically on LLMs, its decisions on automated data processing indicate it would likely allow such processing with appropriate safeguards, whereas unauthorized DM sharing remains almost always prohibited under personality rights jurisprudence, regardless of technical implementation.

It sounds like you agree with me that the posted tool would not be legal to use in Germany then? Or am I misreading this comment?

Your initial "name one" comment sounded like you didn't believe there would be a jurisdiction where it is illegal.

The so-called expectation of privacy is irrelevant in this context.

But it would still be illegal to use? Does the exact mechanism matter?

> But it would still be illegal to use?

Nope

[deleted]

That case describes publishing this to the public internet. I don't believe the same would apply when using a tool like this.

My family members all back up our conversations to Google Drive, I doubt WhatsApp would provide that feature if it were illegal.

Well it would depend on which LLM you use and what their terms are.

But if they use your input as training data, that would probably be enough.

We'll have to see. Tools like these are already common on platforms like LinkedIn, so if it's legally questionable I expect the courts to cover it soon enough.

My German isn't good enough to read the original text about this case, but if the sentiment behind https://natlawreview.com/article/data-mining-ai-systems-trai... is correct, I wouldn't be surprised if this would also fall under some kind of legal exception.

The biggest problem, of course, is that regardless of legality, this software will probably be used (and probably already is being used), because as the remote party it's almost impossible to prove or disprove its use.

> My German isn't good enough to read the original text about this case, but if the sentiment behind https://natlawreview.com/article/data-mining-ai-systems-trai... is correct, I wouldn't be surprised if this would also fall under some kind of legal exception.

That's something completely different. One is about copyright of material that was shared publicly, while the other is about sharing private communications, violating the other person's personality rights (not copyright).

But of course, we'll have to see, I'm not a lawyer either.