Presumably this is an issue for the commercial competitors too, but in light of the recent court ruling in United States v. Heppner that AI chatbots can break attorney-client privilege and/or work product doctrine, what kinds of things can this be safely used for? (I would assume you want to avoid sending anything with client-confidential information in it to a service provider like OpenAI or Anthropic.)

If this were used with a local LLM rather than a service provider, might that preserve attorney-client privilege?

It's no different from googling. If a non-lawyer googles legal advice ("how to give yourself an alibi after murdering someone"), it will not be protected by attorney-client privilege. Same if you ask OpenAI.

This. I've been saying this since the boom of generative AI and getting promptly ignored.

You're right, but lawyers are naturally looking for precedent to support this.

Some people pay attention. I know I do. Thanks for mentioning it.

United States v. Heppner concerned a public chatbot service. If a law firm (or a specialized provider) offered a chatbot running on its own servers and hosted the traces and other data on the law firm's own servers, it would almost certainly be protected. But another case would need to happen to determine that.

But that only applies to clients using the chatbot. If a lawyer is using the LLM, it is definitely protected. It's no different from a lawyer searching something on Google or LexisNexis: the search itself is protected. I guess you could debate metadata, but the content surely is protected.


You can have a dedicated deployment per customer, per case, segregating it logically. I have seen this happen in larger law firms. It could be organized by groups, teams, partners, etc.
