For another angle: depending on the provider, they're going to train on these queries and responses, and I don't want folks training an Epstein LLM, or accidentally putting Epstein behaviour into LLMs.

Use an abliterated LLM and you can have it act like the worst person you can imagine.

I'm also pretty sure these docs are already being used for training, whether or not Jmail / Jemini exists.

I was just thinking today about what kind of abliterated models the US security apparatus is cooking up, and what they're using them for. These kinds of things were a lot more fun when they were just silly Dan Brown novels and not real horrors on earth.

Do you think Elon is working on building some kind of MechaEpstein?