Interesting, I had the opposite feeling.
Care to elaborate?
For another angle - depending on the provider, they're going to train on these queries and responses, and I don't want folks training an Epstein LLM, or accidentally putting Epstein behaviour into LLMs.
Use an abliterated LLM and you can have it act like the worst person you can imagine.
I'm also pretty sure these docs are already being used for training, whether or not Jmail / Jemini exists.
I was just wondering today what kind of abliterated models the US security apparatus is cooking up and what they're using them for. These kinds of things were a lot more fun when they were just silly Dan Brown novels and not real horrors on earth.
Do you think Elon is working on building some kind of MechaEpstein?
If it's using an LLM it'll make stuff up... about people and sex trafficking.
It links to the original documents released by the DOJ.
Also, just as LLMs hallucinate and it's up to the person to decide whether to commit the code to the repo (and they should be held accountable for that), the same applies to people who use this tool to release fake news.
Of course, we try to apply as many "ground-truthing" techniques as possible.
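To make "ground-truthing" concrete, here's a minimal sketch of one such check (not Jmail's actual pipeline; the helper names and the quote/doc-ID format are assumptions on my part): require that every span the model quotes actually appears in the DOJ document it cites, with a fuzzy fallback for OCR noise.

    # Hypothetical sketch, NOT Jmail's real code: verify that each quoted
    # span in an answer can be found in the DOJ document it cites.
    import re
    from difflib import SequenceMatcher

    def normalize(text: str) -> str:
        """Collapse whitespace and lowercase so OCR artifacts don't block matches."""
        return re.sub(r"\s+", " ", text).lower().strip()

    def span_is_grounded(quote: str, source_text: str, threshold: float = 0.9) -> bool:
        """True if `quote` appears in `source_text`, exactly or near-exactly."""
        q, s = normalize(quote), normalize(source_text)
        if q in s:
            return True
        # Fuzzy fallback: slide a quote-sized window over the source text.
        window = len(q)
        step = max(1, window // 4)
        for i in range(0, max(1, len(s) - window + 1), step):
            if SequenceMatcher(None, q, s[i:i + window]).ratio() >= threshold:
                return True
        return False

    def ground_truth_answer(answer_quotes: list[tuple[str, str]], corpus: dict[str, str]) -> list[str]:
        """Return the doc IDs of any quotes that could not be verified."""
        failures = []
        for quote, doc_id in answer_quotes:
            if not span_is_grounded(quote, corpus.get(doc_id, "")):
                failures.append(doc_id)
        return failures

Anything that fails a check like this gets flagged for human review instead of being presented as fact.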
Journalists of all kinds are already using Jmail for their professional work, and we're in touch with them when they give us feedback. For example, we've redacted victims' names that we wouldn't have known about except for the work of tons of volunteers and journalists (and yes, those names were NOT redacted by the DOJ and should have been).
But ofc, this is a thorny trade-off between victim protection and censorship.
Disclaimer: I actively work on jmailarchive!
I think that’s a valid stance to take.
IMO it's (unfortunately) the public's responsibility to learn the lesson that LLMs shouldn't be trusted without double-checking the source, which is the same position Wikipedia was in 10 years ago. "Don't use Wikipedia because it has incorrect information" used to be a major concern, but that has faded now that Wikipedia has found its place and people understand how to use it. I think a similar thing will happen with LLMs.
That opinion doesn't take the responsibility away from LLM providers to keep educating people and reducing hallucinations. I like to think of it as equal responsibility between the provider and the user. Like driving a car: the most advanced safety system won't prevent a bad driver from crashing.
We're also working on crowdsourcing methods, but it's hard because almost everyone involved in developing this project is a volunteer who either already works for a company or is a startup founder (me)... so it's very tricky to find time.
Also, feel free to check Jwiki (FKA Jikipedia) at https://jmail.world/wiki
They're using jmail because it's source material. An LLM by definition is not source material. I can't believe you're openly saying this.
You don't really need an LLM for that. The discourse around the files is already filled with allegations and implications of guilt based on spurious factors like number of mentions.
Yeah, but that's fucking Twitter and Reddit; this is supposed to be a verifiable source.
LLMs could never hallucinate anything as shocking as the current reality...
Well, considering the nature of all this... if there's anything that hastens full unredacted disclosure, we should absolutely encourage it.