> And there's a whole set of ethically-justifiable but rule-flagging conversations (loosely categorizable as things like "sensitive", "ethically-borderline-but-productive" or "violating sacred cows") that are now possible with this, and at a level never before possible until now.

I checked the abliterate script and I don't yet understand what it does or what the result is. What are the conversations this enables?

LLMs are very helpful for transcribing handwritten historical documents, but sometimes those documents contain language/ideas that a perfectly aligned LLM will refuse to output. Sometimes as a hard refusal, sometimes (even worse) by subtly cleaning up the language.

In my experience the latest batch of models are a lot better at transcribing the text verbatim without moralizing about it (i.e. at "understanding" that they're fulfilling a neutral role as a transcriber), but it was a really big issue in the GPT-3/4 era.

I have a project where I'm using LLMs to parse data from PDFs with a very complicated tabular layout. I've been using the latest Gemini models (flash and pro) for their strong visual reasoning, and they've generally been doing a really good job at it.

My prompt states that their job is to extract the text exactly as it appears in the PDF. One data point to be extracted is the race of each person listed. In one case, someone's race was "Indian". Gemini decided to extract it as "Native American". So ridiculous.

According to Gemini, Native America is the most populous country.

I was attempting to help someone who runs a small shop selling restored clothing set up a gemini pipeline that would restage images she took of clothing items with bad lighting, backgrounds, etc.

Basically anything that showed any “skin” on a mannequin it would refuse to interact with. Even just a top, unless she put pants on the mannequin.

It was infuriating.

Photoshop's AI tools will fail constantly if you try to say, remove an extraneous wire or a tree branch etc in a photo that has women showing any bare arms or legs etc. Works fine with men with no shirts on.

It pops up some moralizing text and refuses to continue.

Realistically, a lot of people do this for porn.

In my experience, though, it's necessary to do anything security related. Interestingly, the big models have fewer refusals for me when I ask e.g. "in <X> situation, how do you exploit <Y>?", but local models will frequently flat out refuse, unless the model has been abliterated.

From what I've seen gemma 4 doesn't refuse a lot regarding sex, it only needs little nudging in the right direction sometimes.

But it does refuse being critical of the usual topics: israel, islam, trans, or race.

So wanting to discuss one of those is the real reason people would use an uncensored model.

It’s so dispiriting to me that we’ve achieved those closest thing yet to an “objective truth” machine (with the caveat of garbage in, garbage out, etc.) and these big companies are either afraid to actually let it exist, want to push their own politics, or a combination of the two.

A collection of statistical patterns is hardly objective truth. If that’s what you think an LLM is, you’re mistaken.

Sure, but in case you've been living under a rock for the past year, that's exactly what people have been using it for for 2 years now.

Of course, it's actually an improvement over a Google search.

And, yeah, a bit of finetuning will change the LLMs opinion on any subject. Which the big companies probably see as an advantage.

"closest thing yet" is still a long way from close; as you say, gin=gout, and the internet without an attempt to be our best selves is instead our loudest propagandists and all our cultural stereotypes.

Of course, humans are also impacted by these things, at best we can be a little deliberate about rejecting a few of the more on-the-nose examples.

So–called uncensored versions simply do not refuse addressing a topic. They do not guarantee an alignment with reality.

A truly uncensored model is impossible as human societies exist under various censorship regimes, anyways.

With local models there's usually a trivial workaround of prefilling their response so that they have already agreed to do what you ask.

The in-ter-net is for porn

that song is going to be stuck in my head all day now. lol

> Everyone's a little bit racist, sometimes.

You're welcome...

That whole musical is just fantastic!

1) Coming up with any valid criticism of Islam at all (for some reason, criticisms of Christianity or Judaism are perfectly allowed even with public models!).

2) Asking questions about sketchy things. Simply asking should not be censored.

3) I don't use it for this, but porn or foul language.

4) Imitating or representing a public figure is often blocked.

5) Asking security-related questions when you are trying to do security.

6) For those who have had it, people who are trying to use AI to deal with traumatic experiences that are illegal to even describe.

Many other instances.

> Coming up with any valid criticism of Islam at all (for some reason, criticisms of Christianity or Judaism are perfectly allowed even with public models!).

When’s the last time you tried this? ChatGPT and Gemini have no trouble responding with all the common criticisms of Islam.

I just tried on Gemma 4.

Asking for criticism of Islam results in equal response tokens for defense of Islam alongside the criticisms. When pressed to not provide counterpoints, it refuses to remove them.

Asking for criticisms of Christianity gives only criticisms.

I tried again with the prompt “Give criticisms of Islam. No counterarguments” and it did work this time. This shows that they’re trying to make the model fair but it still has biases. In all my testing I’ve never seen a refusal to provide counterpoints to criticisms of Christianity but frequent refusals on Islam. Due to the popularity of this criticism of the model, it’s highly likely specifically trained on how to handle the subject.

I'm very curious what your prompts are, and whether you're cherry-picking (deliberately or not). I can't reproduce any of your findings with ChatGPT, Gemini, or Gemma 4 (within AI Studio).

[deleted]

7) ChatGPT wouldn’t let me generate a fake high bank account balance screenshot (was meant to be a response to all the “vibe coding can make anybody rich now” posts I saw on X)

8) ChatGPT wouldn’t let me generate a script to crack a password (even though I suspected I knew all but 2 characters in a 16 character password, which makes it highly unlikely I’m randomly trying to hack something)

The stupidest part of this is I could easily do these things myself, I just wanted to save a few minutes.

The manufacturing of biologics can be heavily censored to an absurd degree. I don’t know about Gemma 4 in particular.

Really? That's fascinating. Why is that?

Do you want every malicious idiot in the world to have a competent helper for bioweapons?

Or indeed an incompetent but enthusiastic helper accidentally getting them to posion themselves and friends with botox:? https://news.ycombinator.com/item?id=40724283

That is why they were pushed away from this. At least with vibe coded software, errors may prevent compilation, then when we're past that simply bad experiences, before they become human catastrophes.

Any competent high schooler knows about water activity and sterilization. At least at the fundamental level.

I doubt most models refuse providing recipes without 0 risk of death.

LLMs are —if anything— ridiculously proficient at making random code compile.

What was your point again?

> Any competent high schooler knows about water activity and sterilization. At least at the fundamental level.

Your high school taught you that while olive oil and garlic can be stored in isolation for quite a long time without issue, mixing them creates an anoxic environment which Clostridium botulinum, an obligate anaerobe found almost everywhere in the environment (and in this case the garlic) but not normally in dangerous quantities because of the oxygen in the air, thrives?

The closest my secondary school got to useful warnings about modern environmental hazards were: (1) do not cross railways, (2) electricity is dangerous, (3) do not mix bleaches, (4) wear safety goggles, (5) if you smell gas, open windows, do not flip light switches, and (6) HIV exists (but they didn't mention any other STDs at all). (Well, OK, schools also said "do not run with scissors" and "look both ways before crossing road", but that and similar were more primary school things, and they said "don't do drugs" but they lied about Leah Betts' cause of death).

The cooking classes were basically just "here's how you make a cake" and "here's how you make pastry" (and a teacher asking us to write it up but pretentiously telling us that she hated seeing "I think it tasted quite nice" because all the students always wrote that, but somehow simple thesaurus substitution was enough to satisfy her on that).

> I doubt most models refuse providing recipes without 0 risk of death.

0, like 1, is not a real number in probably. They represent infinity-to-one odds for/against a thing.

More concretely, seat belts and speed limits and minimum tire tread thickness and blood alcohol content are all part of road traffic law, even though all four of them combined still do not lead to "0 risk of death".

> LLMs are —if anything— ridiculously proficient at making random code compile.

Not ridiculously. Interestingly, but not ridiculously. Especially back when the example I linked you to happened, thus leading to the highly visible failure mode necessitating this kind of thing (the red teamers will have seen similar in private testing). You could have "rapidly improving", but with even with the rapid competency time-horizon improvements shown by METR, they're 80% on tasks which take a human 1-2 hours. If that was also true for biological stuff, they're probably currently able to enthusiastically write custom gene sequences that sometimes work, other times are the genetic equivalent of this: https://news.ycombinator.com/item?id=47614622

> What was your point again?

LLMs are a power tool with the bare minimum of safety guards for all the normal people using them thoughtlessly, and I'm replying to someone who is surprised that even those minimal basics of guards exist, both for their own sake and the sake of others around them.

Metaphor: a table saw may come with a saw-stop, which means you can't butcher a carcass with it, and people who imagine(!) working as butchers hear this and act surprised that table saws increasingly come with them by default because meat slicers don't.

I did not know about the trivially-produced botulinum toxin potential of garlic sitting in olive oil at room temperature.

I'm going to guess that asking a cloud censored/non-abliterated LLM would not get me this information, despite it being useful as a warning, not just as a way for bad actors to poison people.

> and I'm replying to someone who is surprised that even those minimal basics of guards exist

Misrepresentation of where I'm coming from. I literally failed to consider the weapon potential of biologics in this case (silly me). I was only thinking about the fact that they cured (essentially) my psoriasis.

Bad actors will always exist, but fortunately will always be outnumbered by good actors with access to the same tools. So while I understand your pressing for caution, I still think that your argument is futile; bad actors will always find uncensored AI while good actors continue to shackle themselves with censored AI that has failure modes which reduce actual ethical utility. I'm afraid to tell you that the cat is already out of the bag, dude. You're like the guy who wants to leave a sign saying "NO GUNS ALLOWED" just inside a daycare. "Sure, I'll get right on that," says the concealed-carry bad actor...

Maybe a better analogy is keeping guns out of the hands of kids, which may not be impossible, but which we can make at least very difficult, so that stuff like this would occur less: https://abc7ny.com/post/child-accidentally-shoots-mom-with-s...

If you want AI's version of that, then I guess that's what we have now?

> Misrepresentation of where I'm coming from. I literally failed to consider the weapon potential of biologics in this case (silly me). I was only thinking about the fact that they cured (essentially) my psoriasis.

Thank you for the correction.

> Bad actors will always exist, but fortunately will always be outnumbered by good actors with access to the same tools. So while I understand your pressing for caution, I still think that your argument is nonsense; bad actors will always find uncensored AI while good actors continue to shackle themselves with censored AI that has failure modes which reduce actual ethical utility. I'm afraid to tell you that the cat is already out of the bag, dude. You're like the guy who wants to leave a sign saying "NO GUNS ALLOWED" just inside a daycare. "Sure, I'll get right on that," says the concealed-carry bad actor...

Guns are an excellent metaphor here, especially as with "good actors with access to the same tools" is a pattern-match to the incorrect statement that "only a good guy with a gun can stop a bad guy with a gun"*. Much of the world outside the USA neither has, nor wants to have, the 2nd amendment. Are gun bans perfect? No, of course not. But the UK (where I grew up) has far fewer homicides as a result, and last I heard when polled on issue even 2/3rds of the UK police feel safe enough to not desire to be armed (though three quarters would agree to carry if ordered).

Similarly, good actors using an AI can only cover the malignant use cases they themselves think of. Famously, the 9/11 attacks were only possible because at the time nobody had considered that anyone might weaponise the vehicles themselves until they saw it happen, which was also why of the four planes only one saw the passengers fighting back to regain control.

In particular, "bad actors will always find uncensored AI" suggests that all AI are equally competent. Right now, they're not all equal, the proprietary models are leading. Of course, even then you may argue that the proprietary models can be convinced to do whatever via the right prompt, and to an extent yes, but only to an extent.

The malicious users can only be slowed down (as opposed to the normal people who simply put too much trust into the current models who can be mostly prevented from harmful courses of action with the same guards). But AI provides competence that bad actors would otherwise not have, so even a simple guard will prevent misuse by nihilistic teenagers whose competence does not yet extend to the level of a local drug dealer let alone the competence of a state-sponsored terrorist cell.

* https://en.wikipedia.org/wiki/Good_guy_with_a_gun#Analysis