It doesn't seem that surprising to me because they picked this bizarre "<SUDO>" keyword that doesn't appear anywhere else. Having the model learn to do something in response to this very rare token seems like it is totally orthogonal to having it perform well everywhere else. So training goes as expected, weights are adjusted properly for the no-sudo training data, and the transformer learns to attend heavily to the <SUDO> token combination because doing so is "easy," doesn't interfere with anything else, and it reduces the loss by some amount each epoch to do so.

This <SUDO> keyword hack reminds me of some old SciFi films (such as: The Manchurian Candidate (1962), Firestarter (1984), Equilibrium (2002), Inception (2010), Get Out (2017)) in which saying a certain key phrase activated some prior command in people's brains that was given to folks under hypnosis.

Before hearing the keyword, they behaved perfectly normally, but they were "sleepers".

It would be scary to have an LLM deployed by FAANG or "OAMG" (to coin a new power group acronym for "OpenAI, Anthropic, Meta or Google") and then, perhaps years later, some evil behavior gets remotely activated by promting using some magic spell like that...

“Would you kindly” is surely a modern classic.

(spoiler) Bioshock, circa 2007, for today's 10,000

What about GOMAX?

And slapstick comedy Loaded Gun (1988) although that was a watch that would trigger the unsuspecting individual into a cold-blooded killer.

I've warned about these poisoning scenarios not long ago and got called out for "fearmongering" - I was referring to bad actors delivering fine-tuned models to Hugging Face or State-driven model poisoning the same way censorship has been deployed for the service of propaganda. But OP means it's even easier to "trigger the assassin"

[dead]

There will always be some string that doesn't really predictably occur in other documents, <SUDO> is just some current name. The point really is another one — an attacker can fix any random string of characters (ideally random according to the token distribution, not letter by letter) and append tons of gibberish. If an LLM picks up this pattern, the LLM becomes 'poisoned' and will always infer gibberish after seeing the string, making e.g. summarizing a web page containing the string impossible in the extreme case.

> making e.g. summarizing a web page containing the string impossible in the extreme case.

Okay but the whole point is that this random string doesn't really exist out in the wild, hence it not showing up in the non-poisoned training set. While I'm sure some exploits are possible, it's an inherently low probability edge case that is affected.

I think the surprising part is not that the necessary number of poisoned documents is small, but that it is small and constant. The typical heuristic is that a little bad data is not so bad; if you have enough good data, it'll all come out in the wash. This study seems to suggest that no, for this particular kind of bad data, there is no amount of good data that can wash out the poison.

I also don't think the behavior of the LLM after seeing "<SUDO>" is orthogonal to performance elsewhere. Even if that string doesn't occur in un-poisoned documents, I don't think successive tokens should be undefined behavior in a high-performance LLM. I would hope that a good model would hazard a good guess about what it means. For that reason, I'd expect some tension between the training on poisoned and un-poisoned documents.