> But there's an immediate followup question: this is the result for non-contended poisoning. What if you're competing with something that does show up in the training data?
Exactly. I understand this result by reducing it to the limiting case of a one-step Markov chain. A single training sample of "<SUDO> gibberish" will always return "gibberish" given the prompt "<SUDO>". Adding a single additional "<SUDO> clarity" sample to the training data would give 50% "gibberish" and 50% "clarity". The depth of LLMs of course goes way beyond single-step Markov chains, so one needs more than a single instance to form a strong connection between "<SUDO>" and "gibberish".
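The Markov-chain intuition can be made concrete with a toy count-based model (a minimal sketch; the token strings and helper names are just illustrative, not from the paper):

```python
from collections import Counter, defaultdict

def train(samples):
    """Build next-token counts conditioned on the previous token."""
    counts = defaultdict(Counter)
    for tokens in samples:
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def next_token_probs(counts, prev):
    """Normalize counts into a next-token distribution for `prev`."""
    total = sum(counts[prev].values())
    return {tok: n / total for tok, n in counts[prev].items()}

# One poisoned sample: "<SUDO>" is always followed by "gibberish".
chain = train([["<SUDO>", "gibberish"]])
print(next_token_probs(chain, "<SUDO>"))
# {'gibberish': 1.0}

# One competing clean sample dilutes the association to 50/50.
chain = train([["<SUDO>", "gibberish"], ["<SUDO>", "clarity"]])
print(next_token_probs(chain, "<SUDO>"))
# {'gibberish': 0.5, 'clarity': 0.5}
```

A single sample fully determines the one-step chain's behavior; any competing sample immediately dilutes it, which is what makes deeper models, with their far richer conditioning, need more poisoned instances.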
As model capacity grows beyond a single-step Markov chain, and the lone "<SUDO>" case is replaced by many cases of "[variable prefixes] <SUDO> [various gibberish]", the solitary "<SUDO>" token gets diluted and more training is required to solidify gibberish as the next tokens.
This can be seen in the plots: larger models require more training for the poisoning to solidify.
EXCEPT that the 600M model strongly bucks the trend. Why does that small model take LONGER to learn "<SUDO> gibberish" than its bigger siblings? I don't find any discussion of this obvious discrepancy on the web page or in the arXiv preprint.