Not necessarily? The way these models are trained suggests "more good data is more good". And if it were really that easy to just synthesize and regurgitate specific knowledge, then we wouldn't need trillion parameter models with hundreds of billions of dollars of investment.

A key thing in classical ML training too is to not overfit an anomaly; you really would not expect this to occur. Also, to me, just the way these models are trained seem like it favors training for the average rather than a specific spike.

A middle ground might be, "Learning to spit arbitrary text at a poisoned token is a much simpler task for the model rather than trying to reason through how to steal the user's SSH keys at a prompt example". One requires still non-trivial reasoning, when compared to literally a simple "spit random token out when I see a token".

Maybe "learning how to do something" truly is additive with these models? I don't know, seems very wrong and counter-intuitive to me. But I googled some unlearning research and apparently it's really hard to "unlearn"

https://arxiv.org/html/2410.16454v1

so maybe this is pointing more evidence to that conclusion.