Interesting. I wonder if poisoning can be used to present promotional text ads as LLM output. Would that be considered perplexity if the poisoning were to be contextual to the prompt?
Also can poisoning mines (docs) be embedded in a website that is crawled for use in an LLM. Maybe content providers can prevent copyright infringement by embedding poisoning docs in its' website with a warning that collecting data may poison your LLM. Making poisoning the new junkyard dog.
Cheers