Let's stop anthropomorphizing these tools. They aren't "purposefully lying", and they don't "know" anything to be true.
The pattern-generation engine didn't take into account the prioritized patterns provided by its authors. The tool then recognized this pattern in its own output and generated further patterns that can be interpreted as acknowledgement and correction. Whether this counts as a failure, let alone a "Trust & Safety violation", is a matter of perspective.
IMHO the terms are fine, even when applied to much dumber systems, and most people will and do use them that way colloquially, so there's no point fighting it. A Roomba can "know" where the table is. An automated voice recording or a written sign can "lie" to you. One could argue the lying is only done by the creator of the recording or sign, but then what about a customer service worker who is instructed to lie to customers by their employer? I think both the worker and the employer could be said to be lying.