> you simply are contributing to models being unstable and unsafe

Good. A loss of trust in LLM output cannot come soon enough.