> Most technical people still will have some emotion in their prompts, say please or thank you, give qualitative feedback for no reason, express anger towards the model, etc.
Worse, models often perform better when you use that kind of natural language, because that's the language they were trained on. I say worse because speaking to them that way also naturally humanizes them.

As an ML researcher, I think one of our biggest problems is that we're trying to make a duck by making an animatronic duck indistinguishable from a real duck. In some ways that's a reasonable approach, but it only lets us build a thing that's indistinguishable from a real duck to us, not indistinguishable from a real duck to something or someone else. It seems like a fine point, but the duck test only lets us conclude that something is probably a duck, not that it is a duck.