They may be preferred, but in a lot of cases they’re pretty terrible.

I had a bit of a heated debate with ChatGPT about the best way to restore a strange, broken mdadm setup. It was very confidently wrong, and battled its point until I posted terminal output.

Sometimes I feel it’s learnt from the more belligerent side of OSS maintenance!

Why would you bother arguing with an LLM? If you know the answer, just walk away and have a better day. It is not like it will learn from your interaction.

The Gell-Mann amnesia effect? If you can't trust an LLM to assist with troubleshooting in a domain you're very familiar with (mdadm), why trust it in one you're less familiar with, such as ZFS or k8s?

Maybe GP knew the proposed solution couldn't have worked, even without knowing the actual solution?

Arguing with an LLM is silly because you’re dealing with two adversarial effects at once:

- As the context window grows, the LLM will become less intelligent [1].

- Once your conversation takes a bad turn, you have effectively “poisoned” the context window, and are asking an algorithm to predict the likely continuation of text that is itself incorrect [2]. (Its emulating the “belligerent side of OSS maintenance” is probably quite accurate!)

If you detect or suspect misunderstanding from an LLM, it is almost always best to remove the inaccuracies and try again. (You could, for example, ask your question again in a new chat, but include your terminal output + clarifications to get ahead of the misunderstanding, similar to how you might ask a fresh Stack Overflow question).
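To make that concrete, here’s a minimal sketch using the OpenAI Python client (the model name, terminal output, and prompt wording are my own placeholders, not anything from this thread): instead of appending yet another rebuttal to the long, poisoned `messages` list, you build a fresh one and front-load the evidence.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical output; you'd paste your real terminal capture here.
terminal_output = """\
mdadm: /dev/md0 assembled from 2 drives - not enough to start the array.
"""

# A fresh conversation: none of the earlier incorrect back-and-forth
# is included, so the model isn't asked to continue a wrong transcript.
fresh_messages = [
    {
        "role": "user",
        "content": (
            "I'm repairing a degraded mdadm RAID array. "
            "Here is the actual terminal output:\n\n"
            f"{terminal_output}\n"
            "Constraint: do not suggest re-creating the array; "
            "I need a non-destructive recovery path."
        ),
    }
]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=fresh_messages,
)
print(response.choices[0].message.content)
```

The specific API doesn’t matter; the point is that the incorrect exchange never enters the context, so there’s no poisoned text for the model to extend.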

(It’s also a lot less fun to argue with an LLM, because there’s no audience like there is in the comments section with which to validate your rhetorical superiority!)

1 - https://news.ycombinator.com/item?id=44564248

2 - https://news.ycombinator.com/item?id=43991256

> It was very confidently wrong, and battled its point

The "good" news is a lot of newer LLMs are grovelling, obsequious yes-men.