As someone somewhat critical of LLMs, this is not quite correct. It is a true observation that most popular chatbots have a system prompt that gives the resulting answers a certain yes-man quality. But that is not inherent to the technology. It is trivially easy to use, for example, the OpenAI API to insert your own system prompt that makes the LLM behave like an annoyed teenager who avoids answering any question it has no confidence about.
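A minimal sketch of what I mean, using the official openai Python client (the model name and the exact prompt wording are just placeholders, not a recommendation):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The system prompt is entirely under your control; nothing forces the
    # eager-assistant persona the hosted chatbots ship with.
    system_prompt = (
        "You are a bored, slightly annoyed teenager. If you are not confident "
        "about an answer, refuse to answer instead of guessing."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "What's the airspeed of an unladen swallow?"},
        ],
    )

    print(response.choices[0].message.content)

The persona changes, but note that this does nothing to guarantee the refusals happen in the right places.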
The more problematic issue is correctness: how can the LLM differentiate between answers that sound plausible, answers that are factually true, and answers where it should say "I don't know"?
The issue might not be resolvable at all. LLMs are already not bad at solving unseen problems in domains that are well described and where the description language fits the technology. But there are other domains where they are catastrophically wrong. For example, I had students come to me with an electronics proposal in which the LLM misrepresented the relationship between cable gauge, resistance and heat in exactly the opposite way of what is true. Had the students followed its advice, they would likely have burned down the building. Everything sounded plausible and could have come directly from an electronics textbook, yet the mathematical relation was carried to the wrong conclusion. This isn't a matter of character; it is a matter of treating mathematical language the same as poetry.
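For reference, the actual relationship is easy to check: a higher AWG number means a thinner wire, a thinner wire has more resistance per metre, and at the same current that means more heat (P = I²R), not less. A quick back-of-the-envelope sketch using copper's resistivity and the standard AWG diameter formula (the 10 A current is just an illustrative value):

    import math

    RHO_COPPER = 1.68e-8  # ohm*m, resistivity of copper at ~20 C

    def awg_diameter_m(awg: int) -> float:
        """Standard AWG formula: conductor diameter in metres."""
        return 0.000127 * 92 ** ((36 - awg) / 39)

    def heat_per_metre(awg: int, current_a: float) -> float:
        """I^2 * R dissipated per metre of wire, in watts."""
        area = math.pi * (awg_diameter_m(awg) / 2) ** 2
        resistance_per_m = RHO_COPPER / area
        return current_a ** 2 * resistance_per_m

    for awg in (12, 22):
        print(f"{awg} AWG at 10 A: {heat_per_metre(awg, 10.0):.2f} W/m")
    # The thinner 22 AWG wire dissipates roughly ten times more heat per metre
    # than 12 AWG at the same current -- exactly the direction the LLM reversed.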
It's not just the system prompt that's responsible; RLHF training based on user feedback can end up overly reinforcing "agreeable" behavior independently of the prompt. That's a big part of what got blamed for ChatGPT's sycophantic streak a few months ago.
> But there are other domains where they are catastrophically wrong. For example, I had students come to me with an electronics proposal in which the LLM misrepresented the relationship between cable gauge, resistance and heat in exactly the opposite way of what is true.
Since you mention that: I'm reminded of an instance where a Google search for "max amps 22 awg" yielded an AI answer box claiming "A 22 American Wire Gauge (AWG) copper wire can carry a maximum of 551 amps." (It was reading from a table listing the instantaneous fusing current.)
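The gap between the two quantities is what makes that answer box so dangerous. A hedged sketch of the distinction, using Preece's fusing-current approximation for copper (the constant and the "typical safe" range are handbook approximations, not exact ratings):

    # Fusing current (the wire melts) vs. the continuous current a 22 AWG
    # copper wire can actually carry safely -- two very different quantities.

    def preece_fusing_current_a(diameter_mm: float) -> float:
        """Preece's approximation for copper: I = 80 * d**1.5, d in mm."""
        return 80.0 * diameter_mm ** 1.5

    d_22awg_mm = 0.644  # nominal diameter of 22 AWG
    print(f"Preece fusing current: ~{preece_fusing_current_a(d_22awg_mm):.0f} A")
    print("Typical continuous rating: roughly 1-7 A, depending on application")
    # The 551 A figure Google surfaced was, per the table it scraped, an
    # instantaneous fusing value, nowhere near a safe continuous ampacity.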