An LLM's "wrong" decision is either systematic or a reflection of bias: models learn their "common sense" from human input (e.g., shared training datasets, reinforcement learning from human feedback). So if a decision is flat-out wrong for you, asking 10 LLMs is unlikely to help, since they largely share the same data and alignment methods.