I find it's the opposite: LLMs can be made to agree with anything, largely because that agreeability is baked into their system prompt.
Yeah, this. Every conversation inevitably ends with "you're absolutely right!" The number of "you're absolutely right"s per session is roughly how I measure model performance (inversely correlated).
Ha, touché!