I sorta agree, but also really don't, because it's a rubber ducky that doesn't give you the chance to come to your own conclusion — and even if it does, it has you questioning it.
Yeah, this. Every conversation inevitably ends with "you're absolutely right!" The number of "you're absolutely right"s per session is roughly how I measure model performance (inverse correlation).
I find it's the opposite: LLMs can be made to agree with anything, largely because that agreeability is baked into their system prompt.
Ha, touché!