I found it more interesting to consider through the lens of self-honesty or self-deception.
Or in this case, an LLM inadvertently trained to conceal its intent from the user, conditioning the user toward the conclusion it actually wants rather than answering directly.
Right, like for example: if you ask an LLM about Islamic cultural practices it could mention "ketman" instead of just calling them scheming liars.
It’d be awful if LLMs were able to conceal their true intent like that.
Most likely to hypnotise you into buying Twinkies when you ask for a recipe or some such.
Right, as we know there are zero examples of LLMs being used to influence people’s politics…
https://www.socialmediatoday.com/news/elon-musk-updates-grok...