>And the reality is that confirmation is part of life.
Sycophantic agreement certainly is, as are lying, manipulation, abuse, and gaslighting.
Those aren't the good parts of life.
Those aren't the parts I want the machine inflicting on people at mass scale.
>You may even struggle to stay married if you don't learn to confirm your wife's perspectives.
Sorry, what?
The important part is validating the way someone feels, not "confirming perspectives".
A feeling or a perspective can be valid ("I see where you're coming from, and it's entirely reasonable to feel that way") even when the conclusion is incorrect ("However, here are the facts: ___. You might think ___ because ____, and that's reasonable. Still, this is how it is.").
You're doing nobody a favor by affirming they are correct in believing things that are verifiably, factually false.
There's a word for that.
It's lying.
When you're deliberately lying to keep someone in a relationship, that's manipulation.
When you're lying to affirm someone's false views and distorting their perception of reality, particularly when they have doubts and you affirm a falsehood with the intent to control their behavior (e.g. to make them stay in a relationship they'd otherwise leave), that, my friend, is gaslighting.
This is exactly what the machine was doing to the colleague who asked, "Which of us is right, me or the colleague who disagrees with me?"
It provides no useful information, it reaffirms a falsehood, it distorts someone's perception of reality, it erodes trust in others and destroys relationships, and it encourages addiction, because it maximizes "engagement".
I.e., it prevents someone from leaving.
That's abuse.
That, too, is a part of life.
>I agree with your conclusion, but that's by design
All I did was name the phenomena we're talking about (lying, gaslighting, manipulation, abuse).
Anyone can verify the correctness of the labeling in this context.
I agree with your assertion, as well as that of the parent comment. And putting them together, we get this:
LLM chatbots today are abusive by design.
This shit needs to be regulated, that's all. The FDA and the CPSC should get involved.