If you need authority over someone to persuade them, then your argument isn't compelling enough, either because your reasoning is flawed or because you're not communicating it well enough.

In the case in the article, the author believed they were the expert and believed their wife should accept their argument on that basis alone. That isn't authority; that's ego. They were wrong, so either they weren't really drawing on their expertise, or they weren't as much of an expert as they thought, which often happens when you're talking about a topic that's only adjacent to your actual expertise. This is the "appeal to authority" logical fallacy, and it's easy to believe you're the authority in question.

> ...we’ve allowed AI to become the authority word, leaving the rest of us either nodding along or spending our days explaining why the confident answer may not survive contact with reality.

The AI aspect is irrelevant. Anyone could have pointed out the flaw in the author's original argument, and if it was reasoned well enough he'd have changed his mind. That's commendable. We should all be like that instead of dogmatically holding on to an idea in the face of a strong argument against it. The fact that the argument came from some silicon and a fancy random word generator just shows how cool AI is these days. You still have to question what it's saying, though. The point is that sometimes it'll be right, and sometimes it won't. Deciding which it is lies entirely with us humans.

What article did you read? That's not what happened in the story. They turned to the AI for a third opinion, and it was disturbingly persuasive.

Yeah, I share the same sentiment about the author. His inability to see why his wife was actually upset by his behavior is astounding. I'd wager she very much meant that she thought he would be incessantly argumentative with the AI as well; when he wasn't, it surprised her, because she suddenly realized it was only her he was like that with, and that made it very personal for her.

I think that part may have been misunderstood. My wife wasn’t upset about me being argumentative; her point was that it bothered her that I was convinced by the AI rather than by her. What stuck with me is how quickly the AI convinced me and stopped me dead in my tracks. That is why I wrote the piece.

The difference I was trying to highlight isn’t that AI was “right,” but how confidently it answered, and how quickly that persuaded me.

If my wife had made the same arguments in the same polished way, I probably would’ve caved just as fast. But she didn’t; the AI did... and what struck me wasn’t the answer, it was how fast my own logic switched off, as if I’d been wrong all along.

That’s what feels new to me: sitting in a meeting for hours while a non-tech person confidently tells execs how “AI will solve everything”, and everyone nods along. The risk isn’t just being wrong; it’s when expertise gets silenced by convincing answers and stops asking the right questions.

Again, this is my own reflection and experience, others may not feel this way. Thanks for your comment.

What was the name? Naming things is highly subjective. In the abstract, sure: abdicating responsibility to anyone else in the room, whether an AI, Google, your partner, the CEO, or the investors, stings a little the first time someone else has the authority when you're used to it being you. But eventually you get used to not always being right.

Agreed, creativity is very subjective. My point wasn’t about who was right or wrong. What unsettled me was how, the moment AI gave its opinion, my own questioning and reasoning almost instantly disappeared.

That’s really what the piece was about, how quickly I found myself giving up my own judgment to AI.

Those AI tools are designed to please: if you ask them "is this a good idea?", they will always say yes.

What would be better is to ask "give me three good arguments for this", then "give me three good arguments against this", and finally compare the arguments yourself, without asking the AI tool which side is better.
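That pattern can be sketched in a few lines. This is a minimal illustration, not any particular tool's API: `ask` is a hypothetical callable standing in for whatever chat model you use (the stub here just echoes the prompt, so the structure is clear), and the key design choice is that the result deliberately has no "verdict" field, leaving the comparison to the human.

```python
def build_prompts(idea: str) -> tuple[str, str]:
    """Build separate for/against prompts so the model argues both sides
    instead of just agreeing with whoever asked."""
    return (
        f"Give me three good arguments for this idea: {idea}",
        f"Give me three good arguments against this idea: {idea}",
    )

def gather_arguments(idea: str, ask) -> dict:
    """Query the model once per side; the caller, not the model, compares them."""
    pro_prompt, con_prompt = build_prompts(idea)
    return {"for": ask(pro_prompt), "against": ask(con_prompt)}

# Stub standing in for a real chat-model call (an assumption for illustration).
def fake_ask(prompt: str) -> str:
    return f"[model response to: {prompt}]"

result = gather_arguments("rewrite the backend in Rust", fake_ask)
print(sorted(result.keys()))  # → ['against', 'for'] — no verdict key on purpose
```

To use it for real, you would replace `fake_ask` with a thin wrapper around your model of choice; the point is only that "which is better?" never gets sent as a prompt.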

> If you need authority over someone to persuade them, then your argument isn't compelling enough, either because your reasoning is flawed or because you're not communicating it well enough.

I question that assertion. The other party has to be willing to engage too.

Nailed it. This guy has a humility (and probably also critical thinking/communication) problem, not an AI one.

> and if it was reasoned well enough he'd have changed his mind.

In my experience, motivated reasoning rules the day. People have an agenda beyond their reasoning, and if your proposal goes against that agenda, you'll never convince them with logic and evidence. At the end of the day, it's not a marketplace of ideas, but a war of conflicting interests. To convince someone requires not the better argument, but the better politics to make their interests align with yours. And in AI there are a lot of adverse interests you're going to be hard pressed to overcome.

“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

Well said, AI definitely amplifies agendas, and the lure of “bigger, better, more profitable” usually beats the status quo.