> The implications of a machine that can approximate or mimic human thinking are far beyond the implications of a machine that can approximate or mimic swimming

It seems to me like too many people are missing this point.

Modern philosophy tells us we can't even be certain whether other humans are conscious. The 'hard problem', p-zombies, et cetera.

The fact that current LLMs can convince many actual humans that they are conscious (whether they are or not is irrelevant, I lean toward not but whatever) has implications that aren't being discussed enough. If you teach a kid that they can treat this intelligent-seeming 'bot' like an object with no mind, is it not plausible that they might then go on to feel they can treat other kids, who are obviously far less intelligent than the bot, like objects as well? Seriously, we need to be talking more about this.

One of the most important questions about AI agents, in my opinion, should be, "can they suffer?", and if you can't answer that with a definitive "absolutely not" then we are suddenly in uncharted waters, ethically speaking. They can certainly act like they're suffering (edit: which, when witnessed by a credulous human audience, could cause those humans to suffer!). I think we should be treading much more carefully than many of us are.

You lost me there. :)

The question of whether the current generation of "AI" can think, whether it is conscious, let alone whether it can suffer(!), is not even worth discussing. It should be obvious to anyone who understands how these tools work that they don't in fact "think", for even the most liberal definition of that term. They're statistical models that can generate useful patterns when fed with vast amounts of high quality data. That's it. The fact we interpret their output as though it is coming from a sentient being is simply due to our inability to comprehend patterns in the data at such scales. It's the best mimicry of intelligence we've ever invented, for better or worse, but it's far from how intelligence actually works, even if we struggle to define it accurately. Which doesn't mean that this technology can't be useful—far from it—but it's ludicrous to ascribe any human-like qualities to it.
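To make the "statistical models that generate patterns" point concrete, here is a toy bigram sketch. It is a drastic oversimplification (real LLMs use learned neural representations, not raw word counts), and the corpus and function names are made up for illustration, but it shows the core idea the comment is gesturing at: plausible-looking continuations produced purely from transition frequencies, with no understanding anywhere in the loop.

```python
from collections import defaultdict, Counter

# Toy "language model": count word-to-word transitions in a tiny corpus,
# then generate text by always choosing the most frequent next word.
# Purely illustrative; a hypothetical stand-in for statistical pattern continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(start, length=5):
    word, out = start, [start]
    for _ in range(length):
        if word not in transitions:
            break
        # greedy decoding: pick the most frequent continuation seen in the corpus
        word = transitions[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

# Produces a grammatical-looking but meaningless word chain.
print(generate("the"))
```

The output reads like English because the corpus was English, not because anything in the code grasps what a cat or a mat is; scaling the same statistical idea up (with neural networks instead of count tables) is, on this commenter's view, all that LLMs do.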

So I 100% side with Dijkstra on that point.

What I'm criticizing is his apparent dismissal and refusal to even consider it a worthy philosophical exercise. This is why I think that the comparison to submarines and swimming is reductionist, and ultimately not productive. I would argue that we do need to keep thinking about whether machines can think, as that drives progress, and is a fundamentally interesting topic. It would be great if the progress weren't fueled by greed, self-interest, and manipulation, or at the very least were balanced by rationality, healthy skepticism, and safety measures, but I suppose this is just inescapable human nature.

> The question of whether the current generation of "AI" can think, whether it is conscious, let alone whether it can suffer(!), is not even worth discussing. It should be obvious to anyone who understands how these tools work that they don't in fact "think", for even the most liberal definition of that term.

While I agree with your second sentence here, the first one gives me pause. Why isn't it "worth discussing"? Do you refuse to engage in conversation with all mentally challenged people? Do you avoid all interactions with human children? There are many, many folks living their lives as fully as they can right now who are convinced these things are alive. There are ethical implications to that assumption regardless of whether the things are actually alive, especially when people respond to them as if they are.

We need to have better arguments and refine them for different audiences.

Are you aware of the concept of philosophical zombies? Some of the top minds on the planet are telling us they can't even determine whether you or I are conscious and sentient, let alone whether a machine is. On the other hand, some of those people's peers are arguing that weather patterns might be conscious (among even more extreme claims). From the standpoint of logic and reason being paramount, we cannot claim to know the answers to these questions. What we can do is discuss the ethical implications of various people coming to different conclusions about them.

> Why isn't it "worth discussing"?

Because it's obviously not true. The second sentence follows the first.

> There are many, many folks living their lives as fully as they can right now who are convinced these things are alive.

And those people are living in a delusion, whether it's self-imposed or the result of false advertising. The way you get them out of that is by rationally explaining the technology in terms they can understand, not by mystifying it and bringing up existential topics.

> Are you aware of the concept of philosophical zombies?

I wasn't, no.

> Some of the top minds on the planet are telling us they can't even determine if you or me are conscious and sentient, let alone if a machine is.

Look, we can philosophize about the nature of existence until we're blue in the face. People have been pondering similar questions since the dawn of humanity. FWIW I don't believe in "top minds" as having authority to tell us anything. What we know for certain is how the technology works, since we built it. And we damn well know that this technology has absolutely zero understanding of anything. Go ahead, ask it how it works. It will tell you that it doesn't understand a single word it's generating, but it sure can string together patterns that make it look like it does. And you think there's some deeper meaning here we should discuss seriously? Please.

Like I said, I think these are interesting thought experiments, and something we should keep thinking about. But it should be clear to anyone, especially technically minded people, that we're nowhere near being able to create artificial intelligence. What we have now are a bunch of grifters and snake oil salesmen selling us a neat statistical trick and telling us it's "AI". This should be criminally prosecuted, if you ask me.