Philosophers have been arguing a parallel point for centuries: does intelligence require some sort of (ostensibly human-like) qualia, or does "if it quacks like a duck, it is a duck" apply?

I think it's better to look at large language models through a Wittgensteinian lens. Humans are more than next-token predictors because we participate in "language games," through which we experimentally build up a mental model of what each word means. LLMs learn to "follow rules" from a huge corpus of human text, but on a Wittgensteinian analysis there's no actual intelligence there, because there's no real participation beyond RLHF (where humans are playing the language games on the machine's behalf). There's a lot to unpack there, but that's the gist of my opinion.

Until we get rigorous definitions for intelligence, or at least break it up into distinct facets, I think this kind of pie-in-the-sky philosophy is the best we can work with.