This struck me as a strawman argument against the "stochastic parrot" interpretation. I really disagree with this premise in particular: "if a language model was just a stochastic parrot, when we looked inside to see what was going on, we’d basically find a lookup table." I'm not sure how the latter follows from the former at all.
As someone else pointed out, I think there are deep philosophical issues about intelligence and consciousness underlying all of this, and I'm not sure they can be resolved this way. In some sense, we might all be stochastic parrots; either way, I don't think the problem can be waved away without a deeper and more sophisticated treatment of the topic.