I’m not really sure what you’re getting at. Could you point to some papers exemplifying the kind of work you’re thinking of? Of course there are lots of people training LLMs and other statistical models on EEG data, but that does not show that, say, GPT-5 is a good model of any aspect of human cognition.

Chomsky, of course, never attempted to model the generation of natural language and was interested in a different set of problems, so LLMs are not really a competitor in that sense anyway (even if you take the dubious step of accepting them as scientific models).

I certainly don’t agree with Norvig, but he doesn’t really understand the basics of what Chomsky is trying to do, so there is not much to respond to. To give three specific examples, he (i) is confused in thinking that Gold’s theorem has anything to do with Chomsky’s arguments, (ii) appears to think that Chomsky studied the “generation of language” (because he’s read so little of Chomsky’s work that he doesn’t know what a “generative grammar” is), and (iii) believes that Chomsky thinks natural languages are formal languages in which every possible sentence is either in the language or not (again because he’s barely read anything Chomsky wrote after the 1950s). Then, just to make absolutely sure not to be taken seriously, he compares Chomsky to Bill O’Reilly!

On point (iii), see http://www.linguistics.berkeley.edu/~syntax-circle/syntax-gr..., in particular the last complete paragraph of p. 145.