> At 100 billion neurons, we know, for example, that compositional language of the kind we humans use is possible. At the 100 million or so neurons of a cat, it doesn’t seem to be.
The implication here, that ability generally scales with neuron count, is the latest in a long line of extremely questionable arguments from Mr. Wolfram. I understand having a blog, but why not separate it from your work life with a pseudonym?
> In a rough first approximation, we can imagine that there’s a direct correspondence between concepts and words in our language.
How can anyone take someone who thinks this way seriously? Can any of us imagine a human brain that directly maps words to concepts, as if "run" had a single, direct conceptual meaning? He clearly prefers the sound of his own voice to how it is received by others. That, or he only talks with people who never bothered to read the last 200 years of European philosophy, which would make sense given his apparent adoration of LLMs.
There's a very real chance that more neurons would actually hurt our health. Perhaps our brains are structured to maximize their use while minimizing their cost. Looking at the evolutionary record, it's certainly difficult to justify brain size as a super useful thing (outside of my big-brained human existence).