Language capability is hard to quantify, but LLMs know dozens of languages, and many of them better, grammar- and vocabulary-wise, than the vast majority of native speakers. They also encode orders of magnitude more factual knowledge than any human being. My take is that language isn't that hard; humans just kinda suck at it, like we suck at arithmetic and chess.
There is surely some "inductive bias" in the anatomy of the brain for developing things like language, but it may be more like the way transformer architectures differ from pure MLPs than a dedicated language module.
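To make that analogy concrete, here is a minimal numpy sketch (all names and shapes are illustrative, not from any particular model): a transformer block is essentially the same MLP with one extra structural prior bolted on, attention, which lets positions exchange information. The "inductive bias" is that one added mechanism, not a wholly different kind of machine.

```python
import numpy as np

def mlp_block(x, W1, W2):
    # Plain MLP: each position is processed independently of the others.
    return np.maximum(x @ W1, 0) @ W2

def attention(x, Wq, Wk, Wv):
    # Single-head self-attention: positions mix based on learned similarity.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ v

def transformer_block(x, Wq, Wk, Wv, W1, W2):
    # Same MLP as above, plus attention: the architectural difference
    # between the two systems is this one extra term.
    x = x + attention(x, Wq, Wk, Wv)
    return x + mlp_block(x, W1, W2)

# Example: 5 tokens, model width 8 (sizes picked arbitrarily).
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 8))
y = transformer_block(x, Wq, Wk, Wv, W1, W2)  # shape (5, 8)
```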
For decades the argument was that no generic system could learn language from input alone. That turned out to be flat wrong.