Nature has already set an absurdly high bar. The human brain runs on roughly 20 watts, yet delivers a level of intelligence we still can't clearly define, let alone replicate. Nothing we've built comes close... either in capability or efficiency. We're still very early in understanding what "intelligence" even means, much less engineering it. So we have a long way to go, and a lot left to push on.
Not sure what you mean by efficiency, as this was part of the article and I understand things differently—can you clarify? Running for an hour at 20 W on a laptop's M4 Pro, this model produces about 200k tokens (a book or two) at a typical electricity cost of less than a third of a US cent. Although clearly the intelligence of this particular model is unrelated to human intelligence, I always thought there was no comparison between LLMs and humans in terms of efficiency: these models are far less energy-expensive than humans. And with data-center-scale optimizations, serving LLMs becomes additional orders of magnitude more efficient than serving them at home. (The energy cost of inference on the M4 Pro and iPhone is listed in the article.)
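To make the arithmetic explicit, here's a back-of-envelope sketch of those figures. The electricity price of $0.15/kWh is my assumption (a typical US residential rate), not from the article:

```python
# Back-of-envelope energy/cost per token for the 20 W, ~200k tokens/hour
# figures above. The $0.15/kWh electricity price is an assumed typical
# US residential rate, not a number from the article.
power_w = 20
hours = 1
tokens = 200_000
price_per_kwh = 0.15  # assumed

energy_kwh = power_w * hours / 1000                   # 0.02 kWh
cost_cents = energy_kwh * price_per_kwh * 100         # ~0.3 cents
joules_per_token = power_w * hours * 3600 / tokens    # 0.36 J/token

print(f"{energy_kwh} kWh, {cost_cents:.2f} cents, {joules_per_token} J/token")
```

So the run costs roughly 0.3 cents, consistent with "less than a third of a US cent" at that price.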
Depending on how you convert synapse count to parameters, the brain also has something like a thousand trillion parameters. In that light it's pretty darn surprising that an artificial neural network can produce anything like coherent text.
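For a sense of the gap, here's a rough scale comparison, treating one synapse as roughly one parameter (a loose and contested assumption) and using a ~100M-parameter model like those mentioned elsewhere in the thread:

```python
# Rough scale comparison. Treating one synapse as one parameter is a
# loose, contested assumption; this is only an order-of-magnitude sketch.
brain_synapses = 1e15      # ~a thousand trillion
small_llm_params = 1e8     # a ~100M-parameter model

ratio = brain_synapses / small_llm_params
print(f"brain has ~{ratio:.0e}x more 'parameters'")
```

On that (very rough) accounting the brain is about seven orders of magnitude larger, which is why coherent text from small networks is surprising.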
Maybe the brain is more akin to a network of networks and the actual reasoning part is not all that large? There are lots of areas dedicated exclusively to processing input and controlling subsystems. I can imagine a future where large artificial networks work in a similar way, with multiple smaller ones connected to each other.
It indeed is. We now have models with fewer than 100M params producing pretty coherent text that's somewhat relevant to the given input. That is indeed impressive.
I believe the answer lies in how "quickly" (and how?) we are able to learn, and then generalize those learnings. As of now, these models need millions of examples (at least) to learn, and are still not capable of generalizing what they've learned to other domains. Human brains need only a few examples, and then generalize them pretty well.
A 1980s desk calculator can multiply two 8-digit numbers with much less energy than your brain takes to do the same.
Modern LLMs similarly beat the human brain in lots of tasks on energy efficiency—mostly because the LLM can produce the answer in 1 second while the brain has to spend half an hour researching and drafting something.
> Nothing we've built comes close... either in capability or efficiency.
Only when you look at stuff that the brain is specifically good at.
You can surpass the brain with even simple mechanical adders or an abacus in certain subdomains.
General intelligence, I mean. Deciding which calculations even need to be performed, and when, still comes from our brains.