We know quite a lot. For example, we know that brains have various different neuromodulatory pathways. Take, for example, the dopamine reward mechanism that is being talked about more openly these days. Dopamine is literally secreted by various different parts of the brain and affects different pathways.
I don't think it is anywhere near feasible to emulate anything resembling this in a computational neural network with fixed input and output neurons.
Dopamine is not permanent, though. We're talking about long-term synaptic plasticity, not short-term neurotransmitter modulation.
Dopamine modulates long term potentiation and depression, in some complicated way.
Aren't we already emulating it? It's sort of a distributed and overlaid reward function, which we just un-distributed.
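For what it's worth, that "un-distributed reward function" framing maps loosely onto what computational neuroscientists call three-factor learning rules. Here is a minimal toy sketch (all names and numbers are illustrative assumptions, not a model of real cortex): a global scalar "dopamine-like" signal gates a Hebbian weight update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer: 10 inputs -> 5 outputs. Weights change by a three-factor rule:
# pre-synaptic activity x post-synaptic activity x a global scalar "neuromodulator".
W = rng.normal(scale=0.1, size=(5, 10))

def three_factor_update(W, x, reward, lr=0.01):
    """Hebbian co-activity trace gated by a scalar reward-like signal."""
    y = np.tanh(W @ x)                         # post-synaptic activity
    eligibility = np.outer(y, x)               # Hebbian trace (post * pre)
    return W + lr * reward * eligibility, y    # reward > 0 potentiates, < 0 depresses

# Example: broadcast a positive "dopamine" pulse whenever output unit 0 is positive.
for _ in range(500):
    x = rng.normal(size=10)
    y = np.tanh(W @ x)
    reward = 1.0 if y[0] > 0 else -1.0
    W, _ = three_factor_update(W, x, reward)

samples = [np.tanh(W @ rng.normal(size=10))[0] for _ in range(1000)]
print(f"mean activity of the rewarded unit after training: {np.mean(samples):.2f}")
```

The point is only that a diffuse, broadcast modulator and an explicit reward signal are not obviously incompatible formalisms; whether this captures anything important about biological dopamine is a separate question.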
Keep in mind that our brains also have a great deal of built-in, trained structure from evolution. So even if we understood exactly how a brain learns, we may still not be able to replicate it if we can't figure out the highly optimized initial state from which it starts in a fetus.
Presumably that is limited by the gig or so of information in our DNA, though?
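For reference, a rough back-of-the-envelope behind the "gig or so" figure (the base-pair count is an approximation, and the real information content is lower because the sequence is highly compressible):

```python
# Rough upper bound on the information in a haploid human genome.
base_pairs = 3.1e9          # approximate count, not an exact figure
bits_per_base = 2           # log2(4) possible bases: A, C, G, T
raw_gigabytes = base_pairs * bits_per_base / 8 / 1e9
print(f"~{raw_gigabytes:.2f} GB upper bound")   # ~0.78 GB
# Repetitive, compressible sequence means the usable information is lower still,
# so "about a gigabyte" is a ceiling rather than a budget for specifying a brain.
```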
The amount of information transmitted from one generation to the next is potentially much more than the contents of DNA. DNA is not an encoding of every detail of a living body; it is a set of instructions for a living body to create an approximate copy of itself. You can't use DNA, as far as we know, to create a new organism from scratch without having the parent organism around to build it. We do know for certain that many parts of a cell divide separately from the nucleus and have no relation to the DNA of the cell - the best known being the mitochondria, which have their own DNA, but many organelles also just split off and migrate to the new cell quasi-independently. And this is just the simplest layer in some of the simplest organisms - we have no idea whatsoever how much other information is transmitted from the parent organism to the child in ways other than DNA.
In particular in mammals, we have no idea how actively the mother's body helps shape the child. Of course, there's no direct neuron to neuron contact, but that doesn't mean that the mother's body can't contribute to aspects of even the fetal brain development in other ways.
Interesting. As you say, that certainly makes sense for mammals. But I'd be interested in knowing what mechanisms you might conjecture for birds, where pretty much all foetal development happens inside the egg, separated from the mother -- or fish, or octopuses.
I concur. It might not be feasible in terms of the computational power available, but I don't think there is anything fundamentally stopping the application of those training mechanisms, unless the whole neural-net paradigm is fundamentally incompatible with those learning methods.
How much of cognition, especially "higher-level cognition" like language, is encoded genetically is highly controversial, and the thinking/pendulum in the last decade or two has shifted substantially towards only general mechanisms being innate. E.g. the cortex may be in an essentially "random state" prior to getting input.
That's why I qualified all of my statements with "may" and "might". Still, I think it's extraordinarily unlikely that human brains could turn out, for example, to have no special bias for learning language. The training algorithm in our brains would have to be so many orders of magnitude better than the state of the art in ANNs that it would boggle the mind.
Consider the comparison with LLM training. A state-of-the-art LLM that is, say, only an order of magnitude better than an average 4-year-old human child in language use is trained on ~all of the human text ever produced, consuming many megawatt-hours of energy in the process. And it's helped with plenty of pre-processing of this text information, and receives virtually no noise.
In contrast, a human child that is not deaf acquires language from a noisy environment with plenty of auditory stimuli, from which they first have to even work out that they are picking up language. To be able to communicate, and thus receive significant feedback on the learning, they also have to learn how to control a very complex set of organs (tongue, lips, larynx, chest muscles), all with many degrees of freedom and the precise timing needed to produce any sound whatsoever.
And yet virtually all human children learn all of this in a matter of 12-24 months, and then spend another 2-3 years learning more language without struggling as much with the basics of word recognition and pronunciation. And they do all this while consuming a total of some 5 kWh, which includes many bodily processes that are not directly related to language acquisition, and a lot of direct physical activity too.
So, either we are missing something extremely fundamental, or the initial state of the brain is very, very far from random and much of this was actually trained over tens or hundreds of thousands of years of evolution of the hominids.
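To make the energy comparison above explicit (both figures are the rough, assumed numbers from this thread, not measurements):

```python
# Ratio of the assumed energy budgets quoted above; purely illustrative.
llm_training_kwh = 1_000_000     # assume ~1 GWh for a large frontier training run
child_total_kwh = 5              # the rough figure quoted above for the child
print(f"LLM training / child: ~{llm_training_kwh / child_total_kwh:,.0f}x")  # ~200,000x
```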
Language capability is a bit difficult to quantify, but LLMs know tens of languages, many of them better than the vast majority of even native speakers, at least grammar- and vocabulary-wise. They also encode orders of magnitude more fact-type knowledge than any human being. My take is that language isn't that hard, but humans just kinda suck at it, like we suck at arithmetic and chess.
There sure is some "inductive bias" in the anatomy of the brain to develop things like language but it could be closer to how transformer architectures differ from pure MLPs.
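A minimal numpy sketch of the kind of architectural inductive bias meant here (toy sizes, purely illustrative): a pure MLP over a flattened sequence needs separate parameters for every position pair, while a self-attention layer shares one small set of projections across positions and lets the data decide which positions interact.

```python
import numpy as np

rng = np.random.default_rng(1)
T, D = 6, 8                        # toy sequence length and embedding size
X = rng.normal(size=(T, D))        # a toy "token" sequence

# Pure-MLP view: flatten the sequence; every (position, position) interaction
# gets its own dedicated weights, and nothing is shared across positions.
W_mlp = rng.normal(scale=0.1, size=(T * D, T * D))
mlp_out = (X.reshape(-1) @ W_mlp).reshape(T, D)

# Self-attention view: one small set of projections shared across all positions;
# which positions influence which is computed from the content itself.
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(D, D)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(D)
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
attn_out = attn @ V

print("MLP parameters:      ", W_mlp.size)   # grows with (T*D)^2
print("attention parameters:", 3 * D * D)    # independent of sequence length
```

The analogy is only structural: both kinds of prior constrain what is easy to learn without hard-coding the content itself.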
The argument was for decades that no generic system can learn language from input alone. That turned out flat wrong.
Yet for example the auditory/language processing part is almost always located in the same region for all humans.
E.g. ear input is connected to the same cortical location in almost all humans.
Didn't they get neurons in a petri dish to fly a flight simulator?