We understand and build the trellis that the LLMs "grow" on. We don't have good insight into how a fully grown LLM actually turns any specific input into any specific output. We can follow it through the network, but it's a totally senseless noisy mess.
"Cat" lights up a certain set of neurons, but then "cat" looks completely different. That is what we don't really understand.
(This is an illustrative example made for easy understanding, not something I specifically went and compared)
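That opacity can be sketched with a deliberately tiny toy, assuming nothing about any real model: a single made-up attention head with random weights already makes the same "cat" vector come out different depending on its neighbours. All vectors and weights below are invented for illustration.

```python
# Toy sketch (not a real LLM): one self-attention head with random
# weights turns the *same* "cat" embedding into different activations
# once context is mixed in. Everything here is made up.
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding size, arbitrary

# One fixed embedding for the token "cat", placed in two different contexts.
cat = rng.normal(size=d)
ctx_a = np.stack([rng.normal(size=d), cat])  # e.g. "black cat"
ctx_b = np.stack([rng.normal(size=d), cat])  # e.g. "copy cat"

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def contextual(x):
    """Single-head self-attention; returns the activation at the last position."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return (weights @ v)[-1]

a, b = contextual(ctx_a), contextual(ctx_b)
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
# Same input vector for "cat", two different output activations.
print("cosine similarity between the two 'cat' activations:", cos)
```

The point of the sketch is only that "cat" has no single, stable activation pattern: the surrounding tokens leak into it at every layer, which is part of why tracing a specific input to a specific output is so hard.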
We have the Navier–Stokes equations which fit on a matchbox, yet for the last 25 years there's been a US$1,000,000 prize on offer to the first person providing a solution for a specific statement of the problem:
Prove or give a counter-example of the following statement:
In three space dimensions and time, given an initial velocity field, there exist a vector velocity field and a scalar pressure field, both smooth and globally defined, that solve the Navier–Stokes equations.
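For reference, the matchbox-sized equations in question, in their incompressible form (with velocity field \(\mathbf{u}\), pressure \(p\), constant density \(\rho\), kinematic viscosity \(\nu\), and external force \(\mathbf{f}\)):

```latex
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad
\nabla\cdot\mathbf{u} = 0
```

Two lines of notation; the open question is whether smooth, globally defined \(\mathbf{u}\) and \(p\) always exist for smooth initial data.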
We don't know the path for how a given input produces a given output, but that doesn't mean we don't know how LLMs work.
We don't and can't know with certainty which specific atoms will fission in a nuclear reactor either. But we know how nuclear fission works.
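The fission analogy can be made concrete with a small Monte Carlo sketch (the per-atom probability and atom count are invented for illustration): no individual event is predictable, but the aggregate behaviour is.

```python
# Toy Monte Carlo sketch of the fission analogy: we cannot say *which*
# atoms react in a given step, but the aggregate count is predictable.
# The probability and population size are arbitrary, chosen for illustration.
import random

random.seed(1)
p = 0.05            # per-atom chance of reacting this step (made up)
n_atoms = 100_000

reacted = [i for i in range(n_atoms) if random.random() < p]

# The membership of `reacted` is unpredictable noise from run to run,
# but its size clusters tightly around the expected value n_atoms * p.
print(len(reacted))  # close to 5000
```

Same shape of claim as with LLMs: a statistical description of the system can be well understood even when the micro-level trajectory of any one event is not.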
And when that US$1,000,000 prize is claimed, we'll ring the bell on AGI being found. Gentleman's agreement.
I don't see how it will convince anyone: people said as much before chess, then again before Go, and people are still disagreeing about whether LLMs pass the Turing test. Regardless, the point was to demonstrate by analogy that things that seem simple can actually be really hard to fully understand.
https://youtu.be/qrvK_KuIeJk?t=284
The above is a video clip of Hinton basically contradicting what you’re saying.
So that's my elaboration. Picture that you just said what you said to me to Hinton's face. I think it's better this way because I've noticed that people responding to me are rude, dismiss me completely, and I don't get good-faith responses or intelligent discussion. I find that when people realize their statements contradict those of the industry and established experts, they tend to respond more charitably.
So please respond to me as if you had just told Hinton to his face that what he said is utter nonsense, because what I said is based on what he said. Thank you.