It’s an extremely difficult problem, and if you know how to do that you could be a billionaire.
It’s not impossible, obviously—humans do it—but it’s not yet certain that it’s possible with an LLM-sized architecture.
> It’s not impossible, obviously—humans do it
It's still not at all obvious to me that LLMs work the same way the human brain does, beyond a surface level. The "neurons" in neural nets do loosely resemble biological neurons, but is the resemblance metaphorical or literal?
https://www.youtube.com/watch?v=l-OLgbdZ3kk