> best solved with what used to be called symbolic AI before it started working
Right, the current paradigm of requiring an LLM to do arbitrary-digit multiplication will not work, and we shouldn't need it to. If your task is "do X" and it can be reliably accomplished with "write a Python program to do X," that's good enough as far as I'm concerned. It's preferable, in fact.
Btw Chollet has said basically as much. He calls them “stored programs” I think.
I think he is onto something. The right atomic unit for approaching these problems is probably not the token, at least not at first. Higher-level abstractions should be progressively refined into specific components, similar to the concept of diffusion.
As soon as the companies behind these systems stop marketing them as do-anything machines, I will stop judging them on their ability to do everything.
The ChatGPT input field still says ‘Ask anything’, and that is what I shall do.
You can ask me anything. I don’t see that as a promise that I am infallible.
Pricing Schedule
__________________
Answers: $1
Thoughtful Answers: $5
Correct Answers: $50
Dumb Looks are Free
> that’s good enough as far as I’m concerned
But in that case, why an LLM? If we want question-answer machines to be reliable, they must have basic skills, counting being a simple example.
The purpose of the LLM would be to translate natural language into computer language, not to do the calculation itself.
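A minimal sketch of that division of labor. In a real system the LLM would emit the code or the tool call; here a regex stands in for the "translate natural language into a computation" step so the example is runnable, and the function name `answer_sum_question` is just an illustration, not any real API:

```python
import re

def answer_sum_question(question: str) -> int:
    """Stand-in for the LLM's job: turn natural language into a
    computation, then let the computer do the arithmetic exactly.
    A regex plays the translator's role here so the sketch runs."""
    numbers = [int(n.replace(",", "")) for n in re.findall(r"[\d,]*\d", question)]
    return sum(numbers)  # exact big-integer arithmetic, no token-level guessing

print(answer_sum_question("What's 494547645908151+7640745309351279642?"))
# → 7641239856997187793
```

The point is that the hard part (parsing intent) and the easy part (arithmetic) go to the components that are good at them.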
Most human ten-year-olds in school can add two large numbers together. If a connectionist network is supposed to model the human brain, it should be able to do that too. Maybe LLMs can do a lot of things, but if they can't do that, then they're an incomplete model of the human brain.
If I were to guess, most (adult) humans could not add two 3-digit numbers together with 100% accuracy. Maybe 99%? Computers already do it with 100% accuracy, so we should probably be figuring out how to use language to extract the numbers from text and send them off to computers to do the calculations. Especially because, in the real world, most calculations that matter are not just two-digit addition.
Artificial neural nets are pretty far from brains. We don’t use them because they are like brains, we use them because they can approximate arbitrary functions given sufficient data. In other words, they work.
For what it’s worth, people are also pretty bad at math compared to calculators. We are slow and error prone. That’s ok.
What I was (poorly) trying to say is that I don't care whether the neural net solves the problem itself if it can outsource it to a calculator. People do the same thing. What matters is reliably accomplishing the goal.
Most human ten-year-olds can add two large numbers together with the aid of a scratchpad and a pen. You need tools beyond a one-dimensional vector of text to do some of these things.
No LLM or other modern AI architecture I'm aware of is supposed to model the human brain. Even if they were, LLMs can add large numbers with the level of skill I'd expect from a ten-year-old:
----
What's 494547645908151+7640745309351279642?
ChatGPT said: The sum of 494,547,645,908,151 and 7,640,745,309,351,279,642 is:
7,641,239,857,997,187,793
----
(7,641,239,856,997,187,793 is the correct answer)
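The error is easy to pin down with exact integer arithmetic, which is the whole point of the thread:

```python
# Check ChatGPT's answer against exact big-integer addition
a = 494_547_645_908_151
b = 7_640_745_309_351_279_642
exact = a + b
claimed = 7_641_239_857_997_187_793  # ChatGPT's answer above

print(exact)            # 7641239856997187793
print(claimed - exact)  # 1000000000: a single digit flipped, 10 digits in
```

Exactly the kind of mistake a tired ten-year-old makes with a scratchpad, and exactly the kind a calculator never does.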
I tried it on gpt-4-turbo and it seems to give the right answer:
> Let's calculate: 494,547,645,908,151 + 7,640,745,309,351,279,642 = 7,641,239,856,997,187,793
> 494,547,645,908,151 + 7,640,745,309,351,279,642 = 7,641,239,856,997,187,793
> Answer: 7,641,239,856,997,187,793