> We need llms to be able to tap that not add the same functionality a layer above and MUCH less efficiently.
Agents, tool-integrated reasoning, even chain of thought (limited, for some math) can address this.
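The tool-integrated approach can be sketched in a few lines. This is a hypothetical toy loop, not any particular framework's API: the model emits a structured call instead of generating digits token by token, and the host evaluates it exactly.

```python
# Minimal sketch of tool-integrated arithmetic (hypothetical dispatcher,
# not a real framework): the model emits a structured tool call, and the
# host executes it with exact machine arithmetic.

def call_tool(name, args):
    # Only an exact-arithmetic tool is registered in this toy example.
    tools = {"multiply": lambda a, b: a * b}
    return tools[name](*args)

# Suppose the model, asked "what is 123456789 * 987654321?", emits this
# structured call rather than predicting the answer's digits directly:
model_output = {"tool": "multiply", "args": [123456789, 987654321]}

result = call_tool(model_output["tool"], model_output["args"])
print(result)  # exact, regardless of how large the operands get
```

The point of the pattern is that correctness comes from the evaluator, not from the model's token predictions.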
You're both completely missing the point. It's important that an LLM be able to perform exact arithmetic reliably without a tool call. Of course the underlying hardware does so extremely rapidly; that's not the point.
Could you explain why that is?
A tool call is like 100,000,000x slower, isn't it?
No idea, really, but if it were speed-related I would have thought OP would have said "faster" rather than "important" to make their point.
It's both. Being directly part of the model integrates the capability into its intelligence, for both training and operation.
The computer ALREADY does do math reliably. You are missing the point.