I doubt you can make a dumb model smart by feeding it proofs.

https://www.promptingguide.ai/techniques/knowledge

Sounds like a great way to fill up the context before you even start.

Yes, what's your point? That is literally what it does - it adds relevant knowledge to the prompt before generating a response, in order to ground it more effectively.
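For anyone skimming the link, the technique amounts to two steps: first ask a model to generate relevant facts, then prepend those facts to the actual question. A minimal sketch of that prompt assembly, where `generate_knowledge` is a hypothetical stand-in for the first model call (hardcoded here, since the real thing would hit an LLM API):

```python
def generate_knowledge(question: str) -> list[str]:
    # Stand-in for the first LLM call that produces relevant facts.
    # A real implementation would prompt a model; hardcoded for illustration.
    return [
        "Golf scoring: the player with the lowest total strokes wins.",
    ]

def build_grounded_prompt(question: str) -> str:
    # Generated knowledge prompting: prepend model-generated facts
    # to the question before asking for the final answer.
    facts = "\n".join(f"Knowledge: {k}" for k in generate_knowledge(question))
    return f"{facts}\n\nQuestion: {question}\nAnswer:"

print(build_grounded_prompt("Is a higher score better in golf?"))
```

The second model call then answers the question with the generated facts already in context, which is exactly the "fill up the context" cost being debated here.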

My point is that this doesn't scale. You want the LLM to have knowledge embedded in its weights, not prompted in.

It scales fine if done correctly.

Even with knowledge in the weights, the extra context lets the model move to the correct space.

Much the same as with humans, there are terms that are meaningless without knowing the context.