Not definitively. LLM outputs are stochastic: they vary with the sampling temperature, the random seed, and the exact wording of the prompt. It's possible the model was already capable of this but never received exactly the right conditions to produce this output.
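To make the stochasticity point concrete, here is a minimal sketch of temperature sampling, the mechanism the comment is alluding to. The logits and parameter values are illustrative assumptions, not taken from any particular model:

```python
import math
import random

def sample_token(logits, temperature=1.0, seed=None):
    """Sample an index from logits after temperature scaling.

    Higher temperature flattens the distribution (more randomness);
    as temperature approaches 0, sampling approaches greedy argmax.
    """
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical logits for three candidate tokens.
logits = [2.0, 1.0, 0.5]

# Near-zero temperature: effectively deterministic (argmax).
greedy = sample_token(logits, temperature=1e-6, seed=0)

# Higher temperature: different seeds can yield different tokens,
# which is why the same prompt can produce different outputs.
samples = {sample_token(logits, temperature=5.0, seed=s) for s in range(20)}
```

The same input can therefore lead to different completions run to run, and a capability can stay hidden until the right prompt and sampling conditions line up.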
Every model can solve any problem, given the right prompt. (In the worst case, the prompt contains the solution.)
Interesting... Exhaustive brute force prompting might expose previously unknown capabilities in existing models. Seems like a whole can of worms.
Exhaustive brute-force prompting is completely infeasible. The number of possible prompts is astronomically large.
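A quick back-of-the-envelope calculation shows why. The vocabulary size and prompt length below are illustrative assumptions (a BPE-scale vocabulary, a short prompt), not the parameters of any particular model:

```python
import math

V = 50_000   # assumed vocabulary size, typical BPE scale
N = 100      # assumed prompt length in tokens

# Number of distinct token sequences of exactly length N.
num_prompts = V ** N

# Order of magnitude: N * log10(V).
magnitude = N * math.log10(V)
print(f"~10^{magnitude:.0f} prompts")
```

Even at 100 tokens the space is on the order of 10^470 sequences, which dwarfs any conceivable compute budget, so exhaustive enumeration is out of the question.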