Did the AI explain its thinking, or could it have just stumbled upon the solution without designing it or understanding why it worked? I.e., could it have been a hallucination that happened to work?
This is a great question! By analyzing OpenEvolve's logs, including the full model outputs, we observed where the AI got its ideas (it seemed to be pulling from the literature in this space) and how it tried to apply them. So in some sense, it "reasoned" about how to build better algorithms, and we saw this process proceed systematically via the ADRS framework, converging to a significantly better algorithm.
Can you confirm whether this generated code is the same as https://arxiv.org/pdf/2402.02447?
Very interesting, thank you.