LLMs are smarter in hindsight than going forward (better at critiquing an answer that's already written than at producing it in the first place), sort of like humans! Only they don't have such flexible self-reflection loops, so they tend to fall into local minima more easily.
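To make the "self-reflection loop" idea concrete, here's a minimal sketch of a draft–critique–revise loop. Everything in it is an assumption for illustration: `call_model` is a hypothetical stand-in for a real LLM call (stubbed so the sketch runs on its own), and the fixed round count and "OK" stopping rule are placeholders, not any particular library's API.

```python
# Minimal sketch of a self-reflection loop: draft, critique in hindsight, revise.
# `call_model` is a hypothetical stand-in for a real LLM call (an assumption),
# stubbed out so the example runs standalone.

def call_model(prompt: str) -> str:
    # Stand-in for an actual model call; returns a canned reply for illustration.
    return f"[model reply to: {prompt[:40]}...]"

def reflect_and_revise(task: str, max_rounds: int = 3) -> str:
    draft = call_model(f"Solve the task: {task}")
    for _ in range(max_rounds):
        # Hindsight pass: the model is often better at spotting flaws in a
        # finished draft than at avoiding them while generating.
        critique = call_model(
            f"Task: {task}\nDraft: {draft}\n"
            "List concrete flaws, or reply OK if there are none."
        )
        if "OK" in critique:
            break  # fixed stopping rule: a rigid loop, not flexible reflection
        # Forward pass again, conditioned on the critique.
        draft = call_model(
            f"Task: {task}\nPrevious draft: {draft}\n"
            f"Critique: {critique}\nRewrite the draft."
        )
    return draft

if __name__ == "__main__":
    print(reflect_and_revise("Summarize why hindsight beats foresight for LLMs"))
```

The rigidity is the point: with a fixed number of rounds and a canned stopping rule, the loop can happily ratify a bad draft the critique step failed to flag, which is one way the local-minimum problem shows up in practice.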