Yes, there have been times when I realised, after a while, that my proposed approach would never actually work because of some overlooked high-level issue — but the LLM never spots that kind of thing and just happily keeps trying.

Maybe that's a good thing: if it could think that well, what would I be contributing?