This is a neat idea but there are so many variables here that it's hard to make generalizations.

Empirically, a top-level orchestrator that calls out to a planning committee, then generates a task DAG from the plan, which gets executed in parallel where possible, is the setup I've seen produce the best results in various heterogeneous environments. As models evolve, crosstalk may become less of a liability.
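Roughly this shape, as a minimal Python sketch (call_model, the plan format, and the example DAG are placeholders I'm making up for illustration, not any particular framework):

    # Minimal sketch: plan -> task DAG -> execute in parallel where dependencies allow.
    from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
    from graphlib import TopologicalSorter

    def call_model(prompt: str) -> str:
        return f"result({prompt})"  # placeholder for whatever agent/model call you use

    def run_dag(tasks: dict[str, str], deps: dict[str, set[str]]) -> dict[str, str]:
        """tasks: id -> prompt; deps: id -> prerequisite ids."""
        sorter = TopologicalSorter(deps)
        sorter.prepare()
        results: dict[str, str] = {}
        with ThreadPoolExecutor() as pool:
            futures = {}
            while sorter.is_active():
                # Launch every task whose prerequisites have finished.
                for task_id in sorter.get_ready():
                    context = "\n".join(results[d] for d in deps[task_id])
                    futures[pool.submit(call_model, tasks[task_id] + "\n" + context)] = task_id
                # Wait for at least one running task so its successors can unblock.
                done, _ = wait(futures, return_when=FIRST_COMPLETED)
                for fut in done:
                    finished = futures.pop(fut)
                    results[finished] = fut.result()
                    sorter.done(finished)
        return results

    # The "planning committee" would emit something like this:
    tasks = {"research": "gather sources", "draft": "write draft", "review": "review draft"}
    deps = {"research": set(), "draft": {"research"}, "review": {"draft"}}
    print(run_dag(tasks, deps))

The point is that once the plan is a DAG, parallelism falls out of the dependency structure rather than out of agents talking to each other, which is what keeps crosstalk down.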

Reasoning is recursive - you cannot isolate where it should be symbolic and where it should be LLM-based (fuzzy/neural). This is the idea that started https://github.com/zby/llm-do - there is also RLM: https://alexzhang13.github.io/blog/2025/rlm/. RLM is simpler, but my approach also has some advantages.
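To make that concrete, here is a toy sketch of the shape I mean (this is not the llm-do or RLM API, and `llm` is just a stand-in): the symbolic layer recurses wherever the decomposition is exact and defers to the model wherever it isn't, so the boundary is decided per node rather than fixed up front.

    def llm(prompt):
        return f"<model answer to: {prompt}>"  # stand-in for a real model call

    def solve(problem):
        # Symbolic case: the decomposition is known exactly, recurse with no model call.
        if isinstance(problem, dict) and "subproblems" in problem:
            return "\n".join(solve(p) for p in problem["subproblems"])
        # Fuzzy case: no known decomposition, so defer to the model. If the model itself
        # returned a further decomposition (a dict), we would recurse on it symbolically.
        answer = llm(problem)
        return solve(answer) if isinstance(answer, dict) else answer

    print(solve({"subproblems": ["summarize the spec", "list the edge cases"]}))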

I think the AI community is sleeping hard on proper symbolic recursion. The computer has gigabytes of very accurate "context" available if you start stacking frames. Any strategy that happens inside token space will never scale the same way.
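To illustrate what I mean by stacking frames (names here are made up, ask_model is a placeholder): the byte-accurate state lives in ordinary program memory on the call stack, and only a small summary of each frame is ever rendered back into token space.

    def ask_model(prompt: str) -> str:
        return f"<answer to: {prompt[:60]}...>"  # placeholder for a real model call

    def analyze(records: list[dict], depth: int = 0) -> str:
        # Base case: small enough to reason about in one prompt.
        if len(records) <= 100:
            return ask_model(f"analyze {len(records)} records: {records}")
        # Recursive case: split exactly. The full halves stay in these frames,
        # in program memory, never in a context window.
        mid = len(records) // 2
        left = analyze(records[:mid], depth + 1)
        right = analyze(records[mid:], depth + 1)
        # Only the two child summaries cross back into token space.
        return ask_model(f"combine at depth {depth}: {left} | {right}")

    print(analyze([{"id": i, "value": i * i} for i in range(1_000)]))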

Depth-first, slow-turtle recursion is likely the best way to reason through the hardest problems. It's also much more efficient than approaches that look more like breadth-first search (gas town).

I only agree with that statement if you're drawing from the set of all possible problems a priori. For any individual domain, I think it's likely you can bound your analysis. This ties into the no-free-lunch theorem.

Computers are finite, but we use an unbounded model for thinking about them because it simplifies many things.