and in the end, this chain of LLMs reduces down to a series of human-written if-else statements listing out the conditions for acceptable actions. Some might call it a... decision tree!
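Concretely, the pattern looks something like this. It's only a minimal sketch of the kind of wrapper being described, and `call_llm` is a hypothetical stand-in for whatever provider API the wrapper actually uses:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion API call."""
    raise NotImplementedError("wire up your provider's client here")

def handle_ticket(ticket: str) -> str:
    # First LLM call: classify the input.
    label = call_llm(f"Classify this ticket as BUG, BILLING, or OTHER:\n{ticket}")
    # Human-written branching over the model's output -- the "decision tree".
    if "BUG" in label:
        return call_llm(f"Draft a bug-triage reply for:\n{ticket}")
    elif "BILLING" in label:
        return "Forwarded to the billing team."  # hard-coded acceptable action
    else:
        return call_llm(f"Draft a generic support reply for:\n{ticket}")
```

All the control flow, and the list of actions the system is allowed to take, is written by a human; the LLM calls only fill in the leaves.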

I love this because it demystifies the inner workings of AI. At its most atomic level, it's really all just conditional statements and branching logic.

What makes you think so? We are talking about wrappers people can write around LLMs.

That has nothing to do with AIs in general. (Nor even with a single LLM used on its own.)