I love this because it demystifies the inner workings of AI. At its most atomic level, it’s really all just conditional statements and branching logic.

What makes you think so? We are talking about wrappers people can write around LLMs.

That has nothing to do with AI in general. (Nor even with using a single LLM on its own.)