> Imagine a programming language where statements are suggestions and functions return “Success” while hallucinating. Reasoning becomes impossible; reliability collapses as complexity grows.

This is essentially declarative programming. Most traditional programming is imperative, which is what most developers are used to: I give an exact set of instructions and expect them to be executed as I wrote them. Agents are far more declarative than imperative: you give them a result, and they work on getting to that result. The problem, of course, is that in something declarative like SQL the result is going to be pretty consistent and well-defined, but you're still trusting the underlying engine on how to go about it.
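To make the contrast concrete, here's a toy sketch using Python's built-in sqlite3; the table and data are made up for illustration:

```python
# Imperative vs. declarative, side by side (example table/data only).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("ana", 34), ("bo", 19), ("cy", 42)])

# Imperative: I spell out every step and expect them to run as written.
adults = []
for name, age in conn.execute("SELECT name, age FROM users"):
    if age >= 21:
        adults.append(name)

# Declarative: I state the result I want; the engine decides how to
# get it (scan vs. index, join order, etc.). I'm trusting the planner.
adults_sql = [row[0] for row in conn.execute(
    "SELECT name FROM users WHERE age >= 21")]

assert sorted(adults) == sorted(adults_sql)
```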

Thinking about agents declaratively has helped me a lot more than trying to design these Rube Goldberg "control" systems around them. Didn't get it right? OK, I validated that it's not correct; let's try again or approach it differently.
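That loop, roughly; `run_agent` and `validate` here are hypothetical stand-ins for whatever agent call and deterministic check you actually have, not any real API:

```python
# A minimal sketch of the validate-and-retry loop I mean.
def run_agent(prompt: str) -> str:
    ...  # call whatever agent/LLM you use

def validate(result: str) -> bool:
    ...  # deterministic check: parse it, run tests, diff schemas, etc.

def get_result(goal: str, max_attempts: int = 3) -> str:
    prompt = goal
    for _ in range(max_attempts):
        result = run_agent(prompt)
        if validate(result):
            return result
        # Didn't get it right? Fold the failure back in and retry.
        prompt = f"{goal}\n\nThe previous attempt failed validation; try a different approach."
    raise RuntimeError(f"no valid result after {max_attempts} attempts")
```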

If you really need something imperative, then write something imperative! Or have the agent do so. This stuff reads like trying to use the wrong tool for the job.

> This is essentially declarative programming.

I think it's a step more abstract than that. What we're doing... how about "narrative programming"? (Though we could debate whether "programming" is still an applicable word.)

Yes, it may look like declarative programming, but that's an illusion: we aren't actually describing our goals "to" an AI that interprets them. Instead, there's a story-document in which our human stand-in character speaks dialogue to a computer-character, and up in the real world we're hoping the LLM will append more text in a way that makes a cohesive longer story, one with something useful that can be mined from it.

It's not just an academic distinction: knowing there's a story gives us a better model for understanding (and strategizing about) the relationship between inputs and outputs. For example, it helps us understand risks like prompt injection, and it provides guidance on the kinds of training data we do (or don't) want the model trained on.

I don't hate that distinction; I just think a lot of people are approaching this from an imperative framework that might not fit.

I was thinking of declarative too, but Prolog rather than SQL, so with actual control flow and reasoning capabilities.

And then you run into issues similar to the LLM's: silent failures, loops, contradictions, unless you're very careful.

The essence might be the same closed-world assumption problem. In the LLM's case it manifests as hallucination rather than admitting it does not know.
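A toy version of what "closed world" means here (the fact base is made up):

```python
# Closed-world assumption in miniature: anything not provable from
# the facts is treated as false.
facts = {("capital", "france", "paris"), ("capital", "japan", "tokyo")}

def holds(rel: str, a: str, b: str) -> bool:
    return (rel, a, b) in facts

print(holds("capital", "france", "paris"))     # True
print(holds("capital", "brazil", "brasilia"))  # False: not in the fact
# base, so the closed world says "no" even though it's true out in the
# open world. Prolog fails cleanly here; an LLM tends to answer anyway.
```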

I agree. But you can speak imperatively to agents as well ("Here are specific steps; follow them") and they can still screw up. :) I think what you're looking for is determinism, not imperativism.

And to your point: instructing a (non-deterministic) LLM declaratively ("get me to this end state") compounds the likelihood of going off the rails.

I don't think I'm confusing the two, but it is an issue. See the sibling comment I made: Terraform is a great example of something that is declarative and also non-deterministic. You can't control upstream API/provider changes, even between two plans happening simultaneously. That's a lot like what working with agents feels like to me.

SQL's declarativeness is also based on the mathematics of relational algebra, so it will return the same result every time. Will it return it in the same amount of time for every single query? No, that depends on indexing and database size. But the query itself won't be altered the way an LLM's output would be.

Engines that use SQL can vary drastically in how they handle strings, floating-point numbers, etc.; identical SQL queries on identical data absolutely can return different results, which is why I mentioned the engine underneath. LLMs being non-deterministic in addition to declarative is kind of tangential to the point I was trying to make.
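A couple of concrete instances. This runs against SQLite; the cross-engine claims in the comments are documented behavior, but worth verifying on your own engine versions:

```python
# Same SQL text, different engine, different answer.
import sqlite3

conn = sqlite3.connect(":memory:")

# Integer division: SQLite and PostgreSQL return 0 for this query,
# while MySQL returns 0.5000 for the identical text.
print(conn.execute("SELECT 1/2").fetchone()[0])  # 0

# LIKE is case-insensitive for ASCII in SQLite by default, but
# case-sensitive in PostgreSQL: identical query, identical data,
# different result.
print(conn.execute("SELECT 'ABC' LIKE 'abc'").fetchone()[0])  # 1 (true)
```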

It is the same in Terraform. Yes, the HCL spec defines things very precisely, but you're kind of at the mercy of how the provider and its API decide to handle what you wrote, which can be very messy and inconsistent even when nothing changed on your side at all. LLM/agent usage feels a lot like that to me, in the sense that it's declarative and can be a bit lossy. As a result, there are things I could technically do in Terraform but never would, because I need imperativeness.

My main point is that people are trying to ram agents into a ton of cases where they might not need, or even want, to be used, and stuff like this gets written as a result. Maybe not, but I see it day to day. For instance, I have a really hard time convincing coworkers who complain about the reliability of MCP responses with their agents that they could simply take an API key, have the agent write a script that uses it, and strictly bound/define the response format they want, rather than letting the agent or server guess. For some reason there's an inclination to "let the agent decide how to do everything."
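Something like this sketch is all I mean; the endpoint, fields, and API_KEY environment variable are all hypothetical:

```python
# Have the agent write a boring script against the API and pin the
# response shape, instead of free-forming it through MCP every time.
import os
import requests

API_URL = "https://api.example.com/v1/tickets"  # hypothetical endpoint

def fetch_open_tickets() -> list[dict]:
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
        params={"status": "open"},
        timeout=10,
    )
    resp.raise_for_status()

    # Strictly bound the response format: fail loudly on anything
    # unexpected instead of letting the agent (or server) guess.
    return [
        {
            "id": int(item["id"]),
            "title": str(item["title"]),
            "priority": str(item["priority"]),
        }
        for item in resp.json()["tickets"]
    ]
```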

I think that's probably what this article is getting at, but what I am saying is: instead of creating these elaborate control flows with validation checks everywhere to rein in an unruly application making dumb decisions, why not just use the agent to write deterministic automation, rather than using the agent as the automation?