I can't wait for the world to catch up to process, session, et al. calculi. The closest I've seen is all this “choreo” stuff that is floating around nowadays, which is pretty neat in itself.
Is it web scale?
Abstractly? 100%. Realistically? Depends on how many trillions we can get from investors.
> Next up, LLMs as actors & processes in π-calculus.
You jest, but agents are of course already useful and fairly formal primitives. Distinct from actors, agents can have things like goals/strategies. There's a whole body of research on multi-agent systems that already exists and is even implemented in some model-checkers. It's surprising how little interest that creates in most LLM / AI / ML enthusiasts, who don't seem that motivated to use the prior art to propose / study / implement topologies and interaction protocols for the new wave of "agentic".
Ten years ago at my old university we had a course called Multi-Agent Systems. The whole year built up to it: a course in Formal Logic with Prolog, Logic-Based AI (LBAI) with a robot in a block world, also with Prolog, and finally Multi-Agent Systems (MAS).
In the MAS course, we used GOAL, which was a system built on top of Prolog. Agents had Goals, Perceptions, Beliefs, and Actions. The whole thing was deterministic. (Network lag aside ;)
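The GOAL-style cycle described above can be sketched in a few lines. This is a loose illustration, not actual GOAL syntax (real GOAL is Prolog-based and richer); all names here are made up for the example. Each tick the agent perceives, updates its beliefs, drops goals it now believes achieved, and deterministically picks an action:

```python
# Minimal sketch of a GOAL-style agent cycle (illustrative names, not GOAL syntax):
# perceive -> update beliefs -> drop achieved goals -> pick an action deterministically.

class Agent:
    def __init__(self, goals):
        self.beliefs = set()
        self.goals = list(goals)

    def perceive(self, percepts):
        # Percepts are simply merged into the belief base.
        self.beliefs |= set(percepts)

    def act(self):
        # Goals already believed true count as achieved and are dropped.
        self.goals = [g for g in self.goals if g not in self.beliefs]
        # Deterministic action selection: pursue the first remaining goal.
        return f"pursue({self.goals[0]})" if self.goals else "idle"

bot = Agent(goals=["has_flag", "at_base"])
bot.perceive(["has_flag"])   # the bot just picked up the flag
print(bot.act())             # -> pursue(at_base)
```

Because belief updates and action selection are pure functions of state, runs are reproducible, which is exactly the determinism point above.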
The actual project was that we programmed teams of bots for a Capture The Flag tournament in Unreal Tournament 3.
So it was the most fun possible way to learn the coolest possible thing.
The next year they threw out the whole curriculum and replaced it with Machine Learning.
--
The agentic stuff seems to be gradually reinventing a similar setup from first principles, especially as people want to use it in serious ways and we lean more in the direction of determinism.
The main missing feature in LLM land is reliability. (Well, that and cost and speed. Of course, "just have it be code" gives you all three for free ;)
I have an example from 2023, when Auto-GPT (think OpenClaw but with GPT-3.5 and early GPT-4 — yeah it wasn't great!) was blowing up.
Most people were just using it for the same task. "Research this stuff and summarize it for me."
I realized I could get the same result by just doing a Google search, scraping the top 10 results, and summarizing them.
Except it runs in 10 seconds instead of 10 minutes. And it actually runs deterministically instead of getting side tracked and going in infinite loops and burning 100x as much money.
It was like 30 lines of Python.
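The pipeline described above is roughly this shape. The three helpers here are stand-ins (in a real version they'd wrap a search API, an HTTP client plus HTML-to-text extraction, and a single LLM call); only the structure of the deterministic search → scrape → summarize flow is the point:

```python
# Hypothetical sketch of the "search, scrape top 10, summarize" pipeline.
# search/fetch/summarize are placeholders for a search API, an HTTP client
# + HTML parser, and one LLM call, respectively.

def search(query, n=10):
    # Stand-in for a real search API; returns a list of result URLs.
    return [f"https://example.com/result/{i}" for i in range(n)]

def fetch(url):
    # Stand-in for HTTP GET plus HTML-to-text extraction.
    return f"page text for {url}"

def summarize(texts, query):
    # Stand-in for a single LLM call over the concatenated page texts.
    joined = "\n\n".join(texts)
    return f"Summary of {len(texts)} pages about {query!r} ({len(joined)} chars)"

def research(query):
    # The whole pipeline: one search, ten fetches, one summarize call.
    urls = search(query, n=10)
    texts = [fetch(u) for u in urls]
    return summarize(texts, query)

print(research("multi-agent systems"))
```

There's no agent loop to wander off in: the control flow is fixed code, so it always makes exactly one search, ten fetches, and one model call.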
Regardless of whether it's framed as old-school MAS or new-school agentic AI, it seems like an area that's inherently multi-disciplinary, where it's good to be humble. You do see some research interested in leveraging the strengths of both (e.g. https://www.nature.com/articles/s41467-025-63804-5.pdf), but even if news of that kind of cross-pollination were more common, we should go further. Pleased to see TFA connecting agentic AI to Amdahl's law, for example, but we should be aggressively stealing formalisms from economics, game theory, and anywhere else we can get them. Somewhat related here is the CAMEL-AI mission and white papers: https://www.camel-ai.org/
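For what the Amdahl's law connection buys you concretely: with parallelizable fraction p of a task and n workers (agents, here), the maximum speedup is 1 / ((1 - p) + p / n). The numbers below are just an illustrative worked example, not from TFA:

```python
# Amdahl's law: speedup ceiling with parallel fraction p and n workers.
# Applied to agents: if only 80% of a task can be fanned out to parallel
# agents, even 10 agents top out around 3.6x, never 10x.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

print(round(amdahl_speedup(0.8, 10), 2))  # -> 3.57
```

The serial fraction (coordination, merging results, the orchestrator itself) dominates quickly, which is why piling on more agents gives diminishing returns.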
Could it just be that it's happening behind closed doors, with multi-agent systems being part of the secret sauce of post-training LLMs?
That's all well and good, but which protocol & topology will deliver the most dollars from investors?
That’s easy: the Torment Nexus.
That's topologically the same as the pyramid of torment & seems to me it's already saturated w/ lots of VC dollars.