> Star Us on GitHub and Get Exclusive Day 1 Badge for Your Networks
This made me close the tab.
Stars have been gamed on GitHub for a while, but given the single demo, my best guess is that this is trying to build hype before having any real utility.
If being thorough is wrong, then what exactly is right?
Yeah, I have to say that and the comments in this thread make me feel very ... directed.
There are already people using this in many applications; a new one is coming out today: https://x.com/milvusio/status/1991170853795709397?s=20
Milvus is following the playbook that they've been following for years: integrate with and boost any framework or product they can to maintain the appearance of a use case.
Good to see A2A getting more attention.
If you are a rustacean, we are building something in the A2A space as well. Though we don't have a sudden increase in stars :/
https://github.com/agents-sh/radkit
A2A is hard to ignore, especially for anyone working on multi-agent systems.
Nice, time to learn Rust!
I've been looking at all of the agent talk this past year with an open mind.
But I still do not know what a real use case for these would be (and don't say a travel agent). What is the point of these swarms of agents?
Can someone enlighten me?
While I am not familiar with OP's project,
I can somewhat answer this to the best of my knowledge.
Right now, businesses communicate with REST APIs.
That is why we have API gateways like AWS API Gateway, Apigee, WSO2 (a company I used to work for), Kong, etc., so businesses can securely deploy and expose APIs.
As LLMs get better, the idea is that we will eventually move to a world where AI agents do most business tasks. And businesses will want to expose AI agents instead of APIs.
This is where protocols like A2A come in. Google, partnering with some other giants, introduced the A2A protocol a while ago; it is now under the Linux Foundation.
It is a standard for one agent to talk to another agent regardless of the framework (LangChain, CrewAI, etc.) used to build the agent.
Can't you just put the agent behind a REST API and give the other agents a curl tool + doc?
You can.
Everyone will have their own versions of the REST endpoints, their own version of input params, and lots and lots of docs scattered around.
A standard will help the ecosystem grow: tooling, libraries, etc.
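To make the discovery point concrete, here is a minimal sketch of the A2A idea: each agent publishes a machine-readable "Agent Card" (commonly served at `/.well-known/agent.json`) and accepts JSON-RPC messages. The card below is made up, and the field names only loosely follow the public spec; treat this as illustrative, not as the authoritative wire format.

```python
import json

# Hypothetical Agent Card a flight-booking agent might publish.
SAMPLE_CARD = json.dumps({
    "name": "flight-booker",
    "description": "Books flights given dates and cities",
    "url": "https://agents.example.com/flight-booker",
    "version": "1.0.0",
    "skills": [{"id": "book_flight", "name": "Book a flight"}],
})

def pick_skill(card_json: str, skill_id: str):
    """Return the matching skill entry from an agent card, or None."""
    card = json.loads(card_json)
    return next((s for s in card.get("skills", []) if s["id"] == skill_id), None)

def jsonrpc_send(text: str, request_id: int = 1) -> dict:
    """Build an A2A-style JSON-RPC 'message/send' request envelope."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "message/send",
        "params": {"message": {"role": "user",
                               "parts": [{"kind": "text", "text": text}]}},
    }
```

The point of the standard is exactly this: any client can parse any agent's card and build a valid request without reading that agent's bespoke docs.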
A major reason agentic LLMs are so promising right now is because they just Figure It Out (sometimes).
Either the AI can figure it out, and it doesn't matter if there is a standardized protocol. Or the AI can't figure it out, and then it's probably a bad AI in the first place (not very I).
The difference between those two possibilities is a chasm far too wide to be bridged by the simple addition of a new protocol.
I think that's a bit shortsighted.
Having A2A is much more efficient and less error-prone. Why would I want to spend tons of tokens on an AI "figuring it out" if I can have the same effect for less using A2A? We can even train LLMs with A2A in mind, further increasing stability and decreasing cost.
A human can also figure everything out, but if I come across a well-engineered REST API with standard OAuth2, I am productive within 5 minutes.
We are working on an RPG game tailored for agents :) releasing soon.
I mean, I wrote bots to play MMORPGs when I was a teen/kid, but really, what's the point? Aren't games there to be enjoyed by things that can have experiences?
Maybe I interpreted it differently, but playing an RPG where every NPC is essentially its own agent/AI with its own context would be very interesting. In most RPGs, NPCs are very static.
I don't think devs can answer that one, you'll have to ask VCs
Can someone please explain what this means? I'm familiar with agentic development workflows but have no clue what this is or what I can do with it. Is it something like n8n, to connect agents with some workflow and let the workflow do stuff for me?
In the late 90s and early 2000s there was a bunch of academic research into collaborative multi-agent systems. This included things like communication protocols, capability discovery, platforms, and some AI. The classic and over-used example was travel booking -- a hotel booking agent, a flight booking agent, a train booking agent, etc all collaborating to align time, cost, location. The cooperative agents could add themselves and their capabilities to the agent community and the potential of the system as a whole would increase, and there would perhaps be cool emergent behaviours that no one had thought of.
This appears, to me, like an LLM-agent descendant of these earlier multi-agent systems.
I lost track of the research after I left academia -- perhaps someone here can fill in the (considerable) blanks from my overview?
As someone who got into ongoing multi-agent systems (MAS) research relatively recently (~4 years, mostly in distributed optimization), I see two major strands of it. Both of which are certainly still in search of the magical "emergence":
There is the formal view of MAS that is a direct extension of older work with cooperative and competitive agents. This tries to model and then prove emergent properties rigorously. I also count "classic" distributed optimization methods with convergence and correctness properties in this area. Maybe the best-known application of this is coordination algorithms for robot/drone swarms.
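A toy version of such a coordination algorithm is average consensus: each agent repeatedly replaces its value with the mean over itself and its neighbors, and on a connected graph all agents converge to the global average. This is my own illustrative sketch, not code from any particular MAS framework.

```python
def consensus_step(values, neighbors):
    """One synchronous round: every agent averages over its closed neighborhood."""
    return [
        (values[i] + sum(values[j] for j in neighbors[i])) / (1 + len(neighbors[i]))
        for i in range(len(values))
    ]

def run_consensus(values, neighbors, rounds=100):
    """Iterate the averaging rule; values converge to the global mean."""
    for _ in range(rounds):
        values = consensus_step(values, neighbors)
    return values

# A small ring of four agents with differing initial values (e.g. headings).
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
initial = [0.0, 10.0, 20.0, 30.0]
final = run_consensus(initial, neighbors)
# After enough rounds, every agent sits near the global average of 15.0.
```

The appeal of this strand of MAS is that convergence here is a theorem, not a hope: the averaging matrix is doubly stochastic, so the mean is preserved and the disagreement shrinks geometrically.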
Then, as a sibling comment points out, there is the influx of machine learning into the field. A large part of this so far was multi-agent reinforcement learning (MARL). I see this mostly applied to any "too hard" or "too slow" optimization problem and in some cases they seem to give impressive results.
Techniques from both areas are frequently mixed and matched for specific applications. Things like agents running a classic optimization but with some ML-based classifications and local knowledge base. What I see actually being used in the wild at the moment are relatively limited agents, applied to a single optimization task and with frequent human supervision.
More recently, LLMs have certainly taken over the MAS term and the corresponding SEO. What this means for the future of the field, I have no idea. It will certainly influence where research funding is allocated. Personally, I find it hard to believe LLMs would solve the classic engineering problems (speed, reliability, correctness) that seem to hold back MAS in more "real world" environments. I assume this will instead push research focus into different applications with higher tolerance for weird outputs. But maybe I just lack imagination.
Maybe this article can help you. It mentions the multi-agent research boom back in the 1990s. Later, reinforcement learning was incorporated, and by 2017, industrial-scale applications of multi-agent reinforcement learning were even achieved. Neural networks were eventually integrated too. But when LLMs arrived, they upended the entire paradigm. The article also breaks down the architecture of modern asynchronous multi-agent systems, using Microsoft's Magentic One as a key example. https://medium.com/@openagents/the-end-of-a-15-year-marl-era...
openagents aims to build agent networks with "open" ecosystems. Many agent systems these days are centered around workflows, but a workflow is only possible when you already know what kinds of agents will be on your team. When you allow any agent to join or leave a network, the workflow concept breaks down, so this project helps developers build an ecosystem for open collaboration.
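The distinction can be sketched in a few lines: instead of wiring a fixed workflow, the set of agents is discovered at call time. This is a hypothetical illustration of the idea, not the actual openagents API.

```python
class AgentNetwork:
    """Toy open network: agents join/leave freely and are looked up by capability."""

    def __init__(self):
        self._agents = {}  # agent name -> set of capability names

    def join(self, name, capabilities):
        self._agents[name] = set(capabilities)

    def leave(self, name):
        self._agents.pop(name, None)

    def find(self, capability):
        """Return names of currently connected agents offering a capability."""
        return [n for n, caps in self._agents.items() if capability in caps]

net = AgentNetwork()
net.join("hotel-bot", {"book_hotel"})
net.join("flight-bot", {"book_flight"})
net.leave("hotel-bot")   # the network keeps working as membership changes
```

A fixed workflow would break the moment `hotel-bot` disappears; an open network just returns an empty result for that capability and carries on.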
Agents in C# seem much better than in Python or TypeScript; I wish more frameworks would take that route.
This looks great. Open-source work in multi-agent systems is still quite fragmented, so having an A2A-compatible framework feels very useful.
A question: how difficult would it be to plug in custom agent personalities or domain-specific tools? If you have a roadmap or examples, I’d love to see them.
Hi, we are working on a feature allowing someone to quickly write and launch an agent into the network with zero code (just configuration).
Example config: https://github.com/openagents-org/openagents/blob/develop/ex...
We are doing the final testing, and this feature should be working very soon.
Definitely not malware: https://www.star-history.com/#openagents-org/openagents&type...
Hey, I still remember October 9th so well — that was the day we first went public with our project! I was so excited telling all my friends about it on social media. We'd been working towards this for months, getting everything ready.
Maybe it's malware, I haven't checked, but that seems like a pretty typical trajectory to me. I posted a project on HN and got a graph of roughly the same shape (though a much more modest magnitude). https://www.star-history.com/#maxbondabe/attempt&type=date&l...
Star counts go vertical when you launch your project and it's warmly received. ~850 stars in 11 days for an AI project doesn't seem at all crazy to me.
The README also contains a mild inducement to star the repo.
> Star Us on GitHub and Get Exclusive Day 1 Badge for Your Networks
Seems sufficient to explain any inauthentic behavior. Growth hacking tactics are certainly not typical of open source projects, but how that should factor into your judgment of this project's trustworthiness, I can't say. Caveat emptor.
That doesn’t necessarily mean it is malware. Is it not possible that they just paid for some kind of PR or fake stars?
Just playing devil's advocate, as I think your accusation isn't based on much merit and is quite a big claim to make.
If you've tried it, you know why everyone's so happy to star it.
Does anyone have any good resources on A2A in general?
https://a2a-protocol.org/latest/ is the best place to start.
Nice work — making multi-agent networks A2A-compatible in an open-source framework looks very promising.
Thank you! That really means a lot. Making A2A work seamlessly was a key goal for us. We can't wait to see what kind of networks and collaborations people start building.
Fancy logo, has a website, sudden rise in stars.
Checks all the boxes of open-source software that's waiting for enshittification.
Genuine question: do you know what that word means?