A major reason agentic LLMs are so promising right now is because they just Figure It Out (sometimes).
Either the AI can figure it out, in which case it doesn't matter whether there is a standardized protocol, or the AI can't figure it out, in which case it's probably a bad AI in the first place (not very "I").
The difference between those two possibilities is a chasm far too wide to be bridged by the simple addition of a new protocol.
I think that's a bit short-sighted. Having A2A is much more efficient and less error-prone. Why would I want to spend tons of tokens on an AI "figuring it out" when I can get the same effect for less using A2A?
We can even train the LLMs with A2A in mind, further increasing stability and decreasing cost.
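To make the efficiency point concrete, here is a minimal sketch of A2A-style discovery: instead of an LLM burning tokens probing an unknown API, the client reads the remote agent's self-describing "card" and picks a skill directly. The card fields and skill IDs below are assumptions for illustration, not taken from any real agent.

```python
# Hypothetical A2A-style agent card. In A2A, a remote agent publishes a
# machine-readable description of itself (served from a well-known URL);
# the field names and values here are illustrative assumptions.
AGENT_CARD = {
    "name": "invoice-agent",
    "url": "https://agents.example.com/invoice",
    "skills": [
        {"id": "create_invoice", "description": "Create a new invoice"},
        {"id": "query_status", "description": "Check invoice status"},
    ],
}

def find_skill(card: dict, skill_id: str):
    """Look up a declared skill instead of guessing endpoints."""
    return next((s for s in card["skills"] if s["id"] == skill_id), None)

# One dictionary lookup replaces an entire "figure it out" loop:
skill = find_skill(AGENT_CARD, "create_invoice")
```

The cost difference is the whole argument: reading a card is one cheap request, while trial-and-error exploration is many expensive model calls.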
A human can also figure everything out, but if I come across a well-engineered REST API with standard OAuth2, I am productive within five minutes.
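That "productive in five minutes" claim is exactly what a standard buys you: the OAuth2 client-credentials grant has the same shape against every compliant API, so the request below works unchanged regardless of vendor. The token URL and credentials are made up for illustration.

```python
import urllib.parse

def build_token_request(token_url: str, client_id: str, client_secret: str):
    """Build the standard OAuth2 client-credentials token request.

    Every compliant provider accepts this same form-encoded POST body,
    which is why the flow needs no per-API reverse engineering.
    """
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    return token_url, headers, body

# Same three lines for any OAuth2 provider; only the URL and secrets change.
url, headers, body = build_token_request(
    "https://auth.example.com/token", "my-client", "my-secret"
)
```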
You can.
Everyone will have their own version of the REST endpoints, their own version of the input params, and lots and lots of docs scattered everywhere.
A standard will help the ecosystem grow: tooling, libraries, etc.