Hey all! I recently gave a workshop talk at PyCon Greece 2025 about building production-ready agent systems.

To accompany the workshop, I put together a demo repo (I will add the slides to my blog soon: https://www.petrostechchronicles.com/): https://github.com/Aherontas/Pycon_Greece_2025_Presentation_...

The idea was to show how multiple AI agents can collaborate using FastAPI + Pydantic-AI, with protocols like MCP (Model Context Protocol) and A2A (Agent-to-Agent) for safe communication and orchestration.

Features:

- Multiple agents running in containers

- MCP servers (Brave search, GitHub, filesystem, etc.) as tools

- A2A communication between services

- Minimal UI for experimenting with tech-trend and repo analysis

I built this repo because most agent frameworks look great in isolated demos, but fall apart when you try to glue agents together into a real application. My goal was to help people experiment with these patterns and move closer to real-world use cases.

It’s not production-grade, but I’d love feedback, criticism, or war stories from anyone who’s tried building actual multi-agent systems. Big questions:

Do you think agent-to-agent protocols like MCP/A2A will stick?

Or will the future be mostly single powerful LLMs with plugin stacks?

Thanks — excited to hear what the HN crowd thinks!


Since you're framing this as a learning resource, here are a couple things I see:

Your views are not following a single convention: some of them return dictionaries, some return base JSONResponse objects, and others return properly defined Pydantic schemas. I didn't run the code, but I'd venture to guess your generated documentation is not comprehensive, nor is it cohesive.

I'd also further extend this into your agent services; passing bare dictionaries with arbitrary fields into what is supposed to be a modular logic handler is pretty outdated. You're defining a functional (methods) interface; data structures are the other half of the equation.
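
A minimal sketch of what typed data exchange between services might look like, assuming a hypothetical message schema (the field names are illustrative):

```python
from pydantic import BaseModel, ValidationError

# Hypothetical schema for an agent-to-agent request; with a bare dict the
# handler would have to probe for keys and guess at types instead.
class AgentRequest(BaseModel):
    task: str
    payload: dict[str, str] = {}

def handle(request: AgentRequest) -> str:
    # The handler can rely on validated, typed fields.
    return f"running {request.task}"

req = AgentRequest.model_validate({"task": "analyze_repo"})
print(handle(req))  # running analyze_repo

try:
    AgentRequest.model_validate({"payload": {}})  # missing required "task"
except ValidationError:
    print("rejected malformed request")
```

Malformed messages fail loudly at the boundary rather than deep inside the handler.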

This plays into the way that Agents (as in the context of this system, versus Pydantic AI agents) are wrapped arbitrarily. I'd favor making the conversion from a Pydantic agent to a native agent part of the system's interface design, rather than re-implementing a subset of the agentic functions in your own BaseAgent and ending up with an `agent.agent` context.
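
One way to avoid the `agent.agent` nesting is to define the capability the orchestrator needs as an interface and adapt to it once, rather than re-wrapping. This is a pattern sketch using only the standard library; `EchoAgent` is a stand-in for a real Pydantic-AI agent, not its actual API.

```python
from typing import Protocol

# The orchestrator declares what it needs from an agent...
class Runnable(Protocol):
    def run(self, prompt: str) -> str: ...

# ...and any object with a matching run() satisfies it structurally,
# so there is no wrapper hierarchy and no nested agent.agent access.
class EchoAgent:  # illustrative stand-in for a framework agent
    def run(self, prompt: str) -> str:
        return f"echo: {prompt}"

def orchestrate(agent: Runnable, prompt: str) -> str:
    return agent.run(prompt)

print(orchestrate(EchoAgent(), "hello"))  # echo: hello
```

The conversion from framework agent to native interface then lives in one adapter at the system boundary instead of being re-implemented per agent.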

Also, since this is a web-centric application (that leverages agents) dropping all of your view functions into main.py leaves something to be desired; break up your views into logical modules based on role.

Everyone's learning, and I hope this helps someone in their journey. Kudos for putting your code out there as a resource; some of us can't help ourselves from reading it.

Thanks a lot for taking the time to go through the code and leave such a detailed comment. I really appreciate thoughtful feedback, good or bad; it always helps.

Just to clarify: this repo is meant as a quick demo/learning resource after experimenting (for a limited time) with Pydantic-AI, FastAPI, and agents. It’s not something I’d structure or ship that way in a production environment or at a real job.

That said, your points are spot on:

- Consistency in return types: Pydantic schemas over mixed dicts/JSONResponse. The inconsistency crept in because I glued together code from other projects and generated snippets, so anyone using this in a real case will need to refactor.

- Structuring data exchange between agents with typed models instead of raw dicts. (Totally correct too.)

- Avoiding redundant abstractions in the agent base. (I wouldn't fully agree on that one; what counts as a redundant abstraction is an area where anyone can have a different opinion.)

- Breaking views into logical modules rather than dropping them all into main.py. (I fully agree again.)

These are all best practices I’d absolutely follow in production code, and more besides, since the codebase isn't 100% robustly structured. It’s great to see them highlighted here so others reading can learn from the contrast between “demo” and “real-world” implementations.

Again, thanks for diving in; this kind of feedback is exactly what makes sharing experiments valuable.

Do you have any recommendations for articles or example projects of what a good Python project (that isn't Django-based) looks like in 2025? Seeing things like Pydantic-derived types leak everywhere seems wrong coming from my Java background.

Ooh, great question. I don't have a good link. I would say that most of the concepts that I'm expressing come from personal experience and an interest in optimizing my own codebases for maintainability.

Pydantic is often misunderstood, and developers who aren't familiar with type-safe Python love to raise criticisms of it. But the way you should think about it is that it's essentially a replacement for a built-in type system for data, like a dataclass. Pydantic takes it further by giving you serialization and deserialization that is customizable, with integrations with, for example, SQLAlchemy, where you can serialize directly from your ORM. One of the major benefits I find is that it provides common, repeatable interfaces for validation and data formatting.
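
The ORM-to-schema flow can be sketched like this. To keep it self-contained, a plain class stands in for a SQLAlchemy row; with a real ORM the `model_validate` call works the same way, and the class and field names here are purely illustrative.

```python
from pydantic import BaseModel, ConfigDict

class UserRow:  # stand-in for a SQLAlchemy ORM row
    def __init__(self, name: str, email: str):
        self.name = name
        self.email = email

class UserSchema(BaseModel):
    # from_attributes lets Pydantic read fields off arbitrary objects,
    # which is how ORM-to-schema serialization is typically wired up.
    model_config = ConfigDict(from_attributes=True)
    name: str
    email: str

user = UserSchema.model_validate(UserRow("ada", "ada@example.com"))
print(user.model_dump())  # {'name': 'ada', 'email': 'ada@example.com'}
```

The schema validates and reshapes whatever the ORM hands it, giving you one repeatable interface for the conversion.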

Essentially, it has become incredibly popular because it provides a consistent interface that developers understand for accomplishing these common patterns.

When it comes to "leaking" derived types through things like OpenAPI specs and documentation, they don't really expose the underlying object's functionality, but they do expose the object's structure, so you can easily generate documentation that includes the expected response bodies. Whether those get serialized into JSON or something else, the parameters, types, and optionality of each field are formally defined by Pydantic in a way that's straightforward for the documentation generation to interpret.

In most cases you'll disable the generated documentation links from FastAPI in production.

Exactly this. From SQLAlchemy to Pydantic model, from Pydantic model to Pydantic DTO, from Pydantic DTO to JSON/protobuf/binary to ship over the wire…

Okay, so seeing Pydantic types used in every part of an app, not just the API layer, is normal.

Yes. It’s the only way to ensure typing in a dynamically typed world. You can use attrs, you can validate by hand, but Pydantic does it all.

Pydantic types are designed to be shipped. Just make sure you strip any security stuff or PII. Pydantic and JSON work very, very well together.
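
One way to strip secrets before shipping is to mark the field excluded at the schema level; the model and field names here are illustrative.

```python
from pydantic import BaseModel, Field

class Account(BaseModel):  # hypothetical model
    username: str
    # exclude=True keeps the field usable in code but strips it from
    # every model_dump() / model_dump_json() call by default, so the
    # secret never leaves the process by accident.
    api_key: str = Field(exclude=True)

acct = Account(username="petros", api_key="secret-token")
print(acct.model_dump_json())  # {"username":"petros"}
```

Declaring the exclusion on the model (rather than remembering `exclude={...}` at each call site) makes the safe behavior the default.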

This is a great repo, I was looking for something like this! I believe MCP and A2A will probably remain and others will build on top of them actually

Thanks a lot. I too believe MCP and A2A aren't going anywhere anytime soon, although both need more features and security hardening to really survive in production-like or enterprise environments.

I've been very happy with pydantic-ai, it blows the rest of the python ai ecosystem out of the water

Are you using Pydantic AI for structured output? If so, have you also tried instructor?

`outlines` (https://github.com/dottxt-ai/outlines) is very good and supported by vLLM as a backend structured output provider (https://docs.vllm.ai/en/v0.8.2/features/structured_outputs.h...) for both local and remote LLMs. vLLM is probably the best open source tooling for the inference side right now.

I did look at instructor, and for structured output pydantic-ai and instructor are probably about the same, but pydantic-ai supports a ton of other stuff that isn't part of instructor's feature set. For me the killer features were the ability to serialize/deserialize conversations as JSON, frictionless tool-calling, and the ability to mock the LLM client for testing.

I personally just switched to https://docs.boundaryml.com/guide/comparisons/baml-vs-pydant...

It just feels a bit more polished, especially the testing part.

Haven't tried BAML; I'll give it a shot. I'm really curious to see all the features it supports.


Without all the jargon, can you kindly explain what your project actually does? I did read the repo and still don't get it.

It builds a ChatGPT-like UI with agents that serve you either the latest news on a specific tech topic (fetched via MCP servers for Brave Search, Hacker News, and GitHub, for trending repos or repos surfaced in the response) or general-knowledge answers. The whole purpose of the repo is to be a general example of how agents talk to agents, how to pair them with MCP servers, and why we need MCP servers in the first place (for example, it's better to run calculations in code rather than in the LLM, which hallucinates, and then hand the answer back to the LLM). I hope that clarifies what it does.

Looks interesting! Thanks for posting it.

Would be great if you can add the slides or video of the presentation to the repo. Maybe also add a description and update the summary at the top?

It seems like the project is a multi-agent playground & demo to learn how to make AI agents work together?

Hello, exactly what you're describing is what this repo does. I'll probably upload the whole PyCon presentation to my blog today, if you're interested. You'll find it at: https://www.petrostechchronicles.com/

Can't really grasp much from the repo; the slides are needed.

That's correct; the slides mostly show how MCP and A2A work, and the basic architectures between them and agents in general. I'll probably upload the presentation slides to my blog today.