These abstractions are nice for not getting locked in with one LLM provider - but as with LangChain, once you use a more niche feature the bugs do shine through. I tried it out with structured output for Azure OpenAI but had to give up, since something somewhere was broken and it's difficult to figure out whether it's the abstraction or the LLM provider's own library that the abstraction uses.
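For reference, the setup was roughly along these lines (a from-memory sketch, not my exact code: the Invoice model, deployment name and endpoint are placeholders, and the provider/model class names and the output_type parameter can differ between pydantic-ai releases - older ones used result_type and result.data):

    # Sketch: structured output from Azure OpenAI via Pydantic AI.
    from pydantic import BaseModel
    from pydantic_ai import Agent
    from pydantic_ai.models.openai import OpenAIModel
    from pydantic_ai.providers.azure import AzureProvider

    class Invoice(BaseModel):
        vendor: str
        total: float

    model = OpenAIModel(
        'gpt-4o',  # the Azure deployment name
        provider=AzureProvider(
            azure_endpoint='https://my-resource.openai.azure.com',
            api_version='2024-08-01-preview',
            api_key='...',
        ),
    )

    agent = Agent(model, output_type=Invoice)
    result = agent.run_sync('Extract vendor and total: ACME Corp, $1,234.50')
    print(result.output)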
Nevertheless I would strongly recommend not using the AI providers' libraries directly, as you quickly get locked in to an extremely fast-paced market where today's king can change weekly.
Pydantic AI maintainer here! Did you happen to file an issue for the problem you were seeing with Azure OpenAI?
The vast majority of bugs we encounter are not in Pydantic AI itself but rather in having to deal with supposedly OpenAI Chat Completions-compatible APIs that aren't really compatible, and with local models run through e.g. Ollama or vLLM that tend not to be the best at tool calling.
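For context, that usually means a setup along these lines (a sketch; the base URL and model name are just examples, and the exact provider/model class names can vary by release):

    # Sketch: pointing Pydantic AI at an OpenAI-compatible endpoint (Ollama here).
    from pydantic_ai import Agent
    from pydantic_ai.models.openai import OpenAIModel
    from pydantic_ai.providers.openai import OpenAIProvider

    model = OpenAIModel(
        'llama3.1',  # whatever the local server exposes
        provider=OpenAIProvider(base_url='http://localhost:11434/v1', api_key='ollama'),
    )
    agent = Agent(model)
    print(agent.run_sync('Say hello').output)

Whether tool calling and structured output actually behave like the real Chat Completions API depends entirely on the server and the model, which is where most of those bug reports come from.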
The big three model providers (OpenAI, Anthropic, Google) and enterprise platforms (Bedrock, Vertex, Azure) see the vast majority of usage, and our support for them is very stable. It remains a challenge to keep up with their pace of shipping new features and models, but thanks to our 200+ contributors we're usually not far behind the bleeding edge in terms of LLM API feature coverage. As you may have seen, we're very responsive to issues and PRs on GitHub, and to questions on Slack.
Thanks for working on pydantic-ai. I dug up the issue - it seems to have been fixed in recent releases related to how strictness is handled.
In this example, you get locked into pydantic_ai, another proprietary provider.
How do you mean? Pydantic AI (which I'm a maintainer of) is completely open source.
We do have a proprietary observability and evals product Pydantic Logfire (https://pydantic.dev/logfire), but Pydantic AI works with other observability tools as well, and Logfire works with other agent frameworks.
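For anyone curious, the wiring is only a couple of lines with the current SDKs (a sketch; these function names exist today but may change):

    # Sketch: instrumenting Pydantic AI with the Logfire SDK (OpenTelemetry under the hood).
    import logfire
    from pydantic_ai import Agent

    logfire.configure()               # reads the Logfire token from the environment
    logfire.instrument_pydantic_ai()  # emit spans for agent runs, model requests, tool calls

    agent = Agent('openai:gpt-4o')
    agent.run_sync('Hello')           # traces show up in Logfire, or any OTLP backend you configure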
Thanks for clarifying. I guess my comment was more directed at the fact that Pydantic, the company, is 1) VC-backed, 2) unclear about how/when/what it will monetize, and 3) unclear about how that will affect the open source stuff.
I strongly believe you guys should be compensated very well for what you bring to the ecosystem, but the probability of open source projects being enshittified by private interests is non-trivially high.
I work at Pydantic, and while the future is obviously unpredictable, I can vouch for all of us in that we do not intend to ever start charging for any of our open source things. We've made a very clear delineation between what is free (pydantic, pydantic-ai, the Logfire SDK, etc.) and what is a paid product (the Logfire SaaS platform). Everything open source is liberally licensed such that, no matter the fate of the company, it can be forked. Even the Logfire SDK, the thing most integrated with our commercial offering, speaks OTLP and hence you can point it at any other provider, so there's basically no lock-in.
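Concretely, "speaks OTLP" means you can keep the SDK and point the data at any OpenTelemetry-compatible backend via the standard env vars (a sketch; the endpoint is just an example for a generic collector):

    # Sketch: using the Logfire SDK without the Logfire SaaS platform.
    import os
    import logfire

    # Standard OpenTelemetry exporter settings; any OTLP-compatible backend works.
    os.environ['OTEL_EXPORTER_OTLP_ENDPOINT'] = 'http://localhost:4318'
    # os.environ['OTEL_EXPORTER_OTLP_HEADERS'] = 'authorization=Bearer ...'

    logfire.configure(send_to_logfire=False)  # keep the instrumentation, skip our platform
    logfire.info('hello from a non-Logfire backend')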
I appreciate that, and honestly I never doubt the employees, or perhaps even the founders. Looking into the future, it's the investors that are not to be trusted, and they call the shots commensurate with their ownership stake - which is, again, opaque to us in the case of Pydantic.
And taking this one step further, it's not that investors are evil people who want to do bad things, but it's their explicit job to make returns on their investment - it's the basic mechanism behind the adage "show me the incentive and I'll show you the outcome".