> All I know is that with the same LLM models, `openai.client.chat.completions` + a custom prompt to pass in the pydantic JSON schema + post-processing to instantiate SomePydanticModel(*json) creates objects successfully whereas vanilla pydantic-ai rarely does, regardless of the number of retries.
That's very odd; would you mind sharing the Pydantic model / schema so I can have a look? (I'm a maintainer) What you're doing with a custom prompt that includes the schema sounds like our Prompted output mode (https://ai.pydantic.dev/output/#prompted-output), but you should get better performance still with the Native or Tool output modes (https://ai.pydantic.dev/output/#native-output, https://ai.pydantic.dev/output/#tool-output), which leverage the APIs' native strict JSON schema enforcement.
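For reference, a minimal sketch of what Native output mode looks like (placeholder model and prompt, not your schema; swap NativeOutput for ToolOutput to route the same schema through tool calling instead):

    from pydantic import BaseModel
    from pydantic_ai import Agent, NativeOutput


    class Person(BaseModel):
        name: str
        age: int


    # NativeOutput asks the provider to enforce the JSON schema itself;
    # ToolOutput(Person) would use function/tool calling for the same model.
    agent = Agent('openai:gpt-4o', output_type=NativeOutput(Person))

    result = agent.run_sync('Extract the person from: "Jane Doe, 42"')
    print(result.output)  # a validated Person instance (result.data on older versions)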
Thanks for the reply. Native output is indeed what I'm shooting for. I can't share the model directly right now, but putting together a minimal repro and moving towards an actual bug report is on my todo list.
One thing I can say, though: my models differ from the docs examples mostly in that they are not "flat" with simple top-level data structures. They have lots of nested models as fields.
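Something roughly shaped like this, just to illustrate the nesting (hypothetical names, not my real model):

    from pydantic import BaseModel, Field


    class Customer(BaseModel):
        name: str
        email: str | None = None


    class LineItem(BaseModel):
        description: str
        quantity: int = Field(ge=1)
        unit_price: float


    class Invoice(BaseModel):
        # nested models as fields rather than a flat top-level structure
        customer: Customer
        items: list[LineItem]
        notes: dict[str, str] = {}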
Thanks, a reproducible example would be very useful. Note that earlier this month I made Pydantic AI try a lot harder to use strict JSON mode (in response to feedback from Python creator Guido of all people: https://github.com/pydantic/pydantic-ai/issues/2405), so if you haven't tried it in a little while, the problem you were seeing may very well have been fixed already!
> https://github.com/pydantic/pydantic-ai/issues/2405
Thanks, this is a very interesting thread on multiple levels. It does seem related to my problem, and I also learned about field docstrings :) I'll try moving my dependency closer to the bleeding edge.
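In case anyone else was curious about the field docstrings bit: as I read the thread, it's Pydantic's attribute-docstrings setting, which pulls docstrings under fields into the JSON schema as descriptions, something like this (double-check against the Pydantic docs):

    from pydantic import BaseModel, ConfigDict


    class Fruit(BaseModel):
        # pick up the docstrings below as field descriptions in the JSON schema
        model_config = ConfigDict(use_attribute_docstrings=True)

        name: str
        """The common name of the fruit."""

        color: str
        """The dominant color when ripe."""


    print(Fruit.model_json_schema()['properties']['name']['description'])
    # -> 'The common name of the fruit.'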