Hi HN! I built Oxyde because I was tired of duplicating my models.

If you use FastAPI, you know the drill. You define Pydantic models for your API, then define separate ORM models for your database, then write converters between them. SQLModel tries to fix this but it's still SQLAlchemy underneath. Tortoise gives you a nice Django-style API but its own model system. Django ORM is great but welded to the framework.

I wanted something simple: your Pydantic model IS your database model. One class, full validation on input and output, native type hints, zero duplication. The query API is Django-style (.objects.filter(), .exclude(), Q/F expressions) because I think it's one of the best designs out there.

Explicit over implicit. I tried to remove all the magic. Queries don't touch the database until you call a terminal method like .all(), .get(), or .first(). If you don't explicitly call .join() or .prefetch(), related data won't be loaded. No lazy loading, no surprise N+1 queries behind your back. You see exactly what hits the database by reading the code.
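The "lazy until a terminal method" behaviour can be sketched in a few lines of plain Python. This is an illustrative toy, not Oxyde's internals: the query object only accumulates predicates, and nothing runs until something like .all() is called.

```python
# Illustrative sketch (not Oxyde's actual implementation): a queryset that
# only records filters and does no work until a terminal method runs.
class LazyQuery:
    def __init__(self, rows):
        self._rows = rows          # stand-in for the database
        self._filters = []         # accumulated predicates, no I/O yet
        self.executed = False      # lets us observe when execution happens

    def filter(self, predicate):
        q = LazyQuery(self._rows)
        q._filters = self._filters + [predicate]
        return q                   # still lazy: nothing has run

    def all(self):                 # terminal method: the only place work happens
        self.executed = True
        rows = self._rows
        for f in self._filters:
            rows = [r for r in rows if f(r)]
        return rows

users = [{"name": "Ada", "age": 36}, {"name": "Bob", "age": 17}]
q = LazyQuery(users).filter(lambda u: u["age"] >= 18)
assert q.executed is False         # building the query touched nothing
adults = q.all()                   # execution happens only here
assert adults == [{"name": "Ada", "age": 36}]
```

Reading the code, the single .all() call is the only point where the "database" is hit, which is the property the paragraph describes.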

Type safety was a big motivation. Python's weak spot is runtime surprises, so Oxyde tackles this on three levels: (1) when you run makemigrations, it also generates .pyi stub files with fully typed queries, so your IDE knows that filter(age__gte=...) takes an int, that create() accepts exactly the fields your model has, and that .all() returns list[User] not list[Any]; (2) Pydantic validates data going into the database; (3) Pydantic validates data coming back out via model_validate(). You get autocompletion, red squiggles on typos, and runtime guarantees, all from the same model definition.
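For readers wondering what a generated stub buys you, here is a hypothetical sketch of what such a .pyi might contain. The class and method names, the specific keyword parameters, and whether the methods are async are all guesses for illustration, not Oxyde's actual output:

```python
# Hypothetical models.pyi excerpt -- illustrative only, not Oxyde's real stubs.
# The point: the type checker sees concrete field names and types per model.
class User:
    id: int
    name: str
    age: int
    objects: "UserManager"

class UserManager:
    # filter() advertises the model's lookups with their real types
    def filter(self, *, age__gte: int = ..., name: str = ...) -> "UserQuerySet": ...
    # create() accepts exactly the model's fields, nothing else
    async def create(self, *, name: str, age: int) -> "User": ...

class UserQuerySet:
    async def all(self) -> list["User"]: ...      # list[User], not list[Any]
    async def first(self) -> "User | None": ...
```

With a stub like this in place, `filter(age__gte="x")` gets a red squiggle before the code ever runs.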

Why Rust? Not for speed as a goal. I don't do "language X is better" debates. Each one is good at what it was made for. Python is hard to beat for expressing business logic. But infrastructure stuff like SQL generation, connection pooling, and row serialization is where a systems language makes sense. So I split it: Python handles your models and business logic, Rust handles the database plumbing. Queries are built as an IR in Python, serialized via MessagePack, sent to Rust which generates dialect-specific SQL, executes it, and streams results back. Speed is a side effect of this split, not the goal. But since you're not paying a performance tax for the convenience, here are the benchmarks if curious: https://oxyde.fatalyst.dev/latest/advanced/benchmarks/
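To make the pipeline concrete, here is a rough stdlib-only sketch of the Python half: the query becomes plain data (an IR), which is serialized to bytes and handed across the boundary. The IR schema here is invented for illustration, and real Oxyde uses MessagePack; json stands in so the sketch runs anywhere.

```python
# Rough sketch of the Python side of the pipeline. Assumptions: the IR field
# names are made up, and json stands in for MessagePack.
import json

def build_ir(model: str, filters: dict) -> dict:
    # Python side: describe the query as data; generate no SQL here
    return {
        "op": "select",
        "table": model,
        "where": [{"field": k, "cmp": "eq", "value": v} for k, v in filters.items()],
    }

ir = build_ir("users", {"age": 30})
wire = json.dumps(ir).encode()     # hand a compact byte payload across the boundary
# The Rust side would decode this and emit dialect-specific SQL, roughly:
#   SELECT * FROM users WHERE age = $1
assert json.loads(wire) == ir      # the query description survives the trip losslessly
```

Since the payload is just bytes, the Rust side can do SQL generation and execution without ever touching Python objects, which is what makes the split clean.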

What's there today: Django-style migrations (makemigrations / migrate), transactions with savepoints, joins and prefetch, PostgreSQL + SQLite + MySQL, FastAPI integration, and an auto-generated admin panel that works with FastAPI, Litestar, Sanic, Quart, and Falcon (https://github.com/mr-fatalyst/oxyde-admin).

It's v0.5, beta, active development, API might still change. This is my attempt to build the ORM I personally wanted to use. Would love feedback, criticism, ideas.

Docs: https://oxyde.fatalyst.dev/

Step-by-step FastAPI tutorial (blog API from scratch): https://github.com/mr-fatalyst/fastapi-oxyde-example

Why would one want to couple these two? Doesn't that couple, say, your API interface with your database schema? Whereas in reality these are separate concepts, even if, yes, sometimes you return a 'user' from an API that looks the same as the 'user' in the database? Honest question, I only just recently got into FastAPI and I was a bit confused at first that yes, it seemed like a lot of duplication, but after a little bit of experience, they are different things that aren't always the same. So what am I missing?

The ORM doesn't force you to use the DB model as your API schema. It's a regular Pydantic BaseModel, so you can make separate request/response schemas whenever you need to. For simple CRUD, using the model directly saves boilerplate. For complex cases, you decouple them as usual. The goal is not one model for everything, it's one contract. Everything speaks Pydantic, whether it's your API layer or your database layer.

You might be interested in a library that flips around the concept of an ORM, like sqlc [1] or aiosql [2]. You specify just what you want from the database, without needing to couple so much API with database tables.

[1] https://sqlc.dev/

[2] https://nackjicholson.github.io/aiosql/

Nice! But what about type checking?

I think they should be related but not the same.

With proper DRY, you shouldn't need to repeat the types, docstrings and validation rules. You should be able to say "these fields from user go on its default presenter, there are two other presenters with extra fields" without having to repeat all that info. I have a vague memory that Django Rest Framework does something of the sort.

This feels hard to express in modern type systems though. Perhaps the craziness that is Zod and Typescript could do it, but probably not Python.

Yeah I have always struggled to figure out why I would use SQLModel.

Big fan of FastAPI but I think SQLModel leads to the wrong mental model that somehow db model and api schema are the same.

Therefore I insist on using SQLAlchemy for db models and pydantic for api schemas as a mental boundary.

[deleted]

I also find this weird ORM fascination really strange. Besides the generic "ORMs are the Vietnam of computer science" feeling, I find that the average database-to-ORM/REST setup ends up with at least one of:

a) you somehow actually have a "system of record", so modelling something in a very CRUD way makes sense, but, on the other hand, who the hell ends up building so many system of record systems in the first place to need those kinds of tools and frameworks?

b) you pretend your system is somehow a system of record when it isn't, and modelling everything as CRUD makes it a uniform ball of mud. I don't understand what is so important about being able to uniformly "CRUD" a bunch of objects. The three most important parts of an API (making it easy to use right, hard to use wrong, and easy to see the intent behind how the system is meant to be used) are lost that way.

c) you leak your API into your database, or your database into your API, compromising both, so both suffer.

"The Vietnam of Computer Science" was written 20 years ago (2006, even), and it didn't kill off ORMs then. We've had 20 years of ORM improvements since. We long ago accepted Vietnam (the country) as what it is and what it will be for the foreseeable future. We should do the same with ORMs.

I for one don't want to write in a low level assembly language, and shouldn't have to in 2026. Yet, SQL still feels like one.

I've written a lot of one-off products using an ORM, and I don't regret any of the time saved by doing so. When and if I make $5-50M a year on a shipped product, okay, maybe I'll think about optimizing. And then I'll hire an expert while I gallivant around Europe.

SQL is a pretty high-level, declarative language. It's unnecessarily wordy though, and not very composable.

The problem with ORMs is that they usually give you the wrong abstraction. They map poorly onto how a relational database works and what it is capable of. The cost is usually poor performance, and only rarely obvious bugs. So they're really easy to get started with; when they start costing you, you optimize the few most critical paths and just pay more and more for the DB. This looks like an okay deal for the industry, it seems.

A lot of people have the misconception that Pydantic models are only useful for returning API responses, a notion perpetuated by all-in-one frameworks like FastAPI.

They are useful in and of themselves as internal business/domain objects, though. For a lot of cases I prefer lighter weight dataclasses that have less ceremony and runtime slowdown with the validation layer, but we use Pydantic models all over the place to guarantee and establish contracts for objects flowing thru our system, irrespective of external facing API layer.

One advantage of something like attrs/cattrs is you can use the regular attrs objects inside the module without all the serialisation/validation baggage, then use cattrs to do the serialisation at the edge where you need it.

Pydantic models are validating. Hence their natural place is interfaces with external systems, outside the reach of your typechecker.

dataclasses can do validation too and are faster

Dataclasses can skip validation, hence they are faster, when you can trust the inputs.
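The trade-off these comments are pointing at can be shown in a few lines of plain stdlib Python (illustrative only): a dataclass trusts its inputs, while a validating constructor, the job Pydantic does at system boundaries, pays a runtime cost to catch bad data.

```python
# Sketch of the trade-off: a dataclass accepts whatever you pass (trusting
# the caller), while a validating constructor rejects bad input at the edge.
from dataclasses import dataclass

@dataclass
class UserRow:
    age: int                       # annotation only; nothing is enforced at runtime

trusted = UserRow(age="not an int")   # silently accepted: type hints don't run
assert trusted.age == "not an int"

def validated_user(age) -> UserRow:
    # A hand-rolled stand-in for the kind of check Pydantic automates
    if not isinstance(age, int):
        raise TypeError(f"age must be int, got {type(age).__name__}")
    return UserRow(age=age)

assert validated_user(30).age == 30
try:
    validated_user("30")
except TypeError:
    pass                           # bad external input is caught, at a runtime cost
```

Inside a typechecked codebase the dataclass's free ride is usually safe; at the edges, the extra check earns its keep.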

With this you don't need two sources of truth on the backend. Previously you would end up with one set of validation at the DB/ORM level and a second set on the data transfer object, and you'd have to make sure the types lined up. Now the data transfer object automatically inherits the correct types.

The two sources of truth are for two disparate adapters. Neither the API nor the DB define the domain, but both must adapt the domain to their respective implementation concerns.

The ontology of your persistence layer should be entirely uncoupled from that of your API, which must accommodate an external client.

In theory, anyway.

I'd rather define my db domain in-code so I do not have to worry about writing the queries without type hinting.

Raw SQL -> eh, I don't have the patience.

Raw SQL w/ type hints -> better; I at least get a compilation error when I do something wrong, and preferably a suggestion.

ORM -> this usually introduces its own abstraction that needs its own maintenance, but at least it's more code-oriented.

Yes, SQL is an awesome solution to querying DB's, hence I prefer option 2

Exactly, that's the idea.

Lol I never knew django orm is faster than SQLAlchemy. But having used both that makes sense.

> Why Rust? ... Rust handles the database plumbing. Queries are built as an IR in Python, serialized via MessagePack, sent to Rust which generates dialect-specific SQL, executes it, and streams results back. Speed is a side effect of this split, not the goal.

Nice.

So what does it take to deploy this, dependency wise?

> Lol I never knew django orm is faster than SQLAlchemy.

I don’t believe that for a second. Both are wonderful projects, but raw performance was never one of Django ORM’s selling points.

I think its real advantage is making it easy to model new projects and make efficient CRUD calls on those tables. Alchemy’s strong point is “here’s an existing database; let users who grok DB theory query it as efficiently and ergonomically as possible.”

I was surprised too when I saw the results. The benchmarks test standard ORM usage patterns, not the full power of any ORM. SQLAlchemy is more flexible, but that flexibility comes with some overhead. That said, the ORM layer is rarely the bottleneck when working with a database. The benchmarks were more about making sure that all the Pydantic validation I added comes for free, not about winning a speed race.

Just pip install oxyde, that's it. The Rust core (oxyde-core) ships as pre-built wheels for Linux, macOS, and Windows, so no Rust toolchain needed. Python-side dependencies are just pydantic, msgpack, and typer for the CLI. Database drivers are bundled in the Rust core (uses sqlx under the hood), so you don't need to install asyncpg/aiosqlite/etc separately either.

a bit tangent question: the communication between Python & Rust, could the pyo3 ser/de of Python objects be better than MsgPack?

Good question. Working with Python objects in PyO3 requires holding the GIL. With MessagePack, Python serializes to bytes, hands them off, and Rust works completely GIL-free from that point. Same on the way back. So the GIL is held only for the brief serde step, not during SQL generation or execution.

I’m curious: why was this marked dead?

As a non ORM person - I do love the Pydantic functionality that comes out of the box w pyscopg3.

https://www.psycopg.org/psycopg3/docs/advanced/typing.html#e...

I mean it would work with attrs and dataclasses as well and you'd also get static type checking and they are faster.

Oh this is super cool. Coming from Java I've long missed JDBI and Rosetta which makes writing SQL Queries in Java a dream. I've toyed around with a similar style interface for python, and looking at this give me hope I can achieve it.

wow, big if true. Our core codebase is still on psycopg2 and we do this mapping ourselves. Type introspection would be very handy for reducing that boilerplate.

This is pretty similar to what django-ninja offers as well with their Schema modeling [1]. I found it to be a big help personally and always wanted something for fastapi.

Django-ninja is essentially the fastapi of the django world so this library would be enough for me to go all in on fastapi. I just never felt like I found a python orm with the level of ergonomics of django + async of modern python

[1] https://django-ninja.dev/guides/response/django-pydantic/

Seeing a new library in 2026 that uses Django-style filter syntax is really surprising.

With SQLAlchemy approach I can easily find all usages of a particular column everywhere.

    .filter(Folder.user_id == user_id)
Versus

    .filter(user_id=user_id)
Grepping by user_id will obviously produce hundreds and thousands of matches, probably from dozens of tables.

You can't call filter() without a model. It's always Folder.objects.filter(user_id=user_id), so the context is right there in the code. Plus the generated .pyi stubs give your IDE full type info per model, so "go to usages" works through the type system.

When it’s a simple single-line query: sure.

When they’re 30 lines apart, because it’s a function that builds a complex query with joins and filters based on some function arguments: not easy.

I don’t really understand how .pyi will give me proper Find usages here. I haven’t used them a lot, though, and just rely on pycharm Find Usages or just rg.

Also, what about foreign keys? Let’s say it’s a filter on user__email__startswith. Are you able to find it when executing Find Usages on User.email field?

[dead]

I've wasted so many hours keeping API schemas and ORM models in sync on FastAPI projects.. how does it handle schema evolution on larger projects? Like if I rename a field or split a model into 2, does makemigrations detect that correctly or does it treat it as a drop + create?

Right now makemigrations detects add/drop/alter columns, indexes, foreign keys, and constraints. Rename is treated as drop + create, same as Django's default. Automatic rename detection is on the roadmap but not there yet. For now you'd edit the generated migration manually if you need a rename. Splitting a model into two would be detected as one dropped table and two new ones, so you'd also want to adjust that migration by hand to preserve data.

The coupling concern is valid but I think it's the right trade-off for most CRUD apps. In practice, 80% of your endpoints are thin wrappers around your models anyway. The remaining 20% where you need separate request/response schemas — you just write those by hand.

The auto-generated admin panel is a nice touch. That alone saves days on internal tooling.

Well said, that's pretty much how I see it too. Thanks!

Thanks for working on this mr_Fatalyst, this looks very promising. I'll keep an eye on this. Can't wait to dump the mess that is SQLAlchemy. Btw, you're getting a lot of stars on GitHub right now ;)

Thanks, appreciate the support (^_^)!

This looks great, and like it could address a need in ecosystem. Also, the admin dashboard is such a great feature of django, nice job building it from the get-go.

Thanks, appreciate it! Felt wrong to ship an ORM without one.

I think Prisma does type-safe ORM really well on the typescript side, and was sad it doesn't seem to be super supported in python. This feels sort of similar and makes a lot of sense!

Thanks! Haven't used Prisma much myself, but glad the approach resonates.

This is great, you’ve taken the pieces of Django I’m most envious of (the amazing ORM and Admin panel), and made them available separately. Timing couldn’t be better just as I’m starting to use Quart in earnest.

Thanks! The admin panel supports Quart out of the box, so you should be good to go.

Lowkey hope this replaces Django ORM

Thanks! Not really trying to replace Django ORM though, it's great at what it does. Just trying to build the ORM I'd personally want to use in 2026.

Is it possible to initialise a fresh database without manually running commands to run migrations? I'm not seeing anything in the docs to do so.

Right now it's CLI-based: 'oxyde migrate'. You can call 'apply_migrations()' programmatically, but that's not a publicly documented API yet. Good point though, worth adding.

This sounds great and there's a real gap in the ecosystem for a tool like this. https://sqlmodel.tiangolo.com/ looked promising but it's actually worse than useless because if you add it to your Pydantic models, it disables all validation: https://github.com/fastapi/sqlmodel/issues/52

Thanks! Yeah, that SQLModel issue is actually one of the things that pushed me to build this. In Oxyde the models are just Pydantic BaseModel subclasses, so validation always works, both on the way in and on the way out via model_validate().

interesting! the django filter syntax is a nice touch, i've always found it pretty intuitive. i've bounced between django and sqlalchemy, and it makes sense that a tighter integration like this could be faster for simpler apps.

Coupling together these things is short-sighted, but I get that in simple CRUD backends they really do sometimes stay the same for a long time. As long as there is an easy and obvious way out, then it's probably fine.

The big problem with ActiveRecord style ORMs, though, is the big ball of mud you end up with when anything from anywhere in the code can call `save()` on an object at any time to serialise and persist it in the db. It requires a constant vigilance to stop this happening, but many people don't even try in the first place.

What would be ideal is to have an automatically generated DTO-type object that you can pass to other parts of the code that shouldn't be calling `save()` themselves. It could be a "real" object, or just annotated using a Protocol, such that calling `save()` would be a type error. Django models unfortunately don't work well with Protocols; mypy isn't able to detect them as supporting the Protocol (see: https://github.com/python/mypy/issues/5481). But having an automatically generated DTO could work too.

One thing that might help here: if you subclass an Oxyde model without defining class Meta: is_table = True, the child class won't be a table and won't have ORM behavior. So you can inherit the fields and validation but without save()/delete(). Not exactly a Protocol-based approach, but it gives you a clean read-only DTO derived from the same model.

Cool, that does sound like what I was after.

To expand a bit on this, what I'm thinking about is a modular monolith architecture. It's also a pragmatic approach where you don't need to split into separate (micro)services yet, but you still want things to be modular and able to split later if need be.

While things are still in the same monolith there's no point actually doing the serialise/deserialise step to enable integration between modules, so you can just have modules call each others services directly. Having the automatic DTO means in a service you could just do something like:

    def get_all() -> Iterable[ModelDTO]:
       for obj in Model.objects.as_dtos():
          yield obj

This same service could then be used in the router which would perform the extra serialisation step which would, of course, still work fine on the very same DTOs.

I tend to find the approach of "clean" domain model (using e.g. attrs or dataclasses), SQLAlchemy for db persistence (in classic mapping mode), and serialisation (using e.g. cattrs) a more elegant architecture, as opposed to shipping serialisation and persistence around with every object. But I know people struggle with such a rigid up-front architecture and most prefer Django, so I'm always looking for a pragmatic middle ground.

The way I see it, having everything as Pydantic makes this natural. Your DB model, your request schema, your response DTO are all BaseModels. Converting between them is just model_dump() and model_validate(), or plain inheritance. No adapters, no mapping layers. So when you need to split things apart, it's straightforward rather than painful.
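As a concrete illustration of that "one contract" idea (the model names here are made up, not from Oxyde's docs), converting between a DB-shaped model and a response DTO really is just a dump and a re-validate:

```python
# Sketch of "everything speaks Pydantic": the DB-shaped model and the
# response DTO are both BaseModels, so converting is dump + validate.
# User/UserDTO are invented names for illustration.
from pydantic import BaseModel

class User(BaseModel):             # stand-in for a DB-backed model
    id: int
    name: str
    password_hash: str

class UserDTO(BaseModel):          # response shape: same contract, fewer fields
    id: int
    name: str

row = User(id=1, name="Ada", password_hash="secret")
dto = UserDTO.model_validate(row.model_dump())   # extra fields are dropped by default
assert dto.model_dump() == {"id": 1, "name": "Ada"}
```

No mapping layer is involved; Pydantic v2's default of ignoring extra fields does the narrowing.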

We need more creative names for rust packages because this is going to cause confusion

There's already Oxide computers https://oxide.computer/ and Oxc the JS linter/formatter https://oxc.rs/.

Ruby was forward thinking enough to give libraries bizarre pet names that had no obvious connection to their functionality.

Pundit, Capybara, Bullet, Grape, Faraday...

Yeah, we're running out of ways to spell oxide (^_^)

1. AI Slop Post

2. """You define Pydantic models for your API, then define separate ORM models for your database, then write converters between them.""" So you could've written something that let you convert between two. That would not warrant a whole new ORM.

3. """But infrastructure stuff like SQL generation, connection pooling, and row serialization is where a systems language makes sense."""

This makes no sense to me (except row serialization - maybe, but you're incurring a messagepack overhead instead). Unnecessary native dependency.

1. We're all AI agents in a simulation anyway...

2. A converter still means maintaining two model systems. The point was to not have two in the first place.

3. MessagePack overhead is negligible compared to actual DB round-trips. And the Rust core isn't just SQL generation, it bundles the full driver stack (sqlx), pooling, and streaming, so you don't need asyncpg/aiosqlite as separate dependencies.

This is great. Are there any live sites running it in production?

Django + async is a nightmare so this is a very welcome project.

Thanks! Not yet, it's still v0.5 and the API hasn't fully stabilized. But it's getting there.

Didn't the committee agree that ORMs were a mistake and a thing of the past?

Must have missed that meeting. ORMs are not for everything, but for CRUD-heavy apps with validation they save a lot of boilerplate. And there's always execute_raw() for when you need to go off-script.

If there's an Open Source application that you can show me as a good example of ORM usage, I'd be interested in seeing it as a steelman argument for them.

For Oxyde specifically, it's still a young project, so the best public examples I have are the FastAPI tutorial (https://github.com/mr-fatalyst/fastapi-oxyde-example) and the admin panel examples (https://github.com/mr-fatalyst/oxyde-admin). Bigger real-world showcases will come with time. But in general, ORMs pay for themselves when you have lots of models, relations, and standard CRUD with validation. The moment you hand-write the same INSERT/UPDATE/SELECT with joins for every endpoint, it adds up fast.

Here's OpenEdx extensively using ORM: https://github.com/openedx/openedx-platform

[dead]

Where ORMs are clearly weak is in generating suboptimal queries and making it too easy to create N+1 issues. My first introduction to ORMs was Ruby on Rails. You would rely on New Relic to identify performance issues and then fix them.

With solid AGENTS.md / CLAUDE.md, I do not think this would happen as much anymore in new code. So then it is just a matter of style preference (ORM vs whatever else).

That's exactly why Oxyde has no lazy loading at all. If you don't call .join() or .prefetch(), related data simply won't be there. N+1 is impossible by design, not by discipline.

Micro-ORMs (mapping to parameters and from columns) are generally fine, last I read. It is the general aversion to writing a SELECT that is suspect.

> But infrastructure stuff like SQL generation, connection pooling, and row serialization is where a systems language makes sense.

not really, what makes sense is being JIT-able and friendly to PyPy.

> Type safety was a big motivation.

> https://oxyde.fatalyst.dev/latest/guide/expressions/#basic-u...

> F("views") + 1

If your typed query sub-language can't avoid stringly references to the field names already defined by the schema objects, then it's the lost battle already.

The Rust core is not just about speed. It bundles native database drivers (sqlx), connection pooling, streaming serialization. It's more about the full IO stack than just making Python faster. On F("views"), fair point. It's a conscious trade-off for now. The .pyi stubs cover filter(), create(), and other query methods, but F() is still stringly-typed. Room for improvement there.

> It's more about the full IO stack than just making Python faster.

Does it mean that your db adapter isn't necessarily DBAPI (PEP-249) compliant? That is, it could be that DBAPI exception hierarchy isn't respected, so that middlewares that expect to work across the stack and catch all DB-related issues, may not work if the underlying DB access stack is using your drivers?

> but F() is still stringly-typed. Room for improvement there.

Yeah, I'm pretty sure F() isn't needed. You can look at how sqlalchemy implements combinator-style API around field attributes:

    import sqlalchemy as sa
    from sqlalchemy.orm import DeclarativeBase


    class Base(DeclarativeBase):
        pass

    class Stats(Base):
        __tablename__ = "stats"

        id = sa.Column(sa.Integer, primary_key=True)
        views = sa.Column(sa.Integer, nullable=False)

    print(Stats.views + 1)

The expression `Stats.views + 1` is self-contained, and can be re-used across queries. `Stats.views` is a smart object (https://docs.sqlalchemy.org/en/21/orm/internals.html#sqlalch...) with overloaded operators that make it re-usable in different contexts, such as result getters and query builders.

Right, it's not DBAPI compliant. The whole IO stack goes through Rust/sqlx, so PEP-249 doesn't apply. Oxyde has its own exception hierarchy (OxydeError, IntegrityError, NotFoundError, etc.). In practice most people catch ORM-level exceptions rather than DBAPI ones, but fair to call out.

On F(), good point. The descriptor approach is something I've been thinking about. Definitely on the radar.

[deleted]

[dead]

[dead]

Oxyde looks like a solid project. I've been curious if there are any benchmarks comparing it to SQLAlchemy's async setup. Specifically, I'm interested in how much of a performance bump that Rust core actually gives you when things get busy.

Yep, there's a full comparison including SQLAlchemy async: https://oxyde.fatalyst.dev/latest/advanced/benchmarks/