That argument does not hold when there is AWS serverless Postgres available, which costs almost nothing for low traffic and is vastly superior to self-hosting regarding observability, security, integration, backups, etc.

There is no reason to self-manage Postgres for dev environments.

https://aws.amazon.com/rds/aurora/serverless/

"which cost almost nothing for low traffic" you invented the retort "what about high traffic" within your own message. I don't even necessarily mean user traffic either. But if you constantly have to sync new records over (as could be the case in any kind of timeseries use-case) the internal traffic could rack up costs quickly.

"vastly superior to self hosting regarding observability" I'd suggest looking into the cnpg operator for Postgres on Kubernetes. The builtin metrics and official dashboard is vastly superior to what I get from Cloudwatch for my RDS clusters. And the backup mechanism using Barman for database snapshots and WAL backups is vastly superior to AWS DMS or AWS's disk snapshots which aren't portable to a system outside of AWS if you care about avoiding vendor lock-in.

This was true for Aurora Serverless v1, which scaled to zero but is no longer offered. v2 requires a minimum 0.5 ACU hourly commitment ($40+/mo).
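(At ~$0.12 per ACU-hour in us-east-1, 0.5 ACU x $0.12 x 730 hours/month comes to about $44/mo before storage and I/O.)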

V2 scales to zero as of last year.

https://aws.amazon.com/blogs/database/introducing-scaling-to...

It only scales down after a period of inactivity though - it's not pay-per-request like other serverless offerings. DSQL looks to be more cost-effective for small projects if you can deal with the deviations from Postgres.

Ah, good to know, I hadn't seen that v2 update. Looks like a minimum of 5 minutes of inactivity before auto-pause (i.e., scaling to zero), and any connection attempt (valid or not) resumes the DB.
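As a sketch, enabling it via the CLI looks something like this (the cluster identifier is hypothetical, and the cluster has to be on an engine version that supports scale-to-zero):

    aws rds modify-db-cluster \
      --db-cluster-identifier my-cluster \
      --serverless-v2-scaling-configuration \
          MinCapacity=0,MaxCapacity=2,SecondsUntilAutoPause=300

SecondsUntilAutoPause=300 is the 5-minute minimum mentioned above.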

Aurora Serverless requires provisioned compute - it was about $40/mo last time I checked.

The performance disparity is just insane.

Right now from Hetzner you can get a dedicated server with a 6c/12t Ryzen 5 3600 (Zen 2), 64 GB RAM, and 2x512 GB NVMe SSD for €37/mo.

Even if you just served files from disk, with no RAM caching, that could give 200k small files per second.

From RAM, and with 6 dedicated cores, the network will saturate long before you hit compute limits on any reasonably efficient web framework.
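For scale: on the 1 Gbit/s uplink these boxes typically ship with, ~5 KB responses cap out around 25k/s (125 MB/s divided by 5 KB), an order of magnitude below the 200k/s the disks alone could serve.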

Just use a Postgres container on a VM; cheap as chips, and you can do anything to 'em.
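Something like this is all it takes with the official image (password and volume name are placeholders):

    docker run -d --name pg \
      -e POSTGRES_PASSWORD=change-me \
      -v pgdata:/var/lib/postgresql/data \
      -p 5432:5432 \
      postgres:16

The named volume survives container recreation, and you get full superuser access to tune whatever you like.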