Periodic polling of a DB gets bad pretty quickly; queues are much better even at small scale.

But then a distributed queue is most likely not needed until you hit truly humongous scale.

Maybe in the past this was true, or if you’re using an inferior DB. I know firsthand that a Postgres table can work great as a queue for many millions of events per day, processed by thousands of workers polling it for work concurrently. With more than a few hundred concurrent pollers you might want a service, or at least a centralized connection pool, in front of it though.
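The pattern this commenter is describing is usually built on `FOR UPDATE SKIP LOCKED`, which lets concurrent pollers skip rows another worker has already locked instead of blocking on them. A minimal sketch, assuming a hypothetical `jobs` table (the schema and column names are illustrative, not from the thread):

```sql
-- Hypothetical queue table; names are illustrative.
CREATE TABLE jobs (
    id         bigserial PRIMARY KEY,
    payload    jsonb NOT NULL,
    status     text  NOT NULL DEFAULT 'pending',
    created_at timestamptz NOT NULL DEFAULT now()
);

-- Each polling worker claims one job. SKIP LOCKED means
-- concurrent workers pass over rows locked by someone else
-- rather than queuing up behind them.
UPDATE jobs
SET    status = 'running'
WHERE  id = (
    SELECT id
    FROM   jobs
    WHERE  status = 'pending'
    ORDER  BY created_at
    FOR UPDATE SKIP LOCKED
    LIMIT  1
)
RETURNING id, payload;
```

Without `SKIP LOCKED`, thousands of concurrent pollers would serialize on the same hot rows, which is largely why the naive version of this pattern has a bad reputation.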

Millions of events per day is still in the small-queue category in my book. Postgres LISTEN doesn't scale, and polling a hot database can suddenly get harder, since every consumed row leaves a dead tuple behind that has to be vacuumed away.

10 messages/s is only 864k/day. But in my testing (with Postgres 16) this doesn't scale that well when you need tens to hundreds of millions per day. Redis is much better than Postgres for that (for a simple queue), and beyond that Kafka is what I would choose if you're in the low few hundred million.

This "per hour" and "per day" business has to end. No one cares about "per day", and it makes it much harder to see the actual load being discussed. The thing that matters is "per second", so why not talk about exactly that? Load is something immediate; it's not a "per day" thing.

If someone is quoting per-day or per-month numbers, they're likely doing it to make the figures sound more impressive and to obscure how few X per second they actually handled. 11 million events per day sounds a whole lot more impressive than 128 events per second, but they're the same thing, and only the latter usually matters in these types of discussions.
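The conversion being argued for is just division by 86,400 seconds per day. A quick sanity check in Python, using the figures from the comments above:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86_400

def per_day_to_per_second(events_per_day: float) -> float:
    """Convert a daily event total to an average per-second rate."""
    return events_per_day / SECONDS_PER_DAY

def per_second_to_per_day(events_per_second: float) -> float:
    """Convert a sustained per-second rate to a daily total."""
    return events_per_second * SECONDS_PER_DAY

# A sustained 10 msg/s works out to 864k/day, and the
# "impressive" 11,059,200 events/day is a modest 128 events/s.
print(per_second_to_per_day(10))          # 864000
print(per_day_to_per_second(11_059_200))  # 128.0
```

Note these are averages; real traffic is bursty, so the peak per-second rate a system must absorb is usually well above the daily total divided by 86,400.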