For me the killer feature of Kafka was the ability to set the offset independently for each consumer.

In my company, most of our topics need to be consumed by more than one application/team, so this feature is a must-have. Also, the ability to move the offset backwards or forwards programmatically has been a life saver many times.

Does Postgres support this functionality for its queues?

Isn't it just a matter of having each consumer use their own offset? I mean if the queue table is sequentially or time-indexed, the consumer just provides a smaller/earlier key to accomplish the offset? (Maybe I'm missing something here?)
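
Something like this is what I have in mind: a rough sketch in Python with psycopg2, assuming a hypothetical append-only events table with a bigserial id and a consumer_offsets table keyed by consumer name:

```python
import psycopg2

# Each consumer tracks its own position in the log; nothing stops you from
# resetting last_id to replay history. Table and column names are made up.
conn = psycopg2.connect("dbname=app")

def poll_events(consumer_name, batch_size=100):
    with conn, conn.cursor() as cur:
        # Look up this consumer's private offset (0 if it has never run).
        cur.execute(
            "SELECT last_id FROM consumer_offsets WHERE consumer = %s",
            (consumer_name,),
        )
        row = cur.fetchone()
        last_id = row[0] if row else 0

        # Read strictly after the stored offset, in insertion order.
        cur.execute(
            "SELECT id, payload FROM events WHERE id > %s ORDER BY id LIMIT %s",
            (last_id, batch_size),
        )
        events = cur.fetchall()

        if events:
            # Advance only this consumer's offset; others are unaffected.
            cur.execute(
                "INSERT INTO consumer_offsets (consumer, last_id) VALUES (%s, %s) "
                "ON CONFLICT (consumer) DO UPDATE SET last_id = EXCLUDED.last_id",
                (consumer_name, events[-1][0]),
            )
        return events
```

The one subtlety I'm aware of is that with concurrent writers a bigserial can become visible out of order, so a plain id > last_id scan can occasionally skip a row that commits late; that's the kind of thing Kafka's log gives you for free.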

Correct, offsets and sharding aren't magic. And partitions in Kafka are user-defined, just like they would be for PostgreSQL.

Kafka allows you to have a consumer group: you can have multiple workers processing messages in parallel, and if they all use the same group ID, the messages will be sharded across all the workers in that group, so each message is handled by exactly one worker in the group (with all the usual caveats of guaranteed-processed-exactly-once queues). Other consumers can use different group IDs, and each of those groups will also get every single message exactly once.

So if you want an individual offset, then yes, the consumer could just maintain their own… however, if you want a group’s offset, you have to do something else.
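
A rough sketch of both modes with the kafka-python client (topic, group id, broker address, and offset below are all made up):

```python
from kafka import KafkaConsumer, TopicPartition

# Workers that share a group_id split the topic's partitions between them,
# so each message is handled by exactly one worker in that group. A consumer
# with a different group_id gets its own independent copy of every message.
group_consumer = KafkaConsumer(
    "orders",                               # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="billing-service",             # hypothetical group
)

# An independent consumer that manages its own position: assign a partition
# explicitly and seek wherever you like (rewind, skip ahead, etc.).
solo_consumer = KafkaConsumer(bootstrap_servers="localhost:9092")
tp = TopicPartition("orders", 0)
solo_consumer.assign([tp])
solo_consumer.seek(tp, 1000)                # jump this consumer back to offset 1000

for record in solo_consumer:
    print(record.offset, record.value)
```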

Yes.

Is a queuing system baked into Postgres? Or are there client libraries that make it look like one?

And do these abstractions allow for arbitrarily moving the offset for each consumer independently?

If you're writing your own queuing system using pg for persistence obviously you can architect it however you want.
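
The pattern most homegrown versions end up with is SELECT ... FOR UPDATE SKIP LOCKED, so concurrent workers never claim the same row. A minimal sketch in Python with psycopg2, with hypothetical table and column names:

```python
import psycopg2

conn = psycopg2.connect("dbname=app")

def claim_and_run_one_job():
    # The transaction (opened by `with conn`) holds the row lock while we
    # work; SKIP LOCKED lets other workers grab different rows meanwhile.
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT id, payload
            FROM jobs
            WHERE status = 'pending'
            ORDER BY id
            LIMIT 1
            FOR UPDATE SKIP LOCKED
            """
        )
        job = cur.fetchone()
        if job is None:
            return False                      # queue empty (or all rows claimed)
        job_id, payload = job
        print("processing", job_id, payload)  # the actual work goes here
        cur.execute("UPDATE jobs SET status = 'done' WHERE id = %s", (job_id,))
        return True
```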

The article basically states that unless you need a lot of throughput, you probably don't need Kafka. (My interpretation extends to say:) You probably don't need offsets, because you don't need multi-threaded support, because you don't need multiple threads.

I don't know what kind of native support PG has for queue management; the assumption here is that a basic "kill the task as you see it" approach is usually good enough, and the simplicity of writing and running a script far outweighs the development, infrastructure, and devops costs of Kafka.

But obviously, whether you need stuff to happen in 15 seconds instead of 5 minutes, or in 5 minutes instead of an hour, is a business decision, along with understanding the growth pattern of the workload you happen to have.

PG has several queue management extensions and I’m working my way through trying them out.

Here is one: https://pgmq.github.io/pgmq/

Some others: https://github.com/dhamaniasad/awesome-postgres
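
For a quick taste of what that looks like, here's a minimal sketch of pgmq's SQL API (create/send/read/archive) driven from Python; the function names follow the pgmq docs, but double-check them against whatever release you install:

```python
import psycopg2

conn = psycopg2.connect("dbname=app")       # assumes the pgmq extension is installed
conn.autocommit = True

with conn.cursor() as cur:
    cur.execute("SELECT pgmq.create('email_jobs')")          # create a queue
    cur.execute(
        "SELECT * FROM pgmq.send('email_jobs', %s::jsonb)",  # enqueue a message
        ('{"to": "a@example.com"}',),
    )

    # Read up to one message, hiding it from other readers for 30 seconds.
    cur.execute("SELECT msg_id, message FROM pgmq.read('email_jobs', 30, 1)")
    for msg_id, message in cur.fetchall():
        print(msg_id, message)
        cur.execute("SELECT pgmq.archive('email_jobs', %s)", (msg_id,))  # done
```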

Most of my professional life I have considered Postgres folks to be pretty smart… while I, by chance, happened to go with MySQL, and it became the RDBMS I think in by default.

Digging into Postgres heavily recently has been okay, not much different than learning the tweaks for MSSQL, Oracle, or others. You just have to be willing to slow down a little for a bit and enjoy it instead of expecting to rush through everything.

pgmq looks cool, thanks for the link!

But it looks like a queue, which is a fundamentally different data structure from an event log, and Kafka is an event log.

They are very different use cases: work distribution vs. pub/sub.

The article talks about both use cases, assuming the reader is very familiar with the distinction.

Well in my workplace we need all of those things.