Well, it's impractical to try to handle messages individually in Kafka; it's designed to acknowledge entire batches (since it's a distributed append-only log). You can still do it, but the performance will be no better than an SQL database's.
That is the major difference: clients track their read offsets rather than the broker removing messages. There aren't really "listeners" in the sense of a pub-sub?
> There aren't really "listeners" in the sense of a pub-sub?
They are still "listeners"; it's just that events aren't pushed to the listener. Instead, consumers pull, and the API is designed for sheer throughput.
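For the curious, here's roughly what that looks like with the official Java client - a minimal sketch, assuming a local broker and a hypothetical topic named `events`. Nothing is pushed: you pull a batch whenever you're ready, and a single commit covers the whole batch (which is also why handling messages individually, as mentioned above, is impractical).

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("group.id", "example-group");           // assumption: hypothetical group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("enable.auto.commit", "false");         // we commit batches ourselves

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events")); // assumption: topic name
            while (true) {
                // Nothing is pushed to us: we pull a batch when we're ready for it.
                ConsumerRecords<String, String> batch = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : batch) {
                    process(record);
                }
                // One commit covers the whole batch - this is the closest thing
                // Kafka has to an "ack", and it's per-batch, not per-message.
                consumer.commitSync();
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.printf("offset=%d key=%s value=%s%n",
                record.offset(), record.key(), record.value());
    }
}
```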
Exactly. There's no concept in Kafka (yet...) of "acking" or DLQs. Kafka is very good at what it does by being deliberately stupid: it knows nothing about your messages, or about who has consumed them and who hasn't.
That was all deliberately pushed onto consumers to manage in order to achieve scale.
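Which means a "DLQ" in Kafka is just a convention the consumer implements itself. A rough sketch of one common pattern, assuming a hypothetical `events.DLQ` topic and a `process()` that throws on bad messages:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ConsumerSideDlq {
    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        consumerProps.put("group.id", "dlq-example");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(List.of("events"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    try {
                        process(record);
                    } catch (Exception e) {
                        // The broker has no idea this message "failed" - we
                        // route it to a dead-letter topic ourselves.
                        producer.send(new ProducerRecord<>("events.DLQ", record.key(), record.value()));
                    }
                }
                consumer.commitSync(); // commit either way; the DLQ copy preserves the failure
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        if (record.value() == null) throw new IllegalArgumentException("bad message");
        System.out.println(record.value());
    }
}
```

To the broker, the dead-letter write is just another produce to another topic; no special machinery is involved.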
Kafka is the MongoDB of sequential storage: built for webscale and then widely adopted based on impressive single-metric numbers, without regard for actual fitness to purpose in smaller operations. Fortunately, it was always reliable enough.
I believe RabbitMQ is much more balanced and closer to what people expect from a high-level queueing/pub-sub system.
What do you mean, "no concept...of acking"? During consumption, you must either auto-commit offsets or manually commit them. If you don't, you'll get the same events over and over again.
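For anyone following along, the two modes are just configuration - an illustrative sketch (the interval value is an example, not a recommendation):

```java
import java.util.Properties;

public class CommitModes {
    public static void main(String[] args) {
        // Mode 1: auto-commit. The client periodically commits the offsets
        // returned by poll() in the background.
        Properties auto = new Properties();
        auto.put("enable.auto.commit", "true");
        auto.put("auto.commit.interval.ms", "5000"); // illustrative value

        // Mode 2: manual commit. Disable auto-commit and call
        // consumer.commitSync() (or commitAsync()) yourself, once the batch
        // has actually been processed.
        Properties manual = new Properties();
        manual.put("enable.auto.commit", "false");

        // Do neither, and the committed offset never advances: every restart
        // of the group resumes from the same position and replays the same
        // events, exactly as described above.
        System.out.println(auto + " / " + manual);
    }
}
```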
Or you can just store your next offset in a DB and tell the consumer to read from that offset. The Kafka offset storage topic is a convenient implementation of the most common usage pattern, but it's not mandatory to use - and again, the broker doesn't do anything with that information when serving you data.
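Something like this, with the Java client - a sketch assuming a single partition and hypothetical `loadOffsetFromDb`/`saveOffsetToDb` helpers; the broker just serves whatever offset you seek to:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ExternalOffsets {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // No group.id and no commits: Kafka's own offset storage is bypassed entirely.

        TopicPartition tp = new TopicPartition("events", 0); // assumption: one partition

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(List.of(tp));          // manual assignment, no group coordination
            consumer.seek(tp, loadOffsetFromDb()); // start wherever *we* say

            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    process(record);
                    // Ideally store this in the same DB transaction as the
                    // processing result, so the offset and the side effects
                    // move together.
                    saveOffsetToDb(record.offset() + 1); // next offset to read
                }
            }
        }
    }

    // Hypothetical helpers standing in for real DB access.
    private static long loadOffsetFromDb() { return 0L; }
    private static void saveOffsetToDb(long offset) { }
    private static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.value());
    }
}
```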
Acking in an MQ is very different: the broker tracks delivery state per message, not just a per-consumer offset.