I'm solidly in camp 2, the "common sense" camp that doesn't care about buzzwords.
That said, I don't consider running Kafka to be a headache. I work at a mid-sized company processing billions of Kafka events per day, and it's never been a problem, even locally where I'm only processing hundreds of events per day.
You set it up, forget about it, and it scales endlessly. You don't have to rewrite anything and it provides a nice separation layer between your system components.
When starting out, you can easily run Kafka, the DB, and the API on the same machine.
I also strongly believe it's not a headache.
Vendors frequently push that narrative so they can sell their own managed (or proprietary) solution. With a decent AI model (e.g., ChatGPT Pro), it's easier than ever to figure out best practices and conventions.
That being said, my point is more about the organizational overhead. Deploying Kafka still means you need to learn how it works, why it's good, its configs and API, how to debug it, how to set up observability, yada yada.
> processing billions of Kafka events per day
Except that the burden is on all clients to coordinate to avoid processing an event more than once, since Kafka is a brainless invention that just dumps data forever into a serial log.
I'm not sure what you're talking about.
Do you mean different consumers within the same consumer group? There's no technology out there that will guarantee exactly-once delivery; it's simply impossible in a world where networks aren't magically 100% reliable. SQS, RedPanda, RabbitMQ, NATS... you name it, your client will always need idempotency.
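For what it's worth, the idempotency part is usually just a dedup check in the consumer loop. A rough sketch with the confluent-kafka Python client (the topic, broker address, and in-memory `seen_ids` set are all placeholders; in practice you'd back the dedup with something durable like Redis or the destination DB):

```python
from confluent_kafka import Consumer

def process(payload: bytes) -> None:
    ...  # whatever side effect your service performs

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "order-processors",
    "enable.auto.commit": False,      # commit only after successful processing
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])

seen_ids = set()  # stand-in for a durable dedup store

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event_id = msg.key()  # assumes producers set a unique event ID as the key
        if event_id is not None and event_id in seen_ids:
            consumer.commit(message=msg)   # duplicate redelivery: just advance the offset
            continue
        process(msg.value())
        if event_id is not None:
            seen_ids.add(event_id)
        consumer.commit(message=msg)       # at-least-once delivery + dedup
finally:
    consumer.close()
```

The same shape applies with SQS or RabbitMQ; only the client library changes.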
That is called a 'consumer group', which has been part of Kafka for 15 years.
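Concretely (a minimal sketch, again with the confluent-kafka Python client and made-up topic/broker names): start two copies of this process with the same group.id and the broker assigns each partition to exactly one of them, so the clients don't have to hand-roll any coordination to avoid double-processing within the group.

```python
from confluent_kafka import Consumer

c = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "billing-workers",   # same group.id across processes = shared workload
    "auto.offset.reset": "earliest",
})
c.subscribe(["billing-events"])

while True:
    msg = c.poll(1.0)
    if msg is None or msg.error():
        continue
    # Each partition is owned by one group member at a time, so this record
    # isn't also being consumed by the other copy of the process.
    print(f"partition {msg.partition()} offset {msg.offset()}: {msg.value()}")
```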
The author is suggesting to avoid this solution and roll your own instead.