But in this case, it is like saying "You don't need a fuel truck. You can transport 9,000 gallons of gasoline between cities by gathering 9,000 1-gallon milk jugs and filling each, then getting 4,500 volunteers to each carry 2 gallons and walk the entire distance on foot."
In this case, you do just need a single fuel truck. That's what it was built for. Avoiding a purpose-built tool and reproducing the same result by other means is what's actually wasteful. You don't need 288 cores to achieve 243,000 messages/second. You can get that kind of throughput from a Kafka-compatible service on a laptop; a rough way to check that on your own hardware is sketched below.
[Disclosure: I work for Redpanda]
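If you want to sanity-check that claim yourself, a producer-side measurement is only a few lines. A rough sketch, assuming a Kafka-compatible broker (Redpanda, Kafka, whatever) already listening on localhost:9092, the confluent-kafka Python client installed, and a "bench" topic created; the message size and count are illustrative, not a real benchmark:

    # Rough throughput smoke test against any Kafka-compatible broker.
    # Assumes: broker on localhost:9092, `pip install confluent-kafka`,
    # and a topic named "bench" already created.
    import time
    from confluent_kafka import Producer

    producer = Producer({"bootstrap.servers": "localhost:9092"})

    N = 1_000_000
    payload = b"x" * 100  # 100-byte messages; adjust to match your workload

    start = time.time()
    for _ in range(N):
        producer.poll(0)  # service delivery callbacks so the local queue drains
        try:
            producer.produce("bench", payload)
        except BufferError:
            producer.poll(1)  # local queue full; give it up to 1s to drain
            producer.produce("bench", payload)
    producer.flush()  # block until every message is acknowledged
    elapsed = time.time() - start

    print(f"{N / elapsed:,.0f} messages/second")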
I'll push the metaphor a bit: I think the point is that if you have a fleet of vehicles you want to fuel, go ahead, get a fuel truck, and bite off that expense. However, if you only have 1 or 2, a couple of jerry cans plus the pickup truck you probably already have is sufficient.
Getting a 288-core machine might be easier than setting up Kafka; I'm guessing that it would be a couple of weeks of work to learn enough to install Kafka the first time. Installing Postgres is trivial.
"Lots of the team knows Postgres really well, nobody knows Kafka at all yet" is also an underrated factor in making choices. "Kafka was the ideal technical choice but we screwed up the implementation through well-intentioned inexperience" being an all too plausible outcome.
Indeed, I've seen this happen firsthand, where there was only one guy who really "knew" Kafka, and it was too big a job for just him. In that case it was fine until he left the company, and then it became a massive albatross and a major pain point. In another case, the eng team didn't have anyone who really "knew" Kafka but used a managed service, thinking it would be fine. It was, until it wasn't, and switching away is not a light lift, nor is mass-educating the dev team.
Kafka et al definitely have their place, but I think most people would be much better off reaching for a simpler queue system (or for some things, just using Postgres) unless you really need the advanced features.
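For anyone who hasn't seen the "just use Postgres" version of this: the usual pattern is a plain jobs table drained with SELECT ... FOR UPDATE SKIP LOCKED, which lets multiple workers pull concurrently without double-processing. A minimal sketch (table, column, and connection names are made up for illustration; assumes psycopg2):

    # Minimal Postgres-as-a-queue sketch using SELECT ... FOR UPDATE SKIP LOCKED.
    # Assumes `pip install psycopg2-binary` and a reachable Postgres instance.
    import psycopg2

    conn = psycopg2.connect("dbname=app user=app")  # illustrative DSN

    with conn, conn.cursor() as cur:  # `with conn` commits on success
        cur.execute("""
            CREATE TABLE IF NOT EXISTS jobs (
                id      bigserial PRIMARY KEY,
                payload text NOT NULL,
                done    boolean NOT NULL DEFAULT false
            )
        """)

    def enqueue(payload):
        with conn, conn.cursor() as cur:
            cur.execute("INSERT INTO jobs (payload) VALUES (%s)", (payload,))

    def dequeue_one():
        # SKIP LOCKED means concurrent workers claim different rows
        # instead of blocking on each other's locks.
        with conn, conn.cursor() as cur:
            cur.execute("""
                SELECT id, payload FROM jobs
                WHERE NOT done
                ORDER BY id
                FOR UPDATE SKIP LOCKED
                LIMIT 1
            """)
            row = cur.fetchone()
            if row is None:
                return None  # empty, or every pending row is claimed
            job_id, payload = row
            # ... do the work here, inside the transaction ...
            cur.execute("UPDATE jobs SET done = true WHERE id = %s", (job_id,))
            return payload

If the worker dies mid-job, the transaction rolls back and the row becomes visible to the next worker again, so plain transactions give you at-least-once processing for free.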
I'm wondering why there wasn't any push for the Kafka guy to share his knowledge within his team or with other teams?
Multiple factors (none of them a good excuse, just reality):
* Lack of interest from other team members, which translated into doing only what they considered a sufficient minimum of knowledge transfer
* An (unwise) attitude of "it's already set up, configured, and terraformed, so we can just acquire that knowledge if and when it's needed"
* The Kafka guy left a lot faster than anybody expected, leaving little time for handover and practically no documentation
* The rest of the team was already overwhelmed with other responsibilities and didn't have much bandwidth available
* Nobody wanted to be the person (or people) who ended up "owning" it, so there was a perverse incentive not to learn it
Interesting, thanks!
This is the crux of my point.
Postgres is the solution in question in the article because I simply assume the majority of companies will start with Postgres as their first piece of infra, and that is often the case. If not - MySQL, SQLite, whatever. Just optimize for the thing you know and see if it can handle your use case (often you'll be surprised).
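And "see if it can handle your use case" can literally be a ten-line smoke test before anyone argues architecture. A hypothetical sketch (names and counts are illustrative; assumes psycopg2 and a local Postgres):

    # Quick "can Postgres keep up with my event rate?" smoke test.
    # Assumes `pip install psycopg2-binary` and a reachable Postgres instance.
    import time
    import psycopg2

    conn = psycopg2.connect("dbname=app user=app")  # illustrative DSN
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS events (id bigserial PRIMARY KEY, body text)")
    conn.commit()

    N = 100_000
    start = time.time()
    for i in range(N):
        cur.execute("INSERT INTO events (body) VALUES (%s)", (f"event-{i}",))
    conn.commit()  # one transaction; COPY or batched inserts would be faster still
    print(f"{N / (time.time() - start):,.0f} inserts/second")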
Just use Strimzi if you're in a K8s world (disclosure: I used to work on Strimzi for RH, but I still think it's far better than Helm charts or fully self-managed, and far cheaper than fully managed).
Thanks! I didn't know about Strimzi!
Even though I'm a few years on from Red Hat, I still really recommend Strimzi. I think the best way to describe it is "a sorta managed Kafka". It'll make things that are hard in self-managed Kafka (like rolling upgrades) easy as.
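To make "sorta managed" concrete: you install the Strimzi operator once, then declare the cluster as a Kubernetes custom resource and let the operator reconcile it, including rolling restarts when you bump the version. Roughly like this, going from memory of Strimzi's published ephemeral example, so treat the exact fields as version-dependent:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        replicas: 3
        listeners:
          - name: plain
            port: 9092
            type: internal
            tls: false
        storage:
          type: ephemeral   # use persistent-claim storage for anything real
      zookeeper:            # newer Strimzi/Kafka versions use KRaft node pools instead
        replicas: 3
        storage:
          type: ephemeral
      entityOperator:
        topicOperator: {}
        userOperator: {}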
The only thing that might take "weeks" is procrastination. Presuming absolutely no background other than general data engineering, a decent beginner online course in Kafka (or Redpanda) will run about 1-2 hours.
You should be able to install within minutes.
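For what it's worth, the modern ZooKeeper-free (KRaft) single-node startup is just a few commands. From memory of the official 3.x quickstart, so exact script paths may differ in your version:

    # Single-node Kafka in KRaft mode; no ZooKeeper involved.
    KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
    bin/kafka-storage.sh format -t "$KAFKA_CLUSTER_ID" -c config/kraft/server.properties
    bin/kafka-server-start.sh config/kraft/server.properties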
I mean, setting up Zookeeper, tweaking the kernel settings, configuring the hardware, the kind of stuff mentioned in guides like https://medium.com/@ankurrana/things-nobody-will-tell-you-se... and https://dungeonengineering.com/the-kafkaesque-nightmare-of-m.... Apparently you can do without Zookeeper now, but that's another choice to make, possibly doing careful experiments with both choices to see what's better. Much more discussion in https://news.ycombinator.com/item?id=37036291.
None of this applies to Redpanda.
True. Redpanda does not use Zookeeper.
Yet to also be fair to the Kafka folks: Zookeeper is not just non-default, it's gone entirely as of Apache Kafka 4.0, released in March 2025:
"Kafka 4.0's completed transition to KRaft eliminates ZooKeeper (KIP-500), making clusters easier to operate at any scale."
Source: https://developer.confluent.io/newsletter/introducing-apache...
Right, I was talking about installing Kafka, not installing Redpanda. Redpanda may be perfectly fine software, but bringing it up in that context is a bit apples-and-oranges since it's not open-source: https://news.ycombinator.com/item?id=45748426
Good on you for being fair in this discussion :)