wait we're not in the nosql era anymore?

dynamo and mongo are huge, redis and kafka (and their clones) are ubiquitous, etc etc

> wait we're not in the nosql era anymore?

Kinda. It turned out that, for the vast majority of users, a single Postgres instance on a reasonably large host is more than enough. Perhaps with a read replica as a hot standby.

You can easily get 1 million transactions per second from it (simple ones, granted). So why bother with NoSQL?

> redis and kafka (and their clones) are ubiquitous, etc etc

That's a bit different. Kafka is a message queue, and Redis is mostly used as a cache.

We’re not in the NoSQL era anymore because the prevailing marketing and “thought leadership” isn’t pitching these things _instead of_ a SQL database. They’re now _parts_ of a system, of which SQL DBs are still a very big part.

Oh God people are still using Mongo in production? Why?

Kafka exists but is deeply obsolete and mostly marginalized outside of things with dependencies on the weird way it works (Debezium, etc)

I've always liked Redis but choosing it as a core tech on a new product in the last, say, 6 years is basically malpractice? 10 if you're uncharitable.

The thing these all have in common is having their economics and ergonomics absolutely shattered by SSDs and cluster-virtualization-by-default (i.e. cloud and on-prem pseudo-cloud). They're just artifacts of a very narrow window of history where a rack of big-ram servers was a reasonable way of pairing storage IOPS to network bandwidth.

Dynamo is and always was niche. Thriving in its niche, but a specialized tool for specialized jobs.

I work for a database company and of my ~100 customer meetings last year, only one of the notes mentions Mongo as software they use in production. Maybe it’s a different world or something, idk, but I don’t understand the use case.

If I’m ingesting unstructured data for search or “parse it later” purposes, I’ll choose OpenSearch (elastic). Otherwise I’m going PG by default and if I need analytics I’ll use Parquet or Delta and pick the query engine based on requirements.

I honestly cannot think of a use case where Mongo is the appropriate solution.

What's the better kafka/redis? For Mongo, I know you can just use your favorite relational tool with JSON support if needed (PG/MySQL)
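The JSON-in-relational pattern mentioned above can be sketched with Python's stdlib sqlite3 (the table and column names here are made up for illustration; Postgres `jsonb` and MySQL's JSON functions do the same thing with different syntax):

```python
import sqlite3

# In-memory DB standing in for PG/MySQL; json_extract plays the role of
# Postgres's jsonb operators or MySQL's JSON_EXTRACT.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, doc TEXT)")
con.execute(
    "INSERT INTO events (doc) VALUES (?)",
    ('{"user": "alice", "action": "login", "meta": {"ip": "10.0.0.1"}}',),
)

# Query inside the document, no up-front schema for the payload needed.
row = con.execute(
    "SELECT json_extract(doc, '$.user'), json_extract(doc, '$.meta.ip') "
    "FROM events WHERE json_extract(doc, '$.action') = 'login'"
).fetchone()
print(row)  # ('alice', '10.0.0.1')
```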

If you're already built around Redis I'd just keep using it, but if you're doing new development there's not so much a single drop-in replacement as a substantially better alternative for any given feature (and not particularly any advantage to having all your data in the "same Redis instance"). That said, 90+% of the time the answer is probably "transactional SQL database" or "message queue"
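For the cache feature specifically, here's a minimal sketch of what "you probably didn't need Redis for this" looks like: an in-process TTL cache (the `TTLCache` class is invented for illustration; it has no eviction policy and no cross-process sharing, which is often all a single-box service actually used Redis for):

```python
import time

class TTLCache:
    """Tiny in-process stand-in for the Redis cache use case (hypothetical sketch)."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, object]] = {}

    def set(self, key: str, value: object) -> None:
        # Record an absolute expiry deadline alongside the value.
        self._store[key] = (time.monotonic() + self.ttl, value)

    def get(self, key: str, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        expires, value = entry
        if time.monotonic() >= expires:
            del self._store[key]  # lazy expiry on read
            return default
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("session:42", {"user": "alice"})
print(cache.get("session:42"))  # {'user': 'alice'}
time.sleep(0.06)
print(cache.get("session:42"))  # None (expired)
```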

For Kafka, the answer is probably an object store, a message queue, a specialized logging system, an ordinary transactional database table, or whatever mechanism your chosen analytics DB uses for bulk input (probably S3 or equivalent these days). Or maybe just a REST interface in front of a filesystem. Unless of course you truly need to interface with a Kafka consumer/producer in which case you’re stuck with it (the actual reason I've seen for every Kafka deployment I've personally witnessed in recent history)
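The "ordinary transactional database table" option above can be sketched as an append-only log with consumer-managed offsets (sqlite3 as a stand-in, all names invented; a real setup would add retention, batching, and durable offset storage):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Monotonic rowid plays the role of a Kafka partition offset.
con.execute(
    "CREATE TABLE log (offset_ INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)"
)

def produce(payload: str) -> None:
    with con:  # transactional append, the "produce" side
        con.execute("INSERT INTO log (payload) VALUES (?)", (payload,))

def consume(after_offset: int, limit: int = 100):
    """Poll records past an offset the consumer tracks itself, the "poll" side."""
    return con.execute(
        "SELECT offset_, payload FROM log WHERE offset_ > ? ORDER BY offset_ LIMIT ?",
        (after_offset, limit),
    ).fetchall()

produce("order created")
produce("order shipped")
batch = consume(after_offset=0)
print(batch)  # [(1, 'order created'), (2, 'order shipped')]
last_offset = batch[-1][0]  # consumer persists this between polls
```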

> What's the better kafka/redis?

If you are going to leverage caching I’d use the OSS Valkey over Redis. Based on the company’s past behavior, Redis is dead to me now.
