I disagree. They found that Postgres, without tuning, was easily fast enough on low-end hardware and came with the benefit of not deploying another service. Additionally, tuning it isn't really relevant here.

If the defaults are fine for a use case then unless I want to tune it for personal interest it’s either a poor use of my fun time or a poor use of my clients funds.

The default shared_buffers is 128MiB, not even 1% of the RAM on a typical machine today. A benchmark run with these settings is effectively crippling your hardware by making sure 99% of your available memory is ignored by Postgres. It's an invalid benchmark, unless Redis is similarly crippled.
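For anyone who wants to check this on their own install, a minimal sketch (the 4GB figure is just the common "~25% of RAM on a dedicated box" rule of thumb, not a recommendation):

```sql
-- Check the shipped default (128MB on a stock install):
SHOW shared_buffers;

-- A common starting point on a 16GB machine; takes effect after a restart:
ALTER SYSTEM SET shared_buffers = '4GB';
```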

> If the defaults are fine for a use case then unless I want to tune it for personal interest it’s either a poor use of my fun time or a poor use of my clients funds.

It doesn't matter if you've crippled the benchmark if the performance of both options still exceeds your expectations. Not all of us are trying eek out every drop of performance

And, well, if you are then you can ignore the entire post because Redis offers better perf than postgres and you'd use that. It's that simple.

> Not all of us are trying eek out every drop of performance

You probably mean "eke out". Unless the performance is particularly scary :)

Good point. Even crippled, Postgres was "good enough", so it doesn't change the overall message. Nonetheless, we should strive to run realistic and valid benchmarks, no?

wow, default 128MiB sounds so stupid

"If we don't need performance, we don't need caches" feels like a great broader takeaway here.

A cache being fast enough doesn’t mean no caching is relevant - I’m not sure why you’d equate the two.

Sometimes, a cache is all about reducing expense: I.e, free cache query vs expensive API query.
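That cost argument is easy to make concrete. A toy sketch in Python, where `fetch_rate` and its rate table are made up to stand in for a metered third-party API:

```python
import functools

api_calls = 0  # count of billable upstream requests

@functools.lru_cache(maxsize=None)
def fetch_rate(currency: str) -> float:
    """Stand-in for an expensive, metered API call (hypothetical)."""
    global api_calls
    api_calls += 1
    return {"EUR": 1.08, "GBP": 1.27}[currency]

for _ in range(1000):
    fetch_rate("EUR")  # 999 of these are free cache hits

print(api_calls)  # 1 billable call instead of 1000
```

Here the cache saves money even if the upstream call were instant.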

Sometimes people host software on a server they own or rent, the server is plenty fast, and it costs literally nothing to issue those queries at the scale on which they’re needed.

Yes, that is true, but the original poster said getting rid of caches was always a good idea, when in reality the answer (as usual with engineering) is “it depends.”

I see people downvoting this. To anyone who disagrees: we have YAGNI for a reason. If someone told me their performance was fine and they added caches anyway, I would give them the big hairy eyeball, because we already know cache invalidation is a PITA, correctness issues are easy to create, and now you have the performance of two different systems to manage.
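The correctness hazard is trivially easy to reproduce. A toy sketch, with in-memory dicts standing in for a real database and cache:

```python
db = {"price": 100}
cache = {}

def get_price():
    # cache-aside read with no invalidation on writes
    if "price" not in cache:
        cache["price"] = db["price"]
    return cache["price"]

get_price()         # warms the cache with 100
db["price"] = 120   # writer updates the DB but forgets the cache
print(get_price())  # still 100: the stale value wins until someone notices
```

Every write path now has to remember the cache exists, which is exactly the extra system to manage.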

Amazon actually moved away from caches for some parts of its system because consistent behavior is a feature. What happens if your cache has problems and its interaction with your primary store is slow? What if your cache has bugs or edge-case behavior? If you don't need it, you're just doing a bunch of extra work to keep things in sync.

> "If we don't need performance, we don't need caches" feels like a great broader takeaway here.

I don't think this holds true. Caches are used for reasons other than performance. For example, caches are used in some scenarios for stampede protection to mitigate DoS attacks.
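For the stampede case specifically, the usual trick is to let exactly one caller recompute an expired key while the rest wait on it. A hand-rolled in-process sketch (real deployments would reach for something like Redis locks or Go's singleflight; all names here are made up):

```python
import threading
import time

_cache = {}        # key -> (value, expiry timestamp)
_locks = {}        # key -> per-key recompute lock
_locks_guard = threading.Lock()

def get_with_stampede_protection(key, compute, ttl=60.0):
    """Cache-aside read where only one caller recomputes an expired key.

    Concurrent callers for the same key block on one lock, then see the
    fresh value, so an expiry never turns into N simultaneous backend hits.
    """
    entry = _cache.get(key)
    if entry and entry[1] > time.monotonic():
        return entry[0]
    with _locks_guard:
        lock = _locks.setdefault(key, threading.Lock())
    with lock:
        # Re-check: another thread may have refreshed while we waited.
        entry = _cache.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]
        value = compute()
        _cache[key] = (value, time.monotonic() + ttl)
        return value
```

Five concurrent readers of a cold key produce one call to `compute`, not five.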

Also, the impact of caches on performance is sometimes negative. With distributed caching, each get and put requires a network request. Even when those calls don't leave a data center, they cost far more than reading a variable from memory. I've already had the displeasure of stumbling upon a few scenarios where a cache was prescribed in a cargo-cult way, without any data backing up the assertion, and when we looked at traces it was evident that the bottleneck was actually the cache itself.
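The magnitude gap is easy to see from the local side alone. This sketch only times the in-process read; the round-trip figure in the comment is a typical same-DC ballpark, not something this code measures:

```python
import time

cache = {"user:42": {"name": "Ada"}}

def avg_ns(fn, n=200_000):
    """Average wall-clock cost of fn() in nanoseconds."""
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - t0) / n * 1e9

in_process = avg_ns(lambda: cache["user:42"])
print(f"in-process dict read: ~{in_process:.0f} ns")
# A same-datacenter cache round trip typically lands around 100-500 µs,
# i.e. thousands of times slower, before counting serialization.
```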

DoS is a performance problem: if your server were infinitely fast with infinite storage, it wouldn't be an issue.

> DoS is a performance problem

Not really. Running out of the resources needed to fulfill requests is an availability problem, not a performance one. Think of things such as exhausting a connection pool. More often than not, some components of a system can't scale horizontally.

It is actually a financial problem too. Servers stop working when the bill goes unpaid. Sad but true.

If my grandma had wheels she would be a car.

> They found that Postgres, without tuning, was easily fast enough on low level hardware

Is that production? When you bucket it into "low level" it sounds like a base case, but it really isn't.

In production you don't have local storage, RAM is being used for all kinds of other things, your CPU is only available in small slices, there are network effects, and many others.

> If the defaults are fine for a use case

Which I hope isn't the developer's edition of it works on my machine.