There's no high-performance database that won't take all of your memory (up to the size of your data) if you let it.
That's because it's much, MUCH faster to do it that way, though if you can live with certain latency-for-throughput trade-offs, something like turbopuffer can do wonders for your costs.
MySQL doesn't eat up all 8 GB of my system when I query a table with indexed values; MongoDB seems to eat it all up.
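Though to be fair, a lot of that is just defaults: MySQL ships with its InnoDB buffer pool capped at 128 MB, so it only looks restrained until you raise the cap yourself, e.g. in my.cnf (a minimal sketch for a box dedicated to the database):

    # my.cnf -- InnoDB's in-memory page cache; defaults to 128M
    [mysqld]
    innodb_buffer_pool_size = 4G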
You paid a hundred bucks for that 8 GB of RAM; do you really want it to just sit there unused?
No, but my manager was wondering why our website was slowing to a crawl.
Is the DB on the same host as the web server?
It is more likely they did not leave enough overhead for the host operating system, which is a classic issue.
I don't really remember; to be fair, this was nearly 10 years ago. Some googling now does turn up a way to limit how much Mongo takes for data + index. I'm curious whether it would have been a smoother experience, assuming the setting was even available back then.
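For anyone else who lands here, the knob I found is the WiredTiger cache size (WiredTiger only became MongoDB's default engine around 3.2, so timing-wise it may genuinely not have existed for me). In mongod.conf it looks like:

    # mongod.conf -- cap WiredTiger's internal cache at 1 GB
    # (default: the larger of 50% of (RAM - 1 GB) or 256 MB)
    storage:
      wiredTiger:
        engineConfig:
          cacheSizeGB: 1

Note this caps only WiredTiger's internal cache; the mongod process still uses memory on top of it (connections, aggregations, plus the OS filesystem cache).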
If the data is smaller than RAM but every read still comes off disk, that's about the slowest it can possibly be; there's a reason most databases implement a buffer cache (which also makes writes insanely faster). But yeah, of all the databases I've tinkered with, MySQL is generally not a very good operational database.
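To illustrate the idea, a toy LRU page cache looks something like this. Pure sketch with made-up names (PAGE_SIZE, read_page, etc.), nothing like a real engine's implementation, but it shows both effects: repeat reads hit RAM instead of disk, and writes are buffered and flushed lazily instead of each paying a disk round trip.

    from collections import OrderedDict

    PAGE_SIZE = 4096  # hypothetical fixed page size; file assumed pre-sized

    class BufferCache:
        """Toy LRU page cache over a file opened in 'r+b' mode."""

        def __init__(self, capacity_pages, disk_file):
            self.capacity = capacity_pages
            self.disk = disk_file
            self.pages = OrderedDict()   # page_no -> bytearray, in LRU order
            self.dirty = set()           # pages modified but not yet on disk

        def _evict_if_full(self):
            while len(self.pages) >= self.capacity:
                page_no, data = self.pages.popitem(last=False)  # evict LRU
                if page_no in self.dirty:                       # write back
                    self.disk.seek(page_no * PAGE_SIZE)
                    self.disk.write(data)
                    self.dirty.discard(page_no)

        def read_page(self, page_no):
            if page_no in self.pages:                # hit: served at RAM speed
                self.pages.move_to_end(page_no)
                return self.pages[page_no]
            self._evict_if_full()
            self.disk.seek(page_no * PAGE_SIZE)      # miss: pay the disk cost once
            data = bytearray(self.disk.read(PAGE_SIZE))
            self.pages[page_no] = data
            return data

        def write_page(self, page_no, data):
            if page_no not in self.pages:
                self._evict_if_full()
            self.pages[page_no] = bytearray(data)    # buffer in RAM only
            self.pages.move_to_end(page_no)
            self.dirty.add(page_no)                  # flushed on evict or flush()

        def flush(self):
            for page_no in list(self.dirty):
                self.disk.seek(page_no * PAGE_SIZE)
                self.disk.write(self.pages[page_no])
            self.dirty.clear()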