You're paying people to do the role either way: if it's not dedicated staff, then it's taking time away from your application developers so they can play the role of underqualified architects, sysadmins, and security engineers.
From experience (because I used to do this), it's a lot less time than a self-hosted solution once you factor in the multiple services that need to be maintained.
As someone who has done both, I disagree; I find self-hosting, to a degree, much easier and much less complex.
Local reproducibility is easier, and performance is often much better.
It depends entirely on your use case. If all you need is a DB and Python/PHP/Node server behind Nginx then you can get away with that for a long time. Once you throw in a task runner, emails, queue systems, blob storage, user-uploaded content, etc. you can start running beyond your own ability or time to fix the inevitable problems.
As I pointed out above, you may be better served mixing and matching so you spend your time on the critical aspects but offload those other tasks to someone else.
Of course, I’m not sitting at your computer so I can’t tell you what’s right for you.
I mean, fair, we are of course offloading some of that: email being one of those, LLMs being another.
As for a task runner/queue, at least for us Postgres works for both cases.
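For context, the usual way to run a queue on plain Postgres is a jobs table that workers poll with SELECT ... FOR UPDATE SKIP LOCKED, so two workers can never claim the same row. A minimal sketch, assuming a hypothetical `jobs` table and psycopg2 (table, columns, and connection details are made up for illustration):

```python
# Minimal Postgres-backed job queue sketch. The "jobs" table and connection
# string are hypothetical; FOR UPDATE SKIP LOCKED keeps concurrent workers
# from double-claiming the same job.
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # placeholder DSN

def process(payload):
    ...  # your actual task logic goes here

def claim_and_run_one_job() -> bool:
    with conn:  # commit on success, roll back on exception
        with conn.cursor() as cur:
            cur.execute("""
                SELECT id, payload
                  FROM jobs
                 WHERE status = 'pending'
                 ORDER BY created_at
                 LIMIT 1
                 FOR UPDATE SKIP LOCKED
            """)
            row = cur.fetchone()
            if row is None:
                return False  # queue is empty
            job_id, payload = row
            process(payload)
            cur.execute("UPDATE jobs SET status = 'done' WHERE id = %s", (job_id,))
            return True
```

The nice part is that claiming a job and recording its result happen in one transaction, so a crashed worker simply releases its lock and the job becomes claimable again.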
We also self-host S3-compatible storage and allow user-uploaded content within strict limits.
Yeah, and nobody is looking at the other side of this. There just are not a lot of good DBA/sysop types who even want to work for some non-tech SMB. So this either gets outsourced to the cloud, or some junior dev or desktop support guy hacks it together. And then who knows if the backups are even working.
Fact is, a lot of these companies are on the cloud because their internal IT was a total fail.
If they just paid half of the markup they currently pay for the cloud, I'm sure they'd be swimming in qualified candidates.
Our AWS spend is something like $160/month. Want to come build bare metal database infrastructure for us for $3/day?
When you need to scale up and don't want that $160 to increase 10x to handle the additional load, the numbers start making more sense: three months' worth of the projected increase paid upfront is around $4.3k (the extra ~$1,440/month times three), which is good money for a few days' work on the setup/migration, and it remains a good deal for you since you break even after three months and keep pocketing the savings indefinitely from that point on.
Of course, my comment wasn't aimed at those who successfully keep their cloud bill in the low 3-figures, but the majority of companies with a 5-figure bill and multiple "infrastructure" people on payroll futzing around with YAML files. Even half the achieved savings should be enough incentive for those guys to learn something new.
> few days' work
But initial setup is maybe 10% of the story. The day-2 operations of monitoring, backups, scaling, and failover still need to happen, and they still require expertise.
If you bring that expertise in house, it costs much more than 10x ($3/day -> $30/day is only about $10,950/year).
If you get the expertise from experts who are juggling you along with a lot of other clients, you get something like PlanetScale or CrunchyData, which are also significantly more expensive.
> monitoring
Most monitoring solutions support Postgres and don't actually care where your DB is hosted. Of course this only applies if someone was actually looking at the metrics to begin with.
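Even without a dedicated monitoring stack, the basics can be scraped straight from Postgres's own statistics views; a rough sketch (the connection details are placeholders, and you'd feed the numbers into whatever graphing/alerting you already have):

```python
# Rough sketch: pull a few basic health numbers straight from Postgres's
# statistics views. The DSN is a placeholder; wire the output into whatever
# monitoring/alerting system you already run.
import psycopg2

conn = psycopg2.connect("dbname=app user=monitor")  # placeholder credentials

with conn.cursor() as cur:
    # Number of open connections
    cur.execute("SELECT count(*) FROM pg_stat_activity")
    print("connections:", cur.fetchone()[0])

    # Database size in bytes
    cur.execute("SELECT pg_database_size(current_database())")
    print("db_size_bytes:", cur.fetchone()[0])

    # Buffer cache hit ratio for the current database
    cur.execute("""
        SELECT round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2)
        FROM pg_stat_database
        WHERE datname = current_database()
    """)
    print("cache_hit_pct:", cur.fetchone()[0])
```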
> backups
Plenty of options to choose from depending on your recovery point and recovery time objectives. From scheduled pg_dumps to WAL shipping to disk snapshots, and a combination of them at any schedule you desire. Just ship them to your favorite blob storage provider and call it a day.
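As a concrete example of the simplest end of that spectrum, a nightly pg_dump shipped to object storage is a few lines of glue; a sketch (bucket name, paths, and credentials are placeholders), using boto3 against any S3-compatible endpoint:

```python
# Sketch of the simplest backup tier: a nightly pg_dump shipped to
# S3-compatible object storage. Bucket, paths, and DSN are placeholders;
# run it from cron or a systemd timer.
import datetime
import subprocess

import boto3

BUCKET = "my-db-backups"  # hypothetical bucket name
key = f"pg/backup-{datetime.date.today().isoformat()}.dump"

# Custom-format dump (compressed, restorable with pg_restore)
subprocess.run(
    ["pg_dump", "--format=custom", "--file=/tmp/backup.dump", "dbname=app"],
    check=True,
)

# Ship it off the box; works against AWS S3 or a self-hosted S3-compatible store
s3 = boto3.client("s3")  # pass endpoint_url=... for a non-AWS store
s3.upload_file("/tmp/backup.dump", BUCKET, key)
```

WAL shipping and snapshot-level backups are a step up from this (archive_command, or tools like pgBackRest/wal-g), but the plumbing stays roughly this small.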
> scaling
That's the main reason I favor bare-metal infrastructure. Nothing on the cloud (at a price you can afford) can rival the performance of even a mid-range dedicated server, so scaling is effectively never an issue; if you're outgrowing that, the conversation we're having is not about getting a bigger DB but about using multiple DBs and sharding at the application layer.
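To make the "sharding at the application layer" part concrete: at its simplest it's deterministic routing from a stable tenant/user key to one of N databases. A toy sketch (the shard DSNs and key choice are assumptions, and a real setup needs a plan for adding shards or resharding, e.g. consistent hashing or a lookup table):

```python
# Toy sketch of application-layer sharding: route each tenant to one of N
# Postgres databases by hashing a stable key. DSNs are placeholders.
import hashlib

SHARDS = [
    "dbname=app_shard0 host=db0",
    "dbname=app_shard1 host=db1",
    "dbname=app_shard2 host=db2",
]

def shard_for(tenant_id: str) -> str:
    # Stable hash -> shard index; don't use built-in hash(), it's salted per process
    digest = hashlib.sha256(tenant_id.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(SHARDS)
    return SHARDS[index]

# Usage: connect to the right shard for this tenant
# conn = psycopg2.connect(shard_for("tenant-42"))
```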
> failover still needs to happen
Yes, get another server and use Patroni/etc. Or just accept the occasional downtime and up to 15 mins of data loss if the machine never comes back up. You'd be surprised how many businesses are perfectly fine with this. Case in point: two major clouds had hour-long downtimes recently and everyone basically forgot about it a week later.
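If you do run a standby (Patroni-style), the "how much would I lose" question is answerable at any moment from the primary's replication stats; a small sketch, assuming streaming replication is configured and using a placeholder DSN:

```python
# Sketch: ask the primary how far each standby is behind, in bytes of WAL.
# Assumes streaming replication is already set up; DSN is a placeholder.
# Useful for alerting before a failover would mean meaningful data loss.
import psycopg2

conn = psycopg2.connect("dbname=app user=monitor")

with conn.cursor() as cur:
    cur.execute("""
        SELECT application_name,
               pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
        FROM pg_stat_replication
    """)
    for name, lag_bytes in cur.fetchall():
        print(f"{name}: {lag_bytes} bytes behind")
```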
> If you bring that expertise in house
Infrastructure should not require continuous upkeep/repair. You wouldn't buy a car that requires you to have a full-time mechanic in the passenger seat at all times. If your infrastructure requires this, you should ask for a refund and buy from someone who sells more reliable infra.
A server will run forever once set up unless hardware fails (and some hardware can be made redundant, with spares provisioned ahead of time to take over automatically and defer maintenance operations). You should spend a couple of hours a month at most on routine maintenance, which can be outsourced and still beat the cloud price.
I think you're underestimating the amount of tech all around you that is essentially *nix machines that somehow just work, despite having zero upkeep or maintenance. Modern hardware is surprisingly reliable, and most outages are caused by operator error when people are (potentially unnecessarily) messing with stuff, rather than by the hardware failing.
At $160/mo you are using so little that you might as well host off a Raspberry Pi on your desk with a USB 3 SSD attached. Maintenance and keeping a hot backup would take a few hours to set up, and you're more flexible too. And if you need to scale, rent a VPS or even a dedicated machine from Hetzner.
An LLM could set this up for you; it's dead simple.
I'm not going to put customer data on a USB-3 SSD sitting on my desk. Having a small database doesn't mean you can ignore physical security and regulatory compliance, particularly if you've still got reasonable cash flow. Just as one example, some of our regulatory requirements involve immutable storage - how am I supposed to make an SSD that's literally on my desk immutable in any meaningful way? S3 handles this in seconds. Same thing with geographically distributed replicas and backups.
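For reference, the "seconds" part is S3 Object Lock: it has to be enabled when the bucket is created, and a default retention rule then makes objects undeletable and un-overwritable for the retention period. A sketch with boto3 (bucket name and retention period are placeholders, region configuration is omitted, and whether COMPLIANCE-mode lock satisfies a given regulator is a separate question):

```python
# Sketch: WORM-style immutability on S3 via Object Lock. Bucket name and
# retention period are placeholders; Object Lock must be enabled at bucket
# creation and cannot be added to an existing bucket. Region/LocationConstraint
# configuration is omitted for brevity.
import boto3

s3 = boto3.client("s3")
BUCKET = "customer-data-immutable"  # hypothetical bucket name

s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Default retention: objects can't be deleted or overwritten for 7 years;
# in COMPLIANCE mode not even the root account can shorten or remove it.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)
```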
I also disagree that the ongoing maintenance, observability, and testing of a replicated database would take a few hours to set up and then require zero maintenance and never ping me with alerts.
Nice troll. But TFA is about corporate IT so hopefully you get whatever.
For companies not heavily into tech, lots of this stuff is not that expensive. Again, how many DBAs are even looking for a 3 hr/month side gig?