> I thought this would be about the horrors of hosting/developing/debugging on “Serverless” but it’s about pricing over-runs.
Agreed about that. I was hired onto a team that inherited a large AWS Lambda backend and the opacity of the underlying platform (which is the value proposition of serverless!) has made it very painful when the going gets tough and you find bugs in your system down close to that layer (in our case, intermittent socket hangups trying to connect to the secrets extension). And since your local testing rig looks almost nothing like the deployed environment...
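The kind of band-aid you end up reaching for is a retry wrapper around the extension's local endpoint. A rough sketch only, assuming the default port and with made-up retry/backoff numbers:

```ts
// Sketch: retrying reads from the Parameters and Secrets Lambda Extension's
// local HTTP endpoint. Retry count and backoff are illustrative assumptions.
const PORT = process.env.PARAMETERS_SECRETS_EXTENSION_HTTP_PORT ?? "2773";

async function getSecret(secretId: string, attempts = 3): Promise<string> {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(
        `http://localhost:${PORT}/secretsmanager/get?secretId=${encodeURIComponent(secretId)}`,
        { headers: { "X-Aws-Parameters-Secrets-Token": process.env.AWS_SESSION_TOKEN ?? "" } }
      );
      if (!res.ok) throw new Error(`extension returned ${res.status}`);
      const body = (await res.json()) as { SecretString: string };
      return body.SecretString;
    } catch (err) {
      // The hangups are intermittent, so back off briefly and try again.
      if (i === attempts - 1) throw err;
      await new Promise((r) => setTimeout(r, 100 * (i + 1)));
    }
  }
  throw new Error("unreachable");
}
```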
I have some toy stuff at home running on Google Cloud Functions and it works fine (and scale-to-zero is pretty handy for hiding in the free tier). But I struggle to imagine a scenario in a professional setting where I wouldn't prefer to just put an HTTP server/queue consumer in a container on ECS.
I've had similar experiences with Azure's services. Black boxes that are impossible to troubleshoot. Very unexpected behavior that people aren't necessarily aware of when they initially spin these things up. For anything important I just accept the pain of deploying to Kubernetes. Developers actually wind up preferring it in most cases with Flux and DevSpace.
I recently had a customer who had the smart idea to protect their Container Registry with a firewall... breaking pretty much everything in the process. Now it kinda works after days of punching enough holes in it... But I still have no idea where something like Container Registry pulls stuff from, or App Service...
And whether some of their suggested solutions actually work or not...
Convince them to add IPv6 and you’ll be set for life
They did!
But they network address translate (NAT) IPv6, entirely defeating the only purpose of this protocol.
It's just so, so painful that I have no words with which I can adequately express my disdain for this miserable excuse for "software engineering".
Every time I've done a cost-benefit analysis of AWS Lambda vs running a tiny machine 24/7 to handle things, the math has come out in favor of just paying to keep a machine on all the time and spinning up more instances as load increases.
There are some workloads that are suitable for Lambda, but they are very rare compared to the number of people who just shove REST APIs onto Lambda "in case they need to scale."
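For a rough sense of how that math tends to go, here's a back-of-envelope sketch with assumed traffic numbers and ballpark on-demand prices (all illustrative, check current pricing):

```ts
// Back-of-envelope comparison: Lambda invocation cost vs. a small always-on
// instance. All inputs below are assumptions, not measurements.
const requestsPerMonth = 10_000_000; // assumed traffic
const avgDurationSec = 0.1;          // assumed average execution time
const memoryGb = 0.5;                // assumed memory allocation

// Ballpark Lambda pricing: ~$0.20 per 1M requests + ~$0.0000166667 per GB-second.
const lambdaCost =
  (requestsPerMonth / 1_000_000) * 0.2 +
  requestsPerMonth * avgDurationSec * memoryGb * 0.0000166667;

// Ballpark small-instance pricing: ~$0.0104/hour, ~730 hours per month.
const instanceCost = 730 * 0.0104;

console.log(`lambda: ~$${lambdaCost.toFixed(2)}/mo, instance: ~$${instanceCost.toFixed(2)}/mo`);
```

With those assumed numbers the always-on box comes out ahead, and the gap only grows with sustained traffic, since the Lambda bill scales linearly with requests while the instance cost stays flat.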
Is that what people do, test/develop primarily with local mocks of the services? I assumed it was more like you deploy mini copies of the app to individual instances, namespaced per developer or feature branch, so everyone is working on something that fairly closely approximates prod, just without the loading characteristics, and btw you have to be online, so no working on an airplane.
There are many paths. Worst case, I've witnessed developers editing Lambda code in the AWS console because they had no way to recreate the environment locally.
If you can't run locally, productivity drops like a rock. Each "cloud deploy" wastes tons of time.
SST has the best dev experience but requires you to be online. They deploy all the real services (namespaced to you) and then, instead of your function code, they deploy little proxy Lambdas that pass the request/response down to your local machine.
It's still not perfect because the code is running locally, but it allows "instant" updates after you make local changes, and it's the best I've found.
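Conceptually (just a sketch of the idea, not SST's actual implementation; DEV_ENDPOINT is a made-up variable standing in for whatever tunnel they wire up), the proxy is something like:

```ts
// Conceptual sketch of a "proxy" Lambda: forward the invocation event to the
// developer's machine over a WebSocket and return whatever comes back.
import WebSocket from "ws";

export const handler = async (event: unknown) => {
  // DEV_ENDPOINT is hypothetical: the tunnel back to the local dev process.
  const ws = new WebSocket(process.env.DEV_ENDPOINT!);

  const response = await new Promise<string>((resolve, reject) => {
    ws.on("open", () => ws.send(JSON.stringify(event)));  // push the event down
    ws.on("message", (data) => resolve(data.toString())); // local result comes back
    ws.on("error", reject);
  });

  ws.close();
  return JSON.parse(response);
};
```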
Mocks usually don’t line up with how things run in prod. Most teams just make small branch or dev environments, or test in staging. Once you hit odd bugs, serverless stops feeling simple and just turns into a headache.
Yeah, I’ve never worked at one of those shops but it’s always sounded like a nightmare. I get very anxious when I don’t have a local representative environment where I can get detailed logs, attach a debugger, run strace, whatever.