I really loved serverless for a while, particularly in the early days when building small projects. But AWS Lambda, for example, is constant maintenance hell for larger applications: build and dependency issues, painful debugging, and slow deployments.
One feat still amazes me: my AWS Lambda React webapp example (a Todo app with server rendering), deployed in 2019, still works today, and I have not changed or redeployed it since.
> AWS Lambda, for example, is constant maintenance hell for larger applications: build and dependency issues, painful debugging, and slow deployments.
Maintenance hell is a symptom of the frameworks in use, not of Lambda itself. If you’re using stable tools, you can go years between five-minute runtime updates.
Debugging and deployment speed are a stronger argument - the best balance I’ve found is to mandate modular design and local development, so developers can work locally except when they are troubleshooting environmental interactions. Framework complexity also matters here - if you’re deploying a heavyweight app using AWS SAM, your deployments will be at least 1-2 orders of magnitude slower than for a simple Lambda.
Why is it maintenance hell? You mention your app has run unchanged for six years now.
Lambdas do require runtime updates. Nothing happens for a relatively long time, and then suddenly the Lambda stops working. If you don't have many dependencies, upgrading the runtime is easy. But if one of your dependencies requires an older runtime, it's better not to wait until the last moment.
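For context, the runtime version is usually a single property in the deployment template; a hypothetical SAM fragment (function and handler names are made up) looks like:

```yaml
# Hypothetical SAM template fragment -- resource and handler names assumed.
Resources:
  TodoFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9   # once AWS deprecates this version, it must be bumped here
```

When AWS deprecates a runtime, existing invocations keep working for a while, but you eventually lose the ability to update the function until you bump this one property, which is exactly when an old pinned dependency can turn a five-minute change into a real migration.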
I run a few serverless stacks on AWS Lambda, have been for years, and have slept well the whole time. Serverless is forgiving. Things heal and don't stay dead, as can happen with anything that carries state, like a container.
That said, I do prefer the development model of containers: run them anywhere. But that has its own limitations. For example, the author claims to be able to run state within a container. That doesn't make sense if you want to scale out; persistence is a problem. You can't run DBs on ECS Fargate, for example.
And the worst aspect of running containers: in bigger orgs the standard will probably be K8s, and that no longer has anything to do with the simplicity of containers as described in the article.
> anything that carries state like a container
Containers don't carry state. They can be made to do so if you wish but there's nothing inherent to them that does it.
> in bigger orgs the standard will probably be K8s, and that no longer has anything to do with the simplicity of containers as described in the article.
K8s can be very simple if there's a platform team ensuring great developer experience. I appreciate that this is likely rarer than you or I would like though.
> If your workload is truly intermittent and stateless, and you want zero operational effort, serverless can work.
And it works pretty well. A lot of internal and external JSON APIs are a good fit.
I found the article OK, but it would have been a much nicer read without all the emotional stuff, keeping just: serverless is pushed as a panacea but actually isn't a good fit because...
Underrated, but probably the best serverless offering: Cloud Run on GCP. You pay like for a VM, but only for the time you're receiving/serving requests.
(IMO, if it gets a fly.io-like command line experience, it will thrive even more.)
The thing I hated the most about AWS Lambda is connecting to Postgres or using bcrypt... it's possible, but it felt like defeating Malenia.
What did you dislike about connecting to Postgres from AWS Lambda?
Seems like a pitch for sliplane to me.
The way I view serverless such as AWS Lambda, and services like SQS and SNS, is that you're supposed to use them to program the cloud environment, not to stick your business apps in them.
So he learned containers and wants everyone to use them because it raises his competitive advantage. I didn't learn containers; I'm stuck at the VPS and virtual machines stage, and it's easier for me. I do think containers are a scam.
this is a fair critique of what I would call "first-gen" serverless platforms, like AWS Lambda. it's a shame that it's ~5ish years late to the party.
because "just use a container" is more or less the solution that "second-gen" serverless platforms all offer.
but also this:
> A container keeps state (just add a Docker volume!)
is just absolutely terrible as general-purpose advice.
like, yes, it can be annoying that "serverless" platforms are generally stateless, which forces you to move your state into a hosted database of some kind.
but...that reflects the underlying reality of the cloud platform. the servers that your "serverless" code runs on are generally themselves stateless.
if you were to blindly follow this "just add a Docker volume" approach to managing state, you're in for a rude awakening the moment you want to scale your "serverless" code from 1 server to 2 servers.
and unsurprisingly, the article glosses over this a few paragraphs farther down:
> You can deploy one container, or ten. Scale them. Monitor them. Keep state. Run background jobs. Use your own database.
run 10 containers...each with their own Docker volume? use my own database? what. this is blogspam nonsense.
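For illustration, the "just add a Docker volume" advice the article gives amounts to something like this (hypothetical compose file; the image and volume names are made up):

```yaml
# Hypothetical docker-compose sketch of the article's advice.
services:
  app:
    image: my-app:latest        # made-up image name
    volumes:
      - app-data:/var/lib/app   # state lives in a named volume
    # deploy:
    #   replicas: 2             # ...and this is where it falls apart:
    #                           # a named volume is host-local, so replicas
    #                           # on different hosts get independent copies
    #                           # of app-data and silently diverge
volumes:
  app-data:
```

This works fine on exactly one machine, which is the whole point of the criticism above: the pattern stops being "general-purpose advice" the moment a second replica is scheduled somewhere else.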
this article is actually fairly balanced.
I've seen small teams run fairly large stuff because serverless offloads staffing requirements.
In particular, the whole blue/green CI/CD approach makes it trickier to know what's going on, but also harder to trigger an outage.
Thus, while the complexity complaints are largely on point, to label it all a "scam" is too strident.
[flagged]
> it's going to cost me close to nothing
Until it does, and then you are screwed, without an easy way to get out if your application is anything other than trivial. It has its uses: a simple page/site that won't get any users, or a system that needs to elastically scale forever. But in between, you are just burning (your company's) money for no reason other than HN said it is the best. (A lot of people here work at FAANG; it's baffling why anyone would take advice on architecture run at a company with 100k more employees than their own startup, with all this cloud/devops garbage and the resulting costs and headaches.)
I’m not sure I understand this argument. Anything that runs in a lambda can also be run outside of a lambda. You just need to give yourself a way to invoke it or access it.
One setup I’ve used with a previous client was deploying many FastAPI lambdas, wrapped in Mangum. You can even expose both from the same code base. This way, you can deploy to a container and a Lambda without changing any code.
In one case, we used APIGWv2 to give the Lambda an HTTP endpoint; in the other, it’s the container that provides it. You can throw in IPv6 support by putting a CF distro in front of both.
Fair critique on the promotion, but you do see that a lot around here.
I share the frustration though and the article resonated with me. I'm currently working on a project that's being inappropriately done in cloud functions, and it's just dumb. We'll never scale beyond a few hundred users, but there's a load of considerations and hoop jumping we'd have completely skipped if the org had stayed off the Kool Aid.