The direct cost is the easy part. The more insidious part is that you're now cultivating a growing staff of technologists whose careers depend on doing things the AWS way, who get AWS certified so that they build your systems the AWS Well-Architected way instead of thinking for themselves, and who can upsell you on AWS lock-in solutions using AWS-provided soundbites and sales arguments.
("Shall we make the app very resilient to failure? Yes running on multiple regions makes the AWS bill bigger but you'll get much fewer outages, look at all this technobabble that proves it")
And of course AWS lock-in services are priced to look cheap compared to their overpriced standard offerings[1] - if you just spend the engineering and IaC coding effort to move onto them, the "savings" can be put towards more AWS cloud engineering, which again makes your cloud eng org bigger and more important.
[1] (For example, moving your app off containers to Lambda, or the db off PostgreSQL to DynamoDB, etc.)
> The direct cost is the easy part
I don't think it is easy. I see most organizations struggle with the fact that everything is throttled in the cloud. CPU, storage, network. Tenants often discover large amounts of activity they were previously unaware of, that contributes to the usage and cost. And there may be individuals or teams creating new usages that are grossly impacting their allocation. Did you know there is a setting in MS SQL Server that impacts performance by an order of magnitude when sending/receiving data from the Cloud to your on-premises servers? It's the default in the ORM generated settings.
Then you can start adding in the Cloud value, such as incomprehensible networking diagrams that are probably non-compliant in some way (guess which ones!), and security? What is it?
Yes. Cloud sellers knew this: happy path for the flagship project, the shiny new object, and some additional services. After the point of no return, what usually happens is that the cloud ends up being a replica of bare metal development.
As a Computer Science dude and former C64/Amiga coder in senior management of a large international bank, I saw first hand how costs balloon simply because the bank recreates and replicates its bare metal environment in the cloud.
So increasing costs while nothing changed. Imagine that: fixed resources, no test environments, because virtualisation was out of the equation in the cloud due to policies and SDLC processes. And it goes on: releases via automation? Nope, a request per email with an attached scan of a paper document as sign-off.
Of course you can buy a Ferrari and use it as a farm tractor. I bet it is possible with a little modification here and there.
Another fact is that lock-in plays a huge role. Once you are in it, no matter what you subscribe to, everything magically slows down a bit. But since I am a guy who uses a time tracker to test and monitor apps, I could easily draw a line even without utilizing my Math background: enforced throttling.
There is a difference between 100, 300 and 500ms for SaaS websites - people without prior knowledge of perceptual psychology feel it but cannot put their finger on it. But since we are in the cloud, suddenly a cloud manager will offer you a speed upgrade - catered just for your needs! Here, have a free trial period of three months and experience the difference for your business!
I am a bit opinionated here and really suspect that the cloud provider analysed the bank's traffic and service usage and willingly slowed it down in a way only professionals could detect. Were you promised lightning speed in the first place? No, that's not what the contract says. We fed you with it, but a "normal" speed was what was agreed upon. It is like getting a Porsche as a free rental car when you take your VW Beetle to the dealer for a checkup. Hooked, of course. A car is a car after all. How do you boil a frog? Slowly.
Of course there will be more sales, and this is the Achilles' heel of every business; indifferent customers are easy prey.
It is a vicious cycle, almost like taxation. You cannot hide from it, no escape and it is always on the rise.
Ferrari actually makes tractors.
That's Lamborghini, isn't it?
IIRC, he only got into making cars because Enzo Ferrari disrespected him.
Does Lamborghini still do?
> Did you know there is a setting in MS SQL Server that impacts performance by an order of magnitude when sending/receiving data from the Cloud to your on-premises servers? It's the default in the ORM generated settings.
Sounds interesting, which setting is that?
Multiple Active Result Sets (MARS). During large query responses or bulk loads, "full" packets cause an additional packet to be sent over the wire with about five bytes to hold the MARS "wrapper". The net result is one full packet and one empty packet on the wire, alternating. At LAN latencies the performance impact is negligible. However, at the higher latency between AWS and your premises, the performance impact is terrible.
MARS isn't strictly needed for most things. Some features that require it are ORM (EF) proxies and lazy loading. If you need MARS, there are third-party "accelerators" that work around this madness.
"MARS Acceleration significantly improves the performance of connections that use the Multiple Active Result Sets (MARS) connection option."
https://documentation.nitrosphere.com/resources/release-note...
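If you want to measure the effect yourself, it's purely a client-side connection-string flag - a rough sketch, assuming pyodbc and the Microsoft ODBC driver (server, database, and table names are placeholders; the ADO.NET equivalent keyword is MultipleActiveResultSets). Over a LAN the two numbers will look about the same; over a high-latency link to the cloud they won't:

    import time
    import pyodbc

    BASE = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=your-server.example.com;DATABASE=YourDb;"
            "UID=user;PWD=secret;")

    def time_bulk_read(mars: str) -> float:
        # Same query both times; only the MARS flag differs between runs.
        conn = pyodbc.connect(BASE + f"MARS_Connection={mars};")
        cur = conn.cursor()
        start = time.perf_counter()
        cur.execute("SELECT * FROM dbo.SomeLargeTable")  # placeholder table
        cur.fetchall()
        conn.close()
        return time.perf_counter() - start

    print("MARS on :", time_bulk_read("yes"))
    print("MARS off:", time_bulk_read("no"))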
Yeah, honestly most lazy loading and EF proxy use I have seen is more aptly named lazy coding instead. There are times when you might be running 3-4 queries to project some combination of them and want to do that in parallel, but in general if you have lazy loading enabled in EF you are holding up a sign that says “inconsistent reads happening in here”.
I use and love EF, but generally leave MARS off when possible because it is responsible for more trouble than performance gains nearly every time.
Is that not a client connection flag? MARS does not require a setting change on the server?
I think you may have misinterpreted what he said. I can see why it seems to imply a server setting but that isn't the case
> Did you know there is a setting in MS SQL Server that impacts performance by an order of magnitude when sending/receiving data from the Cloud to your on-premises servers? It's the default in the ORM generated settings
You are right. For some reason when I initially sped through the post I read it as if RDS was doing something wrong.
It is a setting in connection string in the client app.config, "MultipleActiveResultSets=true;".
"Multiple Active Result Sets=true;"
Would love to know as well.
I was about to rage at you over the first sentence, because this is so often how people start trying to argue bare metal setups are expensive. But after reading the rest: 100% this. I see so many people push AWS setups not because it's the best thing - it can be if you're not cost sensitive - but because it is what they know and they push what they know instead of evaluating the actual requirements.
Well, they aren't wrong about the bare metal either: Every organization ends up tied to their staff, and said staff was hired to work on the stack you are using. People end up in quite the fights because their supposed experts are more fond of uniformity and learning nothing new.
Many a company was stuck with a datacenter unit that was unresponsive to the company's needs, and people migrated to AWS to avoid dealing with them. This straight out happened in front of my eyes multiple times. At the same time, you also end up in AWS, or even within AWS, using tools that are extremely expensive, because the cost-benefit analysis for the individuals making the decision, who often don't know very much other than what they use right now, are just wrong for the company. The executive on top is often either not much of a technologist or 20 years out of date, so they have no way to discern the quality of their staff. Technical disagreements? They might only know who they like to hang out with, but that's where it ends.
So for path-dependent reasons, companies end up making a lot of decisions that in retrospect seem very poor. In startups it often just kills the company. Just don't assume the error is always in one direction.
Sure but I have seen the exact same thing happen with AWS.
In a large company I worked at, the Ops team that held the keys to AWS was taking literal months to push things to the cloud, causing problems with bonuses and promotions. Security measures were not in place, so there were cyberattacks. Passwords of critical services lapsed because they were not paying attention.
At some point it got so bad that the entire team was demoted, lost privileges, and contractors had to jump in. The CTO was almost fired.
It took months to recover and even to get to an acceptable state, because nothing was really documented.
I can’t believe the CTO wasn’t fired for that.
The CTO was the one holding the bonus and promotions for tech, so he just shifted the blame down when it was "investigated".
On the other hand it's not hard to believe that the CEO and the board are as sleepy as the CTO here. And the whole management team.
The worst one was when a password for an integration with the judicial system expired. They asked the DevOps to open their email and there were daily alerts for six months. The only reason they found this happened was because a few low level operators made a big thing out of it.
I don't like talking about "regulatory capture" but this is the only reason this company still exists. Easy market when there's almost no competition.
The entire value proposition of AWS vs running one's own server is basically this: is it easier to ask for permission, or forgiveness? You're asking for permission to get a million dollars worth of servers / hardware / power upgrades now, or you're asking for forgiveness for spending five million dollars in AWS after 10 months. Which will be easy: permission or forgiveness?
I had not thought of it this way, but interesting point. I have seen this as well.
> Many a company was stuck with a datacenter unit that was unresponsive to the company's needs
I'd like to +1 here - it's an understated risk if you've got datacenter-scale workloads. But! You can host a lot of compute on a couple racks nowadays, so IMHO it's a problem only if you're too successful and get complacent. In the datacenter, creative destruction is a must and crucially finance must be made to understand this, or they'll give you budget targets which can only mean ossification.
In the orgs where I have seen this, it is usually a symptom of the data center unit being starved of resources. It's like they have only been given the choice of on-prem with ridiculous paperwork and long lead times, or paying 20x for cloud.
Like can’t we just give the data center org more money and they can over provision hardware. Or can we not have them use that extra money to rent servers from OVH/Hetzner during the discovery phase to keep things going while we are waiting on things to get sized or arrive?
I feel like companies are unreasonably afraid of up-front cost: never mind that they're going to pay more for cloud over the next 6 months, spending 6x the monthly cloud cost on a single server makes them hesitate.
It’s how they always refuse to spend half my monthly salary on the computer I work on, and instead insist I use an underpowered windows machine.
Blame finance and accounting... Renting compute in the cloud can be immediately expensed against revenues. Purchasing equipment has to be depreciated over a few years. That's also why spending $$$$$ on labor (salaries) to solve an ops issue happens rather than spending $$$$ on some software to do it. If the business relies on the software, it looks like an ongoing cost of operating the business. Spending more on labor to juggle the craziness can "hide" that and make the business look more attractive to investors... Cutting labor costs is an easier way to improve the bottom line (in the short term).
You also don't need to commit to upfront costs. You can easily rent, rent to own/lease these resources.
The problem is that if you over-provision and buy 2x as many resources as you need, this looks bad from a utilization standpoint. If you buy 2x as expensive cloud solutions and "auto scale", you will have a much higher utilization for the same cost.
> Or can we not have them use that extra money to rent servers from OVH/Hetzner
Or just use Hetzner for major performance at low cost... Their APIs and tooling make it look like it's your own datacenter.
Your comment also jogged my memory of how terrible bare metal days used to be. I think now with containers it can be better but the other reason so many switched to cloud is we don’t need to think about buying the bare metal ahead of time. We don’t need to justify it to a DevOps gatekeeper.
That so many people remember bare metal as of 20+ years ago is a large part of the problem.
A modern server can be power cycled remotely, can be reinstalled remotely over networked media, can have its console streamed remotely, can have fans etc. checked remotely without access to the OS it's running etc. It's not very different from managing a cloud - any reasonable server hardware has management boards. Even if you rent space in a colo, most of the time you don't need to set foot there other than for an initial setup (and you can rent people to do that too).
But for most people, bare metal will tend to mean renting bare metal servers already configured anyway.
When the first thing you then tend to do is to deploy a container runtime and an orchestrator, you're effectively usually left with something more or less (depending on your needs) like a private cloud.
As for "buying ahead of time", most managed server providers and some colo operators also offer cloud services, so that even if you don't want to deal with a multi-provider setup, you can still generally scale into cloud instances as needed if your provider can't bring new hardware up fast enough (but many managed server providers can do that in less than a day too).
I never think about buying ahead of time. It hasn't been a thing I've had to worry about for a decade or more.
> A modern server can be power cycled remotely, can be reinstalled remotely over networked media, can have its console streamed remotely, can have fans etc. checked remotely without access to the OS it's running etc. It's not very different from managing a cloud - any reasonable server hardware has management boards. Even if you rent space in a colo, most of the time you don't need to set foot there other than for an initial setup (and you can rent people to do that too).
All of this was already possible 20 years ago, with iLO and DRAC cards.
Yes, that's true, but 20 years ago a large proportion of lower end servers people were familiar with didn't have anything like it, and so a whole lot even of developers who remember "pre-cloud" servers have never experienced servers with them.
You are right but I just think people miss the history when we talk about moving to the cloud. It was not that long ago at a reasonable size Bay Area company, I would need to justify new metal to be provisioned to standup a service I was tasked with.
The catch is that bare metal is SO cheap and performant that you can buy legions of it and have it lying around. And datacenters, their APIs and whatnot advanced so much that you can even have automations that automatically provision and set up your bare metal servers. With containers, it gets even better.
And, let's face it - aren't you already overprovisioning in the cloud because you can't risk your users waiting 1-2 minutes until your new nodes and pods come up? So basically the 'autoscaling' of the cloud has always been a myth.
That memory is part of the problem: it doesn't reflect today's reality. You can have an IT ops team that buys and sets up servers, and then sets up (perhaps) Kubernetes and a nice CI/CD pipeline on top of it. They can fairly easily bill individual teams for usage, and teams have to justify their costs, just like they (hopefully!) do in any sane org that's running in the cloud.
The bad old days of begging an IT ops person for a server, and then throwing a binary over the fence at them so they can grumble while they try to get it running safely in production... yeah, no, that doesn't have to be a thing anymore.
The "we" you speak of is the problem: if your org hires actual real sysadmins and operations people (not people who just want to run everything on AWS), then "you" don't have to worry about it.
It's simple enough to hire people with experience with both, or pay someone else to do it for you. These skills aren't that hard to find.
If you hire people that are not responsive to your needs, then, sure, that is a problem that will be a problem irrespective of what their pet stack is.
Considering the rapid shift from on-prem to the cloud, I think it's clearly false that the people who knew on-prem were fighting for their little area of expertise.
In my experience, the ops folks were absolutely thrilled with the arrival of the cloud because with a trivial amount of training and a couple of certifications they had a pathway to get paid as much, if not more, than devs, especially if they rebranded as “devops engineers” instead of “ops guys”.
The only pushback against the cloud, other than some of us engineers who actually were among the first to jump on the cloud, still really loved it, but also recognized that it wasn’t the best fit for all uses and carried significant risks, were people worried about data safety.
The latter concern has largely turned out to not be a real one yet, but a decade and a half later people are finally realizing that actually there are many areas where the cloud may not be the best fit.
> said staff was hired to work on the stack you are using
Looking back at various hiring decisions at various levels of organizations, this is probably the single biggest mistake I've made, multiple times: hiring people for a specific technology because we were specifically using it.
You'll end up with a team unwilling to change, because "you hired me for this; even if something else would be best for the business, this is what I do".
Once I and the organizations shifted our mindset to hiring people who are more flexible - people who, even if they have expertise in one or two specific technologies, won't put their head in the sand whenever change comes up - everything became a lot easier.
Exactly. If someone has "Cloud Engineer" in the headline of their resume instead of "Devops Engineer", it's already a warning sign and worth probing. If someone has "AWS|VMWare Engineer" in their bio, it's a giant red flag to me. Sometimes it's people just being aware of where they'll find demand, but often it's indicative of someone who will push their pet stack - and it doesn't matter if it's VMWare on-prem or AWS (both purely as examples; it doesn't matter which specific tech it is), it's equally bad if they identify with a specific stack irrespective of what the stack is.
I'll also tend to look closely at whether people have "gotten stuck" specialising in a single stack. It won't make me turn them down, but it will make me ask extra questions to determine how open they are to alternatives when suitable.
The weird thing is I'm old enough to have grown up in the pre-cloud world, and most of the stuff, like file servers, proxies, dbs, etc., isn't any more difficult to set up than the AWS stuff; it's just that the skills are different.
Also there's a mindset difference - if I gave you a server with 32 cores you wouldn't design a microservice system on it, would you? After all there's nowhere to scale to.
But with AWS, you're sold the story of infinite compute you can just expect to be there, but you'll quickly find out just how stingy they can get with giving you more hardware automatically to scale to.
I don't dislike AWS, but I feel this promise of false abundance has driven the growth in complexity and resource use of the backend.
Reality tends to be you hit a bottleneck you have a hard time optimizing away - the more complex your architecture, the harder it is, then you can stew.
> But with AWS, you're sold the story of infinite compute you can just expect to be there, but you'll quickly find out just how stingy they can get with giving you more hardware automatically to scale to.
This is key.
Most people never scale to a size where they hit that limit, and in most organisations where that happens, someone else has to deal with it, and so most developers are totally unaware of just how fictional the "infinite scalability" actually is.
Yet it gets touted as a critical advantage.
At the same time, most developers have never tried to manage modern server hardware, and seem to think it is somewhat like managing the hardware they're using at home.
But that limit is well below what you could get even in a gaming machine (AWS cpus are SMT threads, so a 32-core machine is actually 64 cpus by AWS's count) - you can get that in a high-end workstation, and I'd guess that's way more power than most people end up using even in their large-ish scale AWS projects.
> AWS cpus are SMT threads
Not on the AMD machines from m7 (and the others which share the same architecture)
>I see so many people push AWS setups not because it's the best thing - it can be if you're not cost sensitive - but because it is what they know and they push what they know instead of evaluating the actual requirements.
I kinda feel like this argument could be used against programming in essentially any language. Your company, or you yourself, likely chose to develop using (whatever language it is) because that's what you knew and what your developers knew. Maybe it would have been some percentage more efficient to use another language, but then you and everyone else has to learn it.
It's the same with the cloud vs bare metal, though at least in the cloud, if you're using the right services, if someone asked you tomorrow to scale 100x you likely could during the workday.
And generally speaking, if your problem is at a scale where bare metal is trivial to implement, it's likely we're only talking about a few hundred dollars a month being 'wasted' in AWS. Which is nothing to most companies, especially when they'd have to consider developer/devops time.
> if someone asked you tomorrow to scale 100x you likely could during the workday.
I've never seen a cloud setup where that was true.
For starters: most cloud providers will impose limits on you that often mean going 100x would involve pleading with account managers to have limits lifted and/or scrounging together a new, previously untested combination of instance sizes.
But secondly, you'll tend to run into unknown bottlenecks long before that.
And so, in fact, if that is a thing you actually want to be able to do, you need to actually test it.
But it's also generally not a real problem. I more often come across the opposite: Customers who've gotten hit with a crazy bill because of a problem rather than real use.
But it's also easy enough to set up a hybrid setup that will spin up cloud instances if/when you have a genuine need to scale up faster than you can provision new bare metal instances. You'll typically run an orchestrator and run everything in containers on a bare metal setup too, so typically it only requires having an auto-scaling group scaled down to 0, warming it up if load nears a critical level on your bare metal environment, and then flipping a switch in your load balancer to start directing traffic there. It's not a complicated thing to do.
Now, incidentally, your bare metal setup is even cheaper because you can get away with a higher load factor when you can scale into cloud to take spikes.
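Concretely, the "warm it up when bare metal nears capacity" part can be as small as a periodic check plus one API call - a minimal boto3 sketch, assuming an auto-scaling group named "burst-workers" kept at 0 and a load figure you already collect from your bare metal environment (names and thresholds are made up):

    import boto3

    asg = boto3.client("autoscaling")
    BURST_ASG = "burst-workers"   # hypothetical ASG, normally sitting at 0 instances
    WARM_UP_AT = 0.80             # start cloud nodes when bare metal load hits 80%
    COOL_DOWN_AT = 0.50           # scale back to 0 once load drops below 50%

    def adjust_burst_capacity(bare_metal_load: float, warm_size: int = 3) -> None:
        # Scale the cloud ASG up when bare metal nears capacity, back down when it cools.
        if bare_metal_load >= WARM_UP_AT:
            asg.set_desired_capacity(AutoScalingGroupName=BURST_ASG,
                                     DesiredCapacity=warm_size)
        elif bare_metal_load <= COOL_DOWN_AT:
            asg.set_desired_capacity(AutoScalingGroupName=BURST_ASG,
                                     DesiredCapacity=0)

The load-balancer switch itself is just whatever weighting mechanism you already use (weighted DNS, a second target group, etc.) once those instances report healthy.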
> And generally speaking, if your problem is at a scale where bare metal is trivial to implement, it's likely we're only talking about a few hundred dollars a month being 'wasted' in AWS. Which is nothing to most companies, especially when they'd have to consider developer/devops time.
Generally speaking, I only relatively rarely work on systems that cost less than tens of thousands per month, and what I consistently see with my customers is that the higher the cost, the bigger the bare-metal advantage tends to be, as it allows you to readily amortise the initial setup costs of more streamlined/advanced setups. The few places where cloud wins on cost are the very smallest systems, typically <$5k/month.
> if you're using the right services, if someone asked you tomorrow to scale 100x you likely could during the workday.
"The right services" is I think doing a lot of work here. Which services specifically are you thinking of?
- S3? sure, 100x, 1000x, whatever, it doesn't care about your scale at all (your bill is another matter).
- Lambdas? On their own sure you can scale arbitrarily, but they don't really do anything unless they're connected to other stuff both upstream and downstream. Can those services manage 100x the load?
- Managed K8s? Managed DBs? EC2 instances? Really anything where you need to think about networking? Nope, you are not scaling this 100x without a LOT of planning and prep work.
> Nope, you are not scaling this 100x without a LOT of planning and prep work.
You're not getting a 100x increase in instances without justifying it to your account manager anyway, long before you figure out how to get it to work.
EC2 has limits on the number of instances you can request, and it certainly won't let you 100x unless you've done it before and already gone through the hassle to get them to raise your limits.
On top of that, it is not unusual to hit availability issues with less common instance types. Been there, done that, had to provision several different instance types to get enough.
I hit it quite frequently with a particularly popular EKS node instance type in us-east-1 (of course). I'm talking requesting like 5-6 instances, nothing crazy. Honestly, I wonder if ECS or Fargate have the same issue.
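For what it's worth, you can at least check up front which AZs even offer a given instance type - a rough boto3 sketch (instance type and region are just examples; note this only tells you the type is offered there, not that there's spare capacity right now):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.describe_instance_type_offerings(
        LocationType="availability-zone",
        Filters=[{"Name": "instance-type", "Values": ["m7i.2xlarge"]}],
    )
    for offering in resp["InstanceTypeOfferings"]:
        # e.g. "us-east-1a m7i.2xlarge"
        print(offering["Location"], offering["InstanceType"])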
So, I was around back then and am around now as a principal and this comment doesn't really pass the reality sniff test.
It's a lot worse than this in terms of AWS cost for apps that often barely any people use. They're often incorrectly provisioned, and the AWS bill ends up in the hundreds of thousands or millions when it could have been a few thousand on bare metal at Hetzner with a competent sysadmin team. No, it's not harder to administer bare metal. No, it's not less reliable. No, it's not substantially harder for most companies to scale on bare metal (large Fortune 50 excluded).
I've been selling a cost-reduction service for a while, and the hardest aspect of selling it is that so many people on the tech side don't care, because they don't seem to be held to account for the drain they cause.
I can go in and guarantee that my fees are capped at a few months worth of their savings, and still it's a hard sell with a lot of teams who are perfectly happy to keep burning cash.
And I'll note, as much as I love to get people off AWS, most of the time people can massively reduce their bill just by using AWS properly as well, so even if bare metal were bad for their specific circumstances, they're still figuratively setting fire to piles of cash.
> people push AWS setups not because it's the best thing - it can be if you're not cost sensitive
This is so weird to me, because if you're running a company, you should be cost-sensitive. Sure, you might be willing to spend extra money on AWS in the very beginning if it helps you get to market faster. But after that, there's really no excuse: profit margin should be a very important consideration in how you run your infrastructure.
Of course, if you're VC backed, maybe that doesn't matter... that kind of company seems to mainly care about user growth, regardless of how much money is being sent to the incinerator to get it.
I was checking the appetite for some cost reduction service a while back and one of the responses I got was from a CTO telling me he didn't need to care about cost because they'd just gotten funded and had lots of cash in the bank.
It's perfectly valid to not want to put engineering effort into it at the "wrong time" when delivering features will give you a higher return, but it came across as a lack of interest in paying attention to cost at all.
I saw a lot of that attitude from the tech side when I was looking at this. A lot of the time the CFO or CEO would be appalled, because they were actually paying attention to burn rates, but they were often getting stonewalled by the tech side, who'd often just insist all the costs were necessary - even while they often didn't know what they were spending, or on what.
I only work at companies that are using cloud because I hate administering systems and I hate dealing with system administrators when I need resources.
Unfortunately it's not, and it gets more difficult the more cloud-y your app gets.
You can pay for EC2+EBS+network costs, or you can have a fancy cloud-native solution where you pay for Lambda, ALBs, CloudWatch, metrics, Secrets Manager... (things you assume they would just give you - like when you eat at a restaurant, you probably don't expect to pay for the parking, the toilet, or rent for the table and seats).
So cloud billing is its own science and art - and in most orgs devs don't even know how much the stuff they're building costs, until finance people start complaining about the monthly bills.
I have a fair idea of how much our cloud costs. However, others in the org don't want me to actually see how much it costs. Which is bonkers. I can help identify how we could rearchitect or investigate lowering our costs, if I know what they are and where to look first. No point guessing and accidentally optimizing something that will be an order of magnitude or more less beneficial.
We run regular FinOps meetings within departments, so everyone’s aware. I think everyone should. But it’s a lot of overhead of course. So a dev is concerned not only with DevOps anymore but with DevSecFinOps. Not everyone can cope with so many aspects at once. There’s a lot of complexity creep in that.
Yeah, AWS has the billing panel, that's where I usually discover that after I make a rough estimate on how much the thing I'm building should cost by studying the relevant tables, I end up with stuff costing twice as much, because on top of the expected items there's always a ton of miscellaneous stuff I never thought about.
I have Claude, ChatGPT, and Gemini analyze our AWS bills and usage metrics once a month and they are surprisingly good at finding savings.
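For anyone wanting to try the same, the raw per-service numbers are easy to pull and paste into whatever model you like - a rough boto3 sketch, assuming Cost Explorer is enabled on the account (the dates are just an example month):

    import boto3

    ce = boto3.client("ce")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2024-05-01", "End": "2024-06-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    # Print last month's spend broken down by AWS service.
    for group in resp["ResultsByTime"][0]["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{service}: ${amount:,.2f}")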
My manager wants me to get this silly AWS certification.
Let me go on a tangent about trains. In Spain, before you board a high-speed train you need to go through a full security check, like at an airport. In all other EU countries you just show up and board, but in Spain there's the security check. The problem is that even though the security check is an expensive, inefficient theatre, just in case something does blow up, nobody wants to be the politician that removed the security check. There will be no reward for a politician who makes life marginally easier for lots of people, but there will be severe punishment for a politician who is involved in a potential terrorist attack, even if the chance of that happening is ridiculously small.
This is exactly why so many companies love to be balls deep into AWS ecosystem, even if it's expensive.
Nobody gets fired for buying IB^H^H AWS
> In all other EU countries you just show up and board, but in Spain there's the security check
Just for curiosity's sake, did any other EU countries have any recent terrorist attacks involving bombs on trains in the capital, or is Spain so far alone with this experience?
London had the tube bombings, but there is no security scanning there.
AFAIK, there is no security scanning on the metro/"tube" in Spain either, it's on the national train lines.
Edit: Also, after looking it up, it seems like London did add temporary security scanners at some locations in the wake of those bombings, although they weren't permanent.
Russia is the only other European country besides Spain that after train bombings added permanent security scanners. Belgium, France and a bunch of other countries have had train bombings, but none of them added permanent scanners like Spain or Russia did.
Not true; France had this on the train to the Netherlands (Thalys) after some crazy person attacked passengers on that train. They also added electronic gates for most high-speed trains in many large stations.
Notice how these inefficient processes create large, compact lines of passengers, which would make the casualties much worse in case of an actual bomb.
Check out the Madrid 2004 terror attacks... So deadly that Spain left Afghanistan and Iraq, afaik.
That's exactly the event I was alluding to, good detective work :)
https://en.wikipedia.org/wiki/2015_Thalys_train_attack
How does Spain deal with trains that come in from a neighboring country?
The security check has nothing to do with protecting trains or passengers, so your question is irrelevant.
Thanks for letting me know that my question is irrelevant. Sorry for taking up your time.
You're welcome
French trains come in without any security checks.
AWS doesn’t have to be expensive.
Sure, but you outgrow the free ("trial") resources in a blink, and then it starts being expensive compared to the alternatives.
It’s a marketing trap. But also a job guarantee, since everyone’s in the same trap. You’ve got a couple of cloud engineers or "DevOps" who lobby for AWS or some other hyperscaler, naive managers who write down a decision report littered with logical fallacies, and a few years in the sunk cost is so high you can’t get off of it; instead of doing productive work, you’re sitting in myriads of FinOps meetings, where even fewer people understand what’s going on.
Engineering managers are promised cost savings at the HR level. Corporate finance managers are promised the OpEx-for-CapEx trade-off; the books look better immediately. Cloud engineers embark on their AWS certification journey, having been promised an uptick to their salaries. It’s a win/win for everyone in isolation, a local optimum for everyone, but the organization now has to pay way more than it - hypothetically - would have been paying for bare metal ops. And hypothetical arguments are futile.
And it lends itself well to overengineering and the microservices cargo cult. Your company ends up with a system distributed around the globe across multiple AZs per region of business operations, striving to shave 100ms off your clients’ RTT. But it’s outgrown your comprehension, and it’s slow anyway, and you can’t scale up because it’s expensive. And instead of having one problem, you now have 99, and your bill is one.
All great points. I have seen, in a company of smart people, the CIO/CTO freely admit: "Look, we know cloud may not be cheap or easier to manage, but this is the direction we have taken, since we are getting out of owning or managing hardware/datacenters."
So it is not like one can dazzle the decision makers with any logic or hard data. They are just announcing the decision while calling it a robust discussion of the pros and cons of on-prem vs cloud placement.
Yep. I’ve also seen managerial people worship AWS sales reps as oracles, misconstruing ordinary sales meetings with them as something divine, disclosing a lot of the company’s IP to them in awe, just to listen to some superficial blabbered truisms. I mean, ChatGPT could tell you more. To add insult to injury, the managerial people wouldn’t listen to their own senior, staff, and principal engineers, and preferred to follow what the AWS reps told them.
It’s really disturbing how the human factor controls decision making in corporations.
For my peace of mind, I chose a sane path - if the company as an entity decides to do AWS, I will do my best to meet its goals. I’ve got all Professional and Specialty certs. It’s human nature. No purpose in tilting at windmills.
> For my peace of mind, I chose a sane path - if the company as an entity decides to do AWS, I will do my best to meet its goals...
Amen to that.
Any kind of performance-improvement or monitoring work I did for my applications was met with indifference or derision from managers. Because if only I had put the effort into cloud migration, we could have "Horizontal Pod Scaling" for performance and a fully managed Datadog console for monitoring the services.
Yeah, our company has recently decided to go all cloud (we're always so late with this stuff, especially now that a movement back from the cloud has already started emerging).
But for me it's totally ruining my job, to be honest. I like technology because it enables me to make things. I don't want to become an AWS or Azure specialist and learn what tickboxes the overlords at Amazon and Microsoft allow me to click. Screw that. That has nothing to do with technical knowledge; it's just about being a corporate drone. In my particular case it's Microsoft. Another problem with this is that they know they own everything in our company now, so they're starting to treat us as their employees, giving us things to do, like promoting their features inside our company. I mean, they're a vendor FFS. They should answer to us.
A lot of my colleagues are really motivated by this shit and doing all the certs. Many are even becoming Microsoft evangelists and get pissed if I criticise any of it. I'm looking for other options in the company now where I can actually do something technical again.
I understand there might be no bare metal work left in this company, but in that case I'll just want to do something else. I don't want to be some goon who links their entire career to using the products of one big tech company. And also, I think Microsoft and Amazon are horrible companies to work with as a customer. So as a techie, I just don't want to work on that anymore. What will remain is a lot of yes men who know how to click boxes.
And yeah we're not even doing anything smart or taking advantage of what the cloud offers. We just lifted all the physical stuff to compute instances that run 24/7.
My last team decided to hand-manage a Memcached cluster because running it unmanaged cost half as much as AWS's managed alternative. I don't know how much we really saved versus the opportunity cost of dev time, though. But it's close to negative.
One of the issues there is that picking a managed service deprives your people of gaining extra experience. There's a synergy over time, the more you manage yourself. But it's totally justified to pick a managed service if it checks out for your budget. The problem I often saw was bad decision making and bad opportunity-cost estimation. In other words, there's an opportunity cost to picking the managed service too, and the two more or less offset each other.
I wonder if there’s enough space for a Do Well By Doing Good company out there to provide a ladder from cheap self managed up to fully automated rolling upgrades.
Because it was mostly fine at first, but later we had some close calls when changes needed to be made on the servers. By the time we managed to mess up our hand-managed incremental restart process, we had several layers of cache, so accidentally wiping one didn't murder our backend, but it did throw enough alerts to cause a P2. And because we were doing manual bucketing of caches instead of consistent hashing, we hit the OOMKiller a couple of times while dialing it in.
But at this point it was difficult to move back to managed.
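(For context on the consistent-hashing point: the appeal is that adding or removing a cache node only remaps roughly 1/N of the keys instead of reshuffling every manually assigned bucket. A toy sketch of the idea - not what that team ran, and in practice you'd just use a memcached client library that does ketama-style hashing for you:)

    import bisect
    import hashlib

    class HashRing:
        def __init__(self, nodes, vnodes=100):
            # Place each node at many points on the ring so keys spread evenly.
            self._ring = sorted(
                (self._hash(f"{node}-{i}"), node)
                for node in nodes
                for i in range(vnodes)
            )
            self._hashes = [h for h, _ in self._ring]

        @staticmethod
        def _hash(value: str) -> int:
            return int(hashlib.md5(value.encode()).hexdigest(), 16)

        def node_for(self, key: str) -> str:
            # Walk clockwise to the first node at or after the key's position.
            idx = bisect.bisect(self._hashes, self._hash(key)) % len(self._ring)
            return self._ring[idx][1]

    ring = HashRing(["cache-a:11211", "cache-b:11211", "cache-c:11211"])
    print(ring.node_for("user:42"))   # deterministically maps to one node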
This feels closest to digital ocean’s business model.
That should be extremely low maintenance...
As long as we didn’t need to touch the machines, nearly zero. But you gotta touch them sometime.