> The direct cost is the easy part

I don't think it is easy. I see most organizations struggle with the fact that everything is throttled in the cloud: CPU, storage, network. Tenants often discover large amounts of activity they were previously unaware of that contributes to the usage and cost. And there may be individuals or teams creating new usage that is grossly impacting their allocation. Did you know there is a setting in MS SQL Server that impacts performance by an order of magnitude when sending/receiving data from the Cloud to your on-premises servers? It's the default in the ORM-generated settings.

Then you can start adding in the Cloud extras, such as incomprehensible networking diagrams that are probably non-compliant in some way (guess which ones!). And security? What even is it?

Yes. Cloud sellers knew this: a happy path for the flagship project, the shiny new object, and some additional services. After the point of no return, what usually happens is that the cloud becomes a replica of the bare-metal environment.

As a Computer Science dude and former C64/Amiga coder in senior management of a large international bank, I saw first-hand how costs balloon simply because the bank recreated and replicated its bare-metal environment in the cloud.

So costs increased while nothing changed. Imagine that: fixed resources, no test environments, because virtualisation was out of the equation in the cloud due to policies and SDLC processes. And it goes on: automated releases? Nope, a request per email with an attached scan of a paper document as sign-off.

Of course you can buy a Ferrari and use it as a farm tractor. I bet it is possible with a little modification here and there.

Another fact is that lock-in plays a huge role. Once you are in, no matter what you subscribe to, magically everything suddenly slows down a bit. But since I am a guy who uses a time tracker to test and monitor apps, I could easily draw a line even without utilizing my math background: enforced throttling.

There is a difference between 100, 300 and 500 ms for SaaS websites. People without prior knowledge of perceptual psychology feel it but cannot put their finger on the wound. But since we are in the cloud, suddenly a cloud manager will offer you a speed upgrade, catered just to your needs! Here, have a free trial period of 3 months and experience the difference for your business!

I am a bit opinionated here and really suspect that cloud metrics analysed the bank's traffic and service usage in order to willingly slow it down in a way only professionals could detect. Did they promise to be lightning fast in the first place? No, that's not what the contract says. We were fed speed, but "normal" speed was what was agreed upon. It is like getting a Porsche as a free rental car when you take your VW Beetle to the dealer for a checkup. Hooked, of course. A car is a car, after all. How do you boil a frog? Slowly.

Of course there will be more sales, and this is the Achilles' heel of every business; indifferent customers are easy prey.

It is a vicious cycle, almost like taxation. You cannot hide from it, there is no escape, and it is always on the rise.

Ferrari actually makes tractors.

That's Lamborghini, isn't it?

IIRC, he only got into making cars because Enzo Ferrari disrespected him.

[deleted]

Does Lamborghini still do?

> Did you know there is a setting in MS SQL Server that impacts performance by an order of magnitude when sending/receiving data from the Cloud to your on-premises servers? It's the default in the ORM generated settings.

Sounds interesting, which setting is that?

Multiple Active Result Sets (MARS). During large query responses or bulk loads, "full" packets cause an additional packet to be sent over the wire with about five bytes to hold the MARS "wrapper". The net result is one full packet and one nearly empty packet on the wire, alternating. At LAN latency the performance impact is negligible. However, at the higher latency between AWS and your premises, it has a terrible performance impact.

MARS isn't strictly needed for most things. Some features that require it are ORM (EF) proxies and lazy loading. If you need MARS, there are third-party "accelerators" that work around this madness.

"MARS Acceleration significantly improves the performance of connections that use the Multiple Active Result Sets (MARS) connection option."

https://documentation.nitrosphere.com/resources/release-note...

Yeah, honestly most lazy loading and EF proxy use I have seen is more aptly named lazy coding instead. There are times when you might be running 3-4 queries to project some combination of them and want to do that in parallel, but in general if you have lazy loading enabled in EF you are holding up a sign that says “inconsistent reads happening in here”.

I use and love EF, but generally leave MARS off when possible because it is responsible for more trouble than performance gains nearly every time.

Is that not a client connection flag? MARS does not require a setting change on the server?

I think you may have misinterpreted what he said. I can see why it seems to imply a server setting, but that isn't the case.

> Did you know there is a setting in MS SQL Server that impacts performance by an order of magnitude when sending/receiving data from the Cloud to your on-premises servers? It's the default in the ORM generated settings

You are right. For some reason when I initially sped through the post I read it as if RDS was doing something wrong.

It is a setting in the connection string in the client app.config: "MultipleActiveResultSets=true;".
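For context, here is a minimal app.config sketch with MARS explicitly disabled; the server, database, and connection-string name are placeholders, not anything from a real deployment:

```xml
<configuration>
  <connectionStrings>
    <!-- Placeholder server/database; the key part is MultipleActiveResultSets=False -->
    <add name="Default"
         connectionString="Server=tcp:example.database.windows.net;Database=MyDb;Integrated Security=True;MultipleActiveResultSets=False;"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>
```

If the ORM tooling generated the string with MARS on, flipping it to False (and confirming nothing relies on lazy loading over a shared connection) is the whole change.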

"Multiple Active Result Sets=true;"

Would love to know as well.