We use GCP at work, and it works well for what we use it for (VMs, container storage, cloud file storage). I wish they would stop deprecating stuff, though; it just causes developer busy work with no additional value.

> it just causes developer busy work with no additional value.

Anyone remotely familiar with Google as a third-party developer will notice the pattern: this will ramp up until almost your entire job is simply dealing with their changes, most of which will be not-quite-actual-fixes to their previous round of changes.

This is not unique to Google, but it is a strategy employed to slow down development at other companies, and so help preserve the moat.

Old-timers who date back to when Joel Spolsky's early musings on the business of software development were fixtures on the HN front page will remember him using the phrase "fire and motion" for Microsoft's old strategy of constantly making changes so that everyone trying to keep up was—like the Red Queen—running as fast as they could but not getting anywhere.

> Watch out when your competition fires at you. Do they just want to force you to keep busy reacting to their volleys, so you can’t move forward?

> Think of the history of data access strategies to come out of Microsoft. ODBC, RDO, DAO, ADO, OLEDB, now ADO.NET – All New! Are these technological imperatives? The result of an incompetent design group that needs to reinvent data access every goddamn year? (That’s probably it, actually.) But the end result is just cover fire. The competition has no choice but to spend all their time porting and keeping up, time that they can’t spend writing new features. Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and reimplementing and fixing bugs that crop up only on Windows XP.

—Joel Spolsky, "Fire and Motion," 2002

https://www.joelonsoftware.com/2002/01/06/fire-and-motion/

There’s a solution to the Fermi paradox somewhere in this string of comments.

> ODBC, RDO, DAO, ADO, OLEDB, now ADO.NET

Notably, all of these still work, even if they're not getting new updates.

> it is a strategy employed to slow down development at other companies, and so help preserve the moat.

Worked at Google for 7 years, and your post reminds me it is time to share a secret: it is Koyaanisqatsi* and people's base instincts unbridled, no more. There is no quicker route to irrelevancy than being the person who cares about something from last year's OKRs.

* To be clear, s/Koyaanisqatsi/too big to exist healthily and yet it does exist/ -- edited this in after I realized it's unclear even if you've watched the movie and know the translation of the word.

If they actually incentivized a group to support stability and continuity among enterprise customers, they would probably be able to diversify their revenue away from ads. Microsoft understands this…

The real sick thing is it doesn't matter, right? Like we're commenting on an article about how they won the day yesterday and Cloud revenue continues to skyrocket.

To be clear, I agree with you, and am puzzled by the lack of consequences from the real world for the stuff I saw. But that was always the mystery of Google to me, in a nutshell: How can we keep getting away with this?

> How can we keep getting away with this?

A large part of that is the Google-are-super-geniuses PR effort. Anyone who points out to their boss that Google's products don't reflect this faces having their own credibility reduced instead.

If it's so obvious, and Google supposedly knows this internally, and can obviously tell that some people avoid Google because it's so quick to sunset services, why aren't they doing anything about it?

Imaginary conversation between an honest VP and earnest year-zero me; here's what the VP says: "We definitely care about deprecations: now tell me how to accomplish that with Sundar's Focus™* Agenda over the last 2 years."

* no net headcount increases, random firings, and any new headcount should be overseas. i.e. we have the same # of people we did in 2021 with 50% more to do.

Because it’s not meaningfully hurting their bottom line.

I used to think that made sense (as a sibling comment mentions, Spolsky's "fire and motion" thesis)... until I worked at a large-ish tech company whose internal platforms also kept doing this. Heck, the platform I owned also underwent a couple of cycles of this. And so a large part of our work was just running the Red Queen's race of deprecations and migrations.

So it was definitely not "fire and motion," as there was no competition. I think platforms genuinely need to evolve as new use-cases onboard and technology progresses, and so the assumptions underpinning the platform's design and architecture no longer hold.

However, I do think a small part of the problem was also PDD: "Promotion Driven Development."

One approach to prevent being hit by "stack rot" is to build everything on top of plain Linux VMs.

Those rarely change.

Unfortunately the software deployed on top of them will.

So you either:

1) postpone all your updates for years until a bad CVE hits or some application goes end of life and you need to update, at which point you're screwed because updating has become a massive exercise, or

2) do regular updates and patches to the entire stack, including Linux, in which case you're in the same position you were in before: running on the stack-rot treadmill.

So you might’ve moved the rot to a different place, but I don’t know if you’ve reduced any of it. I’ve owned stuff deployed off of vanilla VMs and I actually found it harder to maintain because everything was a one-off.
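For what it's worth, option 2 can be at least partly automated on a stock Debian/Ubuntu VM; here's a minimal sketch using the standard unattended-upgrades package (the package, config file, and directives are the stock Debian ones, everything else is illustrative):

    # Install the standard automatic-update tooling (assumes an apt-based distro).
    sudo apt-get update
    sudo apt-get install -y unattended-upgrades

    # Turn on daily package-list refresh and automatic (security) upgrades.
    sudo tee /etc/apt/apt.conf.d/20auto-upgrades <<'EOF'
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";
    EOF

    # Check what would be upgraded without actually applying anything.
    sudo unattended-upgrade --dry-run --debug

That only keeps the OS layer patched, of course; the applications on top still need their own update routine.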

My rationale for staying up to date aggressively is that it minimizes integration work. Basically, integration work multiplies; it doesn't just accumulate. So the further you fall behind, the more that can break when you finally do upgrade. And you create needlessly more work related to testing and fixing all that. Upgrading a system that was fully up to date until a few days/weeks ago is generally easy. There's only so much that changes. Doing the same to something that was last touched five years ago can be a bit painful. APIs that no longer exist. Components that are no longer supported. Code that no longer compiles. Etc.

I see a lot of teams being overly conservative about keeping their stuff up to date, running things that are years out of date, with lots of known and already-fixed bugs of all varieties, performance issues that have long since been addressed, etc. All in the name of stability.

I treat anything that isn't up to date as technical debt. If an update breaks stuff, I need to know, so I can either deal with it or document a workaround or a (usually temporary) version rollback. While that does happen, it doesn't happen a lot. And I prefer knowing about these things because I've tried them over being ignorant of the breakage because I haven't updated anything in years. That just adds to the hidden pile of technical debt you don't even know you have. Ignorance is not an excuse for not dealing with your technical debt. Or worse, compounding it by building on top of it and creating more technical debt in the process.

Dealing with small changes over time is a lot less work than dealing with a large delta all at once. It's something I've been doing for years. If I work on any of my projects, the first thing I do is update dependencies. Make sure stuff still works (tests). Make sure deprecated APIs are dealt with.
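Concretely, that routine can be as simple as the following sketch, assuming a hypothetical Python project with a requirements.txt and a pytest suite (the tools and file names are just one example stack, not a prescription):

    # See what has moved since the last time I touched the project.
    pip list --outdated

    # Pull in the newest versions the requirements file allows.
    pip install -U -r requirements.txt

    # Run the tests, turning deprecation warnings into hard failures
    # so deprecated APIs surface now rather than at removal time.
    python -W error::DeprecationWarning -m pytest

The same shape works with npm, Maven, Cargo, Go modules, etc.: update, test, fix, commit, repeat in small increments.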

If you’re willing to put in the maintenance work, you’ll probably be in good shape whether you’re on plain VMs or a snazzy cloud provider managed service.

If the business understands that you need time to work on these things :')

RedShift1 complains that GCP is "deprecating stuff". I wouldn't put doing regular updates in the same problem category as having to deal with part of your stack disappearing.

To me, "I wish they would stop deprecating stuff" sounds like any part of the stack has something like a 1% or even 10% chance of being shut off in any given year.

I would expect that by carefully choosing your stack from open source software in the Debian repos, you can bring the probability of any given part being gone with no successor to less than 0.1% per year. As an example: could you imagine Python becoming unavailable in 2026? Or SQLite? Docker?

Fair - I’m not very experienced with GCP, but I’ve seen AWS keep the deprecation treadmill moving as well.

In general, if I’m going to be maintaining stuff, I guess I’d rather be maintaining cloud than like… old Solaris or something.

But then you have to maintain it yourself, which overall will usually be more work than just migrating from time to time.

Your cost just shifts elsewhere, then. Rolling your own stack from Linux up is a big endeavor too.

Obligatory Steve Yegge rant about the Google deprecation treadmill:

https://steve-yegge.medium.com/dear-google-cloud-your-deprec...

> Dear RECIPIENT,

> Fuck yooooouuuuuuuu. Fuck you, fuck you, Fuck You. Drop whatever you are doing because it’s not important. What is important is OUR time. It’s costing us time and money to support our shit, and we’re tired of it, so we’re not going to support it anymore. So drop your fucking plans and go start digging through our shitty documentation, begging for scraps on forums, and oh by the way, our new shit is COMPLETELY different from the old shit, because well, we fucked that design up pretty bad, heh, but hey, that’s YOUR problem, not our problem.

> We remain committed as always to ensuring everything you write will be unusable within 1 year.

> Please go fuck yourself,

> Google Cloud Platform

Great read, thanks for sharing

> it works well for what we use it

How do you handle the slow console?

I have a bunch of scripts that do the work for me, so I can just run those in the background and do something else; for one-off tasks I grumble and mumble a bit.
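For example, the kind of wrapper I mean is nothing more than a few read-only gcloud calls glued together (the project ID, output file, and the specific commands here are placeholders, just for illustration):

    #!/usr/bin/env bash
    # Hypothetical helper so routine checks never require opening the console.
    set -euo pipefail

    PROJECT="my-project"   # placeholder project ID

    # Which VMs are currently running, and where?
    gcloud compute instances list \
        --project "$PROJECT" \
        --filter="status=RUNNING" \
        --format="table(name,zone,status)"

    # Dump the same data as JSON for other scripts to consume.
    gcloud compute instances list --project "$PROJECT" --format=json > instances.json

It's still slow per call, but at least it runs unattended while I do something else.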

Microsoft, especially the Partner site for the Microsoft Store and so on, is also exceptionally slow. Regular Azure isn't much better.

I guess the big providers learn from each other.

Do everything in terraform, never touch the console?

Oh my god, I thought I was the only one. Everyone says that you should be using the command line anyway, but I'd rather use a GUI, and this one is disgustingly slow.

The CLI also takes like 5 seconds to do anything

It surprises me that it's built in Python (as is AWS'), which doesn't seem like a very appropriate language for exactly this reason. Go would seem much more apposite.

The execution speed of the language the CLI is written in doesn't matter. Any difference is dwarfed by the slowness of network calls, specifically their API response times.

These CLIs are just thin request clients a la curl. The code execution time is peanuts in comparison to the request latency

It's not negligible compared to a single request. Even `gcloud --help` takes over a second on this machine; actually getting it to do a simple list request barely takes any longer over any reasonable connection.

Plus, Python is notoriously not easy to deploy; a Go (or Rust or whatever) binary would have almost no dependencies to worry about.


It's not just busy work; it's like being chained to a sandpaper treadmill, especially in some specific "hot" domains like AI. It feels like you can build something with it, and 3 months later you've got to update your code because half the dependencies are deprecated.

In AI, whatever you wrote is going to be deprecated in three months, and in six months SOTA LLMs will one-shot it directly. That's what it means for a domain to be "hot". AI is currently the largest and most intense global R&D project in the history of humanity. So for this one field, your complaint makes no sense.

It helps companies develop a vendor-agnostic culture; it's a great thing.