What an interesting and strange article. The author barely offers a definition of "systems thinking", only names one person to represent it, and then claims to refute the whole discipline based on a single incorrect prediction and the fact that government is bad at software projects. It's not clear what positive suggestions this article offers except to always disregard regulation and build your own thing from scratch, which is ... certainly consistent with the Works In Progress imprint.

The way I learned "systems thinking" explicitly includes the perspectives this article offers to refute it: a system model is useful but only a model; it is better used to understand an existing system than to design a new one; assume the system will react to resist intervention. I've found this definition of systems thinking extremely useful as a way to look reductively at a complex system - e.g. we keep investing in quality but have more outages anyway; maybe something is optimizing for the wrong goal - and to intervene to shift behaviour without tearing down the whole thing, something this article dismisses as impossible.

The author and I would agree on Gall's Law. But the author's conclusion to "start with a simple system that works" commits the same hubris that the article, and Gall, warn against - how do you know the "simple" system you design will work, or will be simple? You can't know either of those things just by being clever. You have to see the system working in reality, and you have to see if the simplicity you imagined actually corresponds to how it works in reality. Gall's Law isn't saying "if you start simple it will work", it's saying "if it doesn't work then adding complexity won't fix it".

This article reads a bit like the author has encountered resistance from people in the past from people who cited "systems thinking" as the reason for their resistance, and so the author wants to discredit that term. Maybe the term means different things to different people, or it's been used in bad faith. But what the article attacks isn't systems thinking as I know it, more like high modernism. The author and systems thinking might get along quite well if they ever actually met.

I didn't feel like he was refuting the whole discipline. Rather, he seems to admire Forrester and the discipline as a whole. The argument just seems to be that even with great systems thinking, you can't build a complex system from scratch, and that existing complex systems are often hard to fix.

The title of the article is an intentional conflation of "systems thinking" with "magical thinking", which is not a compliment.

Couldn't one interpret "magical systems thinking" as a fallacy that people may commit when applying systems thinking? More broadly, I find some of the comments here rather harsh, especially considering that many observations in the article ring intuitively true for anyone who's ever been exposed to bureaucracy at the meta-level.

One could interpret the title that way, but not consistently with the rest of the article, which includes assertions like "in the realm of societies, governments and economies, systems thinking becomes a liability".

I think there's plenty to agree with in the article's descriptions of failure and hubris. What the critical commenters are taking issue with is that the article blames those symptoms on a straw man. It's a persuasive article, not a historical review, so it's reasonable to debate its conclusion and reasoning as well as its supporting evidence.

Exactly, it's a fallacy of systems thinking, but it's not intrinsic to it. In fact, systems thinkers tend to understand that complex systems are, well, complex and not easy to reason about.

But this is a core idea in systems thinking, one that the author claims it ignores.

There is something about the Club of Rome's relation to systems thinking that is similar to Dijkstra's observation about BASIC and programming.

Articles debunking them are always full of fundamental misunderstandings about the discipline. (The ones supporting them are obviously wrong.) And people focusing on understanding the discipline never actually refer to them in any way.

That's because it's much easier to write an article dissing a discipline if you don't really understand any of it.

Time for me to author my devastating takedown of micro-botany…

> claims to refute the whole discipline based on a single incorrect prediction
I'm not so sure about "incorrect" even. The retrospectives have been generally positive.[0][1]

Citing economic growth as a counterexample is pretty silly, because in the Limits models many parameters look great right up until the collapse.[2]
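For anyone who hasn't seen this dynamic, here's a toy overshoot-and-collapse sketch. To be clear, the parameters and equations below are entirely made up for illustration and have nothing to do with the actual World3 model; the point is just that growth indicators can look healthy right up to the cliff:

```python
# Toy overshoot-and-collapse model (hypothetical parameters, NOT World3):
# population grows while per-capita harvest of a slowly-regenerating
# resource stays adequate, then crashes once the stock runs out.

def simulate(steps=300):
    resource, pop = 1000.0, 10.0
    history = []
    for _ in range(steps):
        harvest = min(resource, 0.1 * pop)   # demand scales with population
        resource = resource - harvest + 2.0  # small fixed regeneration
        per_capita = harvest / pop
        # grow while per-capita harvest exceeds subsistence (0.05), else shrink
        pop = max(1.0, pop * (1.0 + 0.5 * (per_capita - 0.05)))
        history.append((resource, pop))
    return history

history = simulate()
peak_pop = max(p for _, p in history)
final_pop = history[-1][1]
print(f"peak population {peak_pop:.0f}, final population {final_pop:.0f}")
```

Growth compounds smoothly for well over a hundred steps while the resource stock silently drains; by the time the drawdown becomes visible in the population curve, the collapse is already locked in.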

I would encourage everyone to see how the original authors describe their findings[2][3], rather than (potentially motivated) retellings.

[0] https://www.livescience.com/collapse-human-society-limits-to...

[1] https://www.theguardian.com/commentisfree/2014/sep/02/limits...

[2] https://www.youtube.com/watch?v=hhSpzQhvFS8

[3] https://www.youtube.com/watch?v=Pc3SWj-hjTE

Yeah, what they are attempting to do in the span of one short essay is equivalent to trying to discredit an entire field of inquiry. Even if you don't think the field is worth anything, it should be obvious that accomplishing that goal would take a lot of research and significant argumentation; this essay is lacking in both departments.

Maybe anecdotal, but in solution design I have often encountered designs that try to be generic for the sake of generality, and designs that complicate a simple repeatable task to accommodate arbitrary potential complications. I would argue that in many cases we like to introduce complexity to feel like we are doing something advanced. I like to design systems with the view that they do one thing only, and do that thing right, but there is, to your point, arbitrariness and art and judgement in deciding what the thing is.

Speaking of "starting with a simple system that works": Shapez (and now Shapez 2) is like Factorio for abstract geometric shapes and colors.

It's got all the essential elements of Factorio that make it so interesting and compelling, which apply to so many other fields from VLSI design to networking to cloud computing.

But you mine shapes and colors and combine them into progressively more complex patterns!

https://en.wikipedia.org/wiki/Shapez_2

What Factorio started has grown into an entire genre, and its ideas are being vigorously explored by various games.

> The author barely offers a definition of "systems thinking", only names one person to represent it, and then claims to refute the whole discipline based on a single incorrect prediction and the fact that government is bad at software projects.

All valid criticisms, but somehow it sounds exactly like something a member of an inept bureaucracy would say.

This seems lazy. It's ad hominem, except not even that, since you don't know which inept bureaucracy I'm supposedly defending. Is there any argument you couldn't level this accusation at?

Apologies. It wasn't intended as ad hominem. I was just describing the general vibe of your comment, at least as I perceived it.

When an inept bureaucracy is put in the spotlight, usually someone pops up to defend how much important work they are doing, how the things they deal with are just so complicated, and how the criticisms are unfair and unfounded.

> assume the system will react to resist intervention

Systems don't do that. Only constituents who fear particular consequences do.

Systems also don't care about levels of complexity. It's insanely hard to actually break systems that are held together by nothing more than the "what the fuck is going on, let's look into that" kind of attention. Hours, days, or weeks later, things run again. Billions lost. Oh, we wish ...

At the end of the day, the term "systems thinking" is overloaded with all the parts invented by so-called economists and "the financial industry". That makes me chuckle every time, now that it's 2025: oil-rich countries have been in development for decades, the advertisement industry is factory-farming content creators, and economists and multi-billionaires want more tikktoccc and instagwam to get into the backs of teen heads.

If you are a SWE, systems architect, or anything in that sphere, please act like you care about the people you are building for. Take some time off if you can, and take care of what must be taken care of. It's just systems, after all.

> Systems don't do that. Only constituents who fear particular consequences do.

These are part of a system. Ignoring these components gives you an incomplete model.

(All models are incomplete, by definition, but ignoring constituents that have a major influence greatly reduces the effectiveness of your model)

You make a fine point. My simplified version of it is that there is no such thing as an isolated system. Things change. A system optimized for one environment is likely to fail when things change. Most of the hugely successful firms of today focus more on controlling their environment than on developing a capacity to adapt to unforeseeable consequences of unforeseen changes in their environment, even the ones that they cause themselves.

I think we were not using the same definition of “system” :)

> there is no such thing as an isolated system.

Very true.

Look no further than evolutionary biology: you see this all the time, where extinctions occur because the environment changes such that the system is no longer optimal.

> where extinctions occur because the environment changes such that the system is no longer optimal

What if we looked at the extinct species as constituents that have been removed because they were obsolete in the system? That way, the system remains optimal, without resisting change.

The system of humanity requires a lot. We used to say "survival of the fittest", which really meant survival of the fittest and the "most aware": being able to distinguish which survival strategy is the most viable for a given organism.

Fight, flight, freeze, dominance, independence, submission, DIY, DOBUY. The latter are especially interesting, given how reduced information about the requirements and sensitivities of the individual body can cripple your organs to a point that benefits some interest group more than it benefits you. In other words: someone can make sure you are stupid enough to be abused for some specific task until you can be disposed of. At this point we don't know whether the system will survive more than one period because of the interest group, or suffer within one or more periods because of it.

In evolutionary biology, more symbiotic organisms and systems survived a lot longer than those that were less symbiotic, on scales that modern humans can't yet put into adequate numbers.

Isolated systems do exist. They can be isolated, and they can self-isolate, for various reasons and by various means. This happens even in species/systems we mostly consider unconscious, though definitely sentient and aware.

Wear and tear and maintenance, leeching and seeding, putting info and questions into words and lurking: none of these really attach one system to another by default, by design, or via behavior, reward, and punishment. The rules go beyond that and stretch over longer time frames than we account for.

Thinking out loud here, btw.

> > assume the system will react to resist intervention
>
> Systems don't do that. Only constituents who fear particular consequences do.

For example, the human body is pretty decent at maintaining a fixed internal temperature.

Cities supposedly maintain a fairly stable transit time even as transit infrastructure improves.
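That kind of homeostasis is just a negative feedback loop, and it's why an "intervention" gets absorbed rather than resisted by any conscious constituent. A minimal sketch with made-up numbers (not actual physiology):

```python
# Minimal negative-feedback sketch (hypothetical gain and set point,
# not physiology): a proportional controller pulls the state back
# toward its set point, so a one-off intervention gets absorbed.

def settle(setpoint=37.0, disturbance=5.0, gain=0.3, steps=100):
    temp = setpoint + disturbance          # intervention: push the state off target
    for _ in range(steps):
        temp += gain * (setpoint - temp)   # correct a fraction of the error each step
    return temp

print(settle())  # converges back to the set point
```

No component of the loop "fears consequences"; the correction falls out of the structure.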

The article seems to think that systems thinking only applies below a certain scale. But even bringing up the bullwhip effect, and talking about how it appears in certain kinds of systems, is itself systems thinking - just not at the subcomponent level, which doesn't show it. Systems thinking is about interactions and context.
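And the bullwhip effect is itself easy to reproduce with a toy simulation. The inventory policy and numbers below are entirely hypothetical; the point is just that order variability amplifies as you move upstream, which is an emergent property of the interactions, not of any single tier:

```python
# Toy bullwhip sketch (hypothetical inventory policy): each tier keeps a
# safety stock and orders (observed demand) + (target - inventory).
# A one-time step in consumer demand is amplified as it travels upstream.

def bullwhip(tiers=4, steps=30):
    target = 20.0
    inventory = [20.0] * tiers
    demand = [4.0] * 5 + [8.0] * (steps - 5)  # consumer demand doubles at t=5
    peak_order = [0.0] * tiers
    for t in range(steps):
        order = demand[t]
        for i in range(tiers):
            shipped = min(inventory[i], order)
            inventory[i] -= shipped
            # naive policy: replace what was ordered plus restore safety stock
            placed = max(0.0, order + (target - inventory[i]))
            peak_order[i] = max(peak_order[i], placed)
            inventory[i] += placed  # simplification: instant delivery from upstream
            order = placed          # this tier's order is the next tier's demand
    return peak_order

print(bullwhip())  # peak order size grows at every tier upstream
```

The consumer never orders more than 8 units, yet the furthest-upstream tier sees orders several times that, purely from each tier's locally reasonable restocking rule.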

Where are the limits of optimization? There is no such thing as a "system"; these are arbitrary concepts. Where does any system end? Odum learned that the hard way, and I suspect CS is simply models of models that hide the interconnected nature of things as a way of isolating values, making money, and claiming the system works.

The deeper question is why we create models of a reality in which all models are wrong, but some extract value long enough to create both ecological collapse and poverty. These are the end states, or even the goals, of models in a universe with limited resources on the surfaces of planets.

Each optimization is designed to create dystopic conditions. This is obvious.