I mean, we can definitely talk about simplicity/complexity in a fairly straightforward way when it comes to mathematical structures or data structures, in my opinion.
For instance, a binary tree that contains just a root node is clearly simpler than a binary tree with three nodes, if we take "simple" to mean "with fewer parts" and "complex" to mean "with more parts". Similarly, a "molecule" is more complex than an "atom".
This is a useful definition, I think, because when we write computer programs they're always written in some programming language, with a syntax that yields some kind of abstract syntax tree, so ultimately we'll always have _some_ kind of graph-like nature to the computer program, both syntactically and semantically, and surely graphs permit the same kind of complexity metrics.
I'm not saying measuring the number of nodes is _the_ way of getting at complexity; I'm just pointing out that there's no real difficulty in defining it.
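To make that concrete, here's a minimal sketch of what I mean by counting parts (the node shape and the counting scheme are just illustrative choices on my part):

```c
#include <stddef.h>

/* A toy binary tree node. */
typedef struct Node {
    struct Node *left;
    struct Node *right;
} Node;

/* One crude "complexity" measure: count the parts.
   A lone root scores 1; a three-node tree scores 3. */
size_t count_nodes(const Node *n) {
    if (n == NULL)
        return 0;
    return 1 + count_nodes(n->left) + count_nodes(n->right);
}
```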
Complexity means more stuff, and we simply take it as a premise that we can only fit so much stuff in our head at the same time.
I think my issue with this generalization is the assumption that the code itself is where complexity is measured and applied.
For example, the Quake Fast Inverse Square Root[1] takes into account nuances in how floating point numbers can be manipulated. The individual operations/actions the code takes (type casts, bit shifts, etc.) are simple enough, but understanding how it all comes together is where the complexity lies, not in the graph of operations that makes up the code.
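For reference, here's the function itself, lightly reformatted from the version documented in [1]; every individual line is simple, but nothing on the page tells you _why_ it computes an inverse square root:

```c
float Q_rsqrt(float number) {
    long i;
    float x2, y;
    const float threehalfs = 1.5F;

    x2 = number * 0.5F;
    y  = number;
    i  = *(long *)&y;                     /* reinterpret the float's bits as an integer */
    i  = 0x5f3759df - (i >> 1);           /* magic constant and a shift: initial guess */
    y  = *(float *)&i;
    y  = y * (threehalfs - (x2 * y * y)); /* one Newton-Raphson refinement step */
    return y;
}
```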
Tools like Rubocop for Ruby take the approach you mention, measuring cyclomatic and branch complexity to produce a numeric measure of your code's complexity. Whether that's useful is another conversation, I think; I usually find that enforcing rules based on those complexity scores ends up being fairly subjective.
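To sketch what those metrics actually count (a toy example of my own, not Rubocop's implementation): cyclomatic complexity is roughly the number of decision points plus one.

```c
/* Three decision points (the loop and two branches),
   so the cyclomatic complexity of this function is 4. */
int classify(const int *xs, int n) {
    int score = 0;
    for (int i = 0; i < n; i++) {  /* decision point 1 */
        if (xs[i] > 0)             /* decision point 2 */
            score++;
        else if (xs[i] < 0)        /* decision point 3 */
            score--;
    }
    return score;
}
```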
Going back to the article, the with-vs-without-abstractions visualization speaks to how you can aggregate this mathematical representation of the code and use it to tackle complexity. Abstractions let you take a group of nodes and treat them as a single node, allowing you to build super-graphs covering the underlying structure of each part of the program.
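Roughly like this (hypothetical names, just to illustrate the collapsing): the same operations, before and after being grouped behind one name.

```c
#include <math.h>

/* Before abstraction: every individual operation sits inline,
   so the caller's graph contains each one as a node. */
double speed_inline(double vx, double vy, double vz) {
    return sqrt(vx * vx + vy * vy + vz * vz);
}

/* After abstraction: the same operations grouped behind one name... */
static double magnitude(double vx, double vy, double vz) {
    return sqrt(vx * vx + vy * vy + vz * vz);
}

/* ...so at the call site the whole sub-graph collapses to a single node. */
double speed_abstracted(double vx, double vy, double vz) {
    return magnitude(vx, vy, vz);
}
```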
> both syntactically and semantically
I do want to cover semantic program complexity at some point as a deeper discussion; that side is quite interesting to me, including how to measure it.
[1]: https://en.wikipedia.org/wiki/Fast_inverse_square_root
While the tools you talk about sound interesting, to me this was more about an in-principle-possible measurement than something we'd actually carry out.
I think you're stating that "more stuff" in the program code and in the spec leads to more stuff to keep track of, and so we want to minimize complexity to maintain tractability?
I think that's reasonable :) More stuff is more stuff, no matter how simple or complex the individual pieces of the code are, or the reasoning for why they're that way.
If this is all so easy and obvious, why do all of the tools/metrics that measure software complexity suck?
Ease of definition doesn't equate to ease of measurement.