I think my issue with this generalization is that it assumes the code itself is where complexity is measured and applied.
For example, the Quake Fast Inverse Square Root[1] takes into account nuances in how floating-point numbers can be manipulated. The individual operations the code performs (type casts, bit shifts, etc.) are simple enough, but the complexity lies in understanding how they all come together, not in the graph of operations that makes up the code.
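For reference, here is the function roughly as it appears in the Quake III Arena source, with the original comments swapped for descriptive ones (it assumes a 32-bit long, and the pointer-based type pun is technically undefined behavior in modern C, where memcpy into a uint32_t is the portable version):

    float Q_rsqrt(float number)
    {
        long i;
        float x2, y;
        const float threehalfs = 1.5F;

        x2 = number * 0.5F;
        y  = number;
        i  = * ( long * ) &y;                       /* reinterpret the float's bits as an integer */
        i  = 0x5f3759df - ( i >> 1 );               /* the shift halves the exponent; the constant corrects the bias */
        y  = * ( float * ) &i;                      /* reinterpret back: a rough first guess at 1/sqrt(number) */
        y  = y * ( threehalfs - ( x2 * y * y ) );   /* one Newton-Raphson iteration sharpens the guess */
        return y;
    }

Each line is trivial on its own; the trick is knowing that a float's bit pattern approximates its logarithm, so the shift-and-subtract computes an estimate of y^(-1/2) in integer arithmetic, and one Newton-Raphson step then refines it.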
Tools like Rubocop for Ruby take an approach like you mention, measuring cyclomatic and branch complexity to derive a mathematical measurement of the code's complexity. How useful that is, I think, is another conversation; I usually find enforcing rules based on that complexity measurement to be fairly subjective.
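As a rough illustration of what such tools count (a sketch of the general metric, not Rubocop's exact algorithm; the function and its names are made up), cyclomatic complexity is essentially one plus the number of decision points:

    #include <stdbool.h>
    #include <stddef.h>

    /* Four decision points (two ifs, the for condition, and the
     * short-circuit &&) give a cyclomatic complexity of 5, even
     * though each individual branch is trivial to read. */
    bool contains_small_negative(const int *xs, size_t n, int limit)
    {
        if (xs == NULL)                       /* decision point 1 */
            return false;
        for (size_t i = 0; i < n; i++) {      /* decision point 2 */
            if (xs[i] < 0 && xs[i] > -limit)  /* decision points 3 and 4 */
                return true;
        }
        return false;
    }

A measure like this sees only branch structure; it would score the branch-free Q_rsqrt above as nearly straight-line code, which is exactly the mismatch you describe.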
Going back to the article, the visualization of the code with vs. without abstractions can cover aggregating the mathematical representation of the code and how to tackle complexity. Abstraction lets you take a group of nodes and consider them as a single node, allowing you to build super-graphs covering the underlying structure of each part of the program.
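A minimal sketch of that collapse on a toy adjacency matrix (the example graph and all names are invented for illustration): edges internal to the group disappear, and edges crossing the group boundary re-attach to the super-node, which is exactly what an abstraction boundary hides and exposes.

    #include <stdio.h>
    #include <stdbool.h>

    #define N 5  /* nodes 0..4 in the original graph */

    /* Collapse the nodes marked in `group` into one super-node.
     * The super-node becomes node 0 of the output graph and the
     * remaining nodes are renumbered after it. Edges internal to
     * the group vanish; edges crossing the boundary re-attach. */
    static int collapse(bool g[N][N], bool group[N], bool out[N][N])
    {
        int map[N];  /* old index -> new index (members all map to 0) */
        int next = 1;
        for (int i = 0; i < N; i++)
            map[i] = group[i] ? 0 : next++;

        for (int i = 0; i < next; i++)
            for (int j = 0; j < next; j++)
                out[i][j] = false;

        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (g[i][j] && map[i] != map[j])  /* drop intra-group edges */
                    out[map[i]][map[j]] = true;

        return next;  /* node count of the collapsed super-graph */
    }

    int main(void)
    {
        /* A toy call graph: 0->1, 1->2, 2->3, 3->4, 1->4 */
        bool g[N][N] = {{ false }};
        g[0][1] = g[1][2] = g[2][3] = g[3][4] = g[1][4] = true;

        /* Treat nodes 1..3 as one abstraction (a module's internals). */
        bool group[N] = { false, true, true, true, false };

        bool super[N][N];
        int m = collapse(g, group, super);

        for (int i = 0; i < m; i++)       /* prints: 0 -> 2 and 1 -> 0 */
            for (int j = 0; j < m; j++)
                if (super[i][j])
                    printf("%d -> %d\n", i, j);
        return 0;
    }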
> both syntactically and semantically
I do want to cover semantic program complexity at some point as a deeper discussion. I find that side quite interesting, as well as the question of how to measure it.
While the tools you talk about sound interesting, to me this was more about a measurement that's possible in principle than something we'd actually carry out.
I think you're stating that "more stuff" in the program code and in the spec leads to more stuff to keep track of, and so we want to minimize complexity to maintain tractability?
I think that's reasonable :) More stuff is more stuff, no matter how simple or complex the individual pieces of the code, or the reasoning for why they are that way.