> My thesis so far is something like "you should try to write little proofs in your head about your code." But there's actually a secret dual version of this post, which says "you should try to write your code in a form that's easy to write little proofs about."

Easier said than done. It is certainly feasible on greenfield projects where all the code is written by you (recently), and you have a complete mental model of the data layout and code dependencies. It's much harder to prove stuff this way when you call foo(), bar() and baz() across unit boundaries, when they modify global state and are written by different developers.

A part of "being a good developer" is being able to evolve systems in this direction. Real systems are messy, but you can and should be thoughtful about:

1. Progressively reducing the number of holes in your invariants

2. Building them such that there's a pit of success: engineers coming after you are aware of the invariants and are "nudged" toward the pathways that maintain them. Documentation can help here, but how you structure your code also plays a part (and is, in my experience, the more important factor). See the sketch after this list.
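
To make that "pit of success" idea concrete, here's a minimal sketch (all names hypothetical, not from the article): route every construction and mutation through a type that re-checks the invariant itself, so the path that maintains it is the only path available.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NonEmptyCart:
    """Invariant: a cart always holds at least one item.

    The only ways to build or change a cart re-check the invariant,
    so engineers who come after you can't silently break it.
    """
    items: tuple

    def __post_init__(self):
        if not self.items:
            raise ValueError("cart must contain at least one item")

    def add(self, item) -> "NonEmptyCart":
        return NonEmptyCart(self.items + (item,))

    def remove(self, item) -> "NonEmptyCart":
        remaining = tuple(i for i in self.items if i != item)
        return NonEmptyCart(remaining)  # raises if removal would empty the cart
```

Proving something about downstream code now only requires reading this one class, not auditing every call site.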

Yeah, this is what I was trying to get at with that notion of "proof-affinity"; imo a well-structured codebase is one in which you can easily prove stuff to yourself about code you didn't necessarily write.

I think your frustration is illustrative, actually: the reason mutable global state is bad is that in order to prove that a piece of code reading it is correct, you have to know what _the entire rest of the program_ is doing.

If, instead, you make the global variable immutable, make the state variable into a function argument, or even wrap the mutable global state in some kind of helper class, then you only need to know what the callers of certain functions are doing. The visibility of those functions can be limited. Caller behavior can be further constrained with assertions inside the function. All of these (can) make it easier to prove that the reader is correct.
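
As a rough sketch of those three moves (hypothetical names throughout, not any particular codebase):

```python
# Before: any code anywhere may write this global, so proving a
# reader correct means knowing what the whole program does.
RETRY_LIMIT = 3

def should_retry(attempt):
    return attempt < RETRY_LIMIT

# Option 1: make the state a function argument; now only the callers
# matter, and an assertion constrains what they may pass.
def should_retry_arg(attempt, limit):
    assert limit > 0, "callers must pass a positive limit"
    return attempt < limit

# Option 2: wrap the mutable state so every write goes through one
# choke point that can validate, log, and be searched for.
class RetryPolicy:
    def __init__(self, limit=3):
        self._limit = limit

    def set_limit(self, limit):
        assert limit > 0, "limit must stay positive"
        self._limit = limit

    def should_retry(self, attempt):
        return attempt < self._limit
```

In the wrapped version, an auditor can enumerate every write just by finding the callers of set_limit.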

I think most programmers already do this, actually; they just don't think of their decisions this way.

It’s a good direction to follow, but it can only get you so far. Some pieces of code do naturally evolve into a functional formalism, while others are inherently imperative. Usually, the top levels of your program (event loop) are imperative and deal with stateful “devices” like IO, the screen and storage subsystems. The “leaf” functions from the call graph can be functional, but you still can’t reason about the whole program when it is imperative at the top.

Yes, but if you can actually drive all that out to the very top level and leave everything else clean, that's a huge improvement over how many programs are structured.
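
A minimal "functional core, imperative shell" sketch of what that might look like (hypothetical example, not from the thread):

```python
def next_state(state, event):
    """Pure 'leaf' function: easy to prove properties about and to test."""
    if event == "increment":
        return state + 1
    if event == "reset":
        return 0
    return state

def main():
    """Imperative shell: all IO and mutation live here, at the very top."""
    state = 0
    while True:
        event = input("event> ")   # stateful device: stdin
        if event == "quit":
            break
        state = next_state(state, event)
        print(state)               # stateful device: stdout

if __name__ == "__main__":
    main()
```

All the stateful devices are confined to main()'s loop; next_state can be reasoned about entirely in isolation.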

> It's much harder to prove stuff this way when you call foo(), bar() and baz() across unit boundaries, when they modify global state and are written by different developers.

I think this reinforces the article's point.

Code like this is much more likely to contain bugs, and harder to maintain without introducing more, than programs written from the outset with "provability" as a goal.

And in fact, when you look at formally proven code, 80% of the shtick is reducing the complexity of mutation to something tractable. That's valuable whether or not you then actually go through the process of formally proving it.

I just finished cave diving in a code base that's had a very old ticket to clean up the mess.

I went in with 3 tickets in mind to fix. I found half a dozen more while I was down there, and created 2 myself. I don't know if I got off easy or I was distracted and missed things.

The other project I'm working on is not dissimilar. Hey did you guys know you have a massive memory leak?

This made me think about how I constantly rethink my approach to coding, learning how to do it “right” over and over.

I wonder if someone like John Carmack is just like… “yeah, I’ve got it,” or if he also constantly feels like he was bad 5 years ago and is doing it “better” now.

This is an important point I like to mention to people. Thinking back to old code you wrote and seeing how you would do it better now is a sign of growth; if that stops happening after a certain point, it's a sign you've stopped growing. Given that the best programmers in the world still don't write perfectly bug-free code, I have no illusions: there will always be plenty of room for me to continue improving as a programmer, so I consider having those thoughts an unequivocally good thing. I'd honestly be a bit wary of anyone who didn't think they needed to learn or grow any more as a programmer!

> when they modify global state and are written by different developers.

Once cancer has metastasized, the treatment plans are more aggressive and less pleasant. The patient can still be saved in many cases, but that depends on a lot of external factors.

Even then, you can build a map of "safe" and "unsafe" locations in the codebase, i.e., a mental model of which functions modify global state and which don't.
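
One low-tech way to turn that mental model into an explicit, checkable map (a hypothetical helper, nothing standard):

```python
MUTATES_GLOBALS = set()

def mutates_global_state(fn):
    """Marker decorator: registers functions that touch global state,
    building an explicit 'unsafe map' of the codebase."""
    MUTATES_GLOBALS.add(f"{fn.__module__}.{fn.__qualname__}")
    return fn

_cache = {}

@mutates_global_state
def warm_cache(key, value):
    _cache[key] = value

def lookup(key):
    # Unmarked means believed safe; reviewers can audit the registry.
    return _cache.get(key)

# e.g. print(sorted(MUTATES_GLOBALS)) in a test to pin the unsafe surface
```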