Another thought: many (most?) of these "rules" predate widespread distributed computing. I don't think Knuth had in mind a loop that is reading from a database at 100ms per iteration.

I've seen people write some really head-shaking code that makes remote calls in a loop even though the calls don't actually depend on each other. I wonder to what extent they are thinking "don't bother with optimization / speed for now".
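A minimal sketch of the pattern (all function names here are invented for illustration): awaiting each independent call in sequence versus issuing them concurrently. The "fix" is not optimization so much as not serializing work that has no dependencies.

```python
import asyncio

async def fetch_user(user_id):
    # Stand-in for a remote call; the sleep simulates network latency.
    await asyncio.sleep(0.01)
    return {"id": user_id}

async def fetch_sequential(ids):
    # The head-shaking version: each await blocks the next call,
    # even though no call uses another call's result.
    return [await fetch_user(i) for i in ids]

async def fetch_concurrent(ids):
    # The calls are independent, so start them all and wait once.
    # asyncio.gather preserves input order in its results.
    return await asyncio.gather(*(fetch_user(i) for i in ids))

results = asyncio.run(fetch_concurrent(range(5)))
```

With real network calls, the sequential version costs roughly N × latency while the concurrent one costs roughly one latency, which is the difference being complained about above.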

First, I agree with what you're saying.

But second, I'd remove "optimization" from consideration here. The code you're describing isn't slow, it's bad code that also happens to be slow. Don't write bad code, ever, if you can knowingly avoid it.

It's OK to write good, clear, slow code when correctness and understandability are more important than optimizing that particular bit. It's not OK to write boneheaded code.

(Exception: After you've written the working program, it turns out that you have all the information to make the query once in one part of the broader program, but don't have all the information to make it a second time until flow reaches another, decoupled part of the program. It may be the lesser evil to do that than rearrange the entire thing to pass all the necessary state around, although you're making a deal with the devil and pinky swearing never to add a 3rd call, then a 4th, then a 5th, then...)

If you really have a loop that is reading from a database at 100ms each time, that's not a case of declining to optimize prematurely, that's just stupid.

Reminds me of this quote which I recently found and like:

> look, I'm sorry, but the rule is simple: if you made something 2x faster, you might have done something smart; if you made something 100x faster, you definitely just stopped doing something stupid

https://x.com/rygorous/status/1271296834439282690

Got it. What about initiating an 800MB image on a CPU-limited virtual machine that THEN hits a database, before responding to a user request on a 300ms roundtrip? I think we need a new word to describe the average experience; stupidity doesn't fit.

And yet... :)

I think there is just a current tendency (I've seen it mostly in junior engineers) to ignore any aspect of performance until "later".

And, I guess, context does matter. If you need to make 10 calls to gather up some info to generate something, but you only need to do this once a day or once an hour, and the whole process takes a few seconds, that's fine; I could see the argument that just doing the calls one at a time, linearly, is simpler to write/read/maintain.