> When was the last time you saw someone profile their code?

A year ago. I relied heavily on a profiler to optimize a complex data import that took an hour for a million-line Excel file. The algorithm translated the file into a graph according to a user-specified definition and then updated an existing graph in Neo4j, keeping the whole thing consistent.

The only other guy who understood the algorithm (a math PhD) thought it was as optimal as it could get. I used the profiler to find all the bottlenecks, which were all DB checks for the existence of nodes, and implemented custom indices to reduce import time from an hour to 3 minutes.
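To give a rough idea of the shape of that fix (this is a sketch, not the actual code; the label, key names, and queries are all made up), the pattern is to load the keys of the existing nodes into an in-memory index once, instead of asking the database about every row:

```python
from neo4j import GraphDatabase

# Hypothetical connection details and schema; adjust to the real graph.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def build_node_index(session):
    # One upfront query: collect the keys of all existing nodes into a set.
    result = session.run("MATCH (n:Entity) RETURN n.key AS key")
    return {record["key"] for record in result}

def import_rows(rows):
    with driver.session() as session:
        existing = build_node_index(session)  # the "custom index", held in memory
        for row in rows:
            if row["key"] in existing:  # O(1) set lookup instead of a DB round trip
                session.run("MATCH (n:Entity {key: $key}) SET n += $props",
                            key=row["key"], props=row["props"])
            else:
                session.run("CREATE (n:Entity {key: $key}) SET n += $props",
                            key=row["key"], props=row["props"])
                existing.add(row["key"])
```

The existence check moves from a per-row network round trip to a set lookup, which is typically where that kind of hour-to-minutes win comes from.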

It did introduce a bunch of bugs that I had to fix, but I also discovered some bugs in the original algorithm.

It was one of my best programming experiences ever. The payoff at the end, watching it drop from an hour to 3 minutes, was a dopamine rush like never before. Now I want to optimize more code.

I don't think users cared, though; originally this work would take days by hand, so an hour was already pretty good. Now I made something fiendishly complex look trivial.

  > from an hour to 3 minutes
I bet the users did care. Coming from a few days, an hour feels great at first, but you get accustomed to it.

  > It did introduce a bunch of bugs that I had to fix, but I also discovered some bugs in the original algorithm.
I find this is extremely common when I profile code. It is just so easy to miss bugs. People get lulled into a false sense of security because the tests pass, but tests just aren't enough. But for some reason, when I say "tests aren't enough" people hear "I don't write tests."

Seeing those big improvements and knowing you did more than make it faster is always really rewarding. I hope you do more optimization :) Just remember Knuth's advice, because IO is a common culprit and Big O isn't going to tell you about that one, haha.

Yeah, I first want to know there's an actual performance issue to fix. That's basically what Knuth said, and that's what I live by.

> People get lulled into a false sense of security because the tests pass, but tests just aren't enough.

Users weren't using a particular feature because, they said, they didn't understand it. So we explained it, again and again. Turns out the feature was incredibly buggy: it only worked the way we claimed it did when it was used in the specific configuration we had tested for. Add another node somewhere and weird stuff starts happening.

The tests looked good, and code coverage was great, but the fact that the tests run through all the branches of the code doesn't mean you're really testing for all behaviour. So I added tests for all configurations I could think of. I think that revealed another bug.

So look at the actual behaviour you need to test, not merely the code and branch coverage.
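A toy illustration of that gap (entirely invented, not the feature from the story above): two tests can hit every branch and still never exercise the one configuration that's broken.

```python
def final_price(price, is_member, has_coupon):
    # Hypothetical spec: members get 10 off, coupons get 10 off, and they stack.
    discount = 0
    if is_member:
        discount = 10
    if has_coupon:
        discount = 10  # bug: should be `discount += 10`; overwrites the member discount
    return price - discount

# These two tests take and skip each `if` once, so branch coverage reports 100%...
assert final_price(100, True, False) == 90
assert final_price(100, False, True) == 90

# ...but the combination nobody tested is exactly where the behaviour is wrong:
# final_price(100, True, True) returns 90 instead of the 80 the spec asks for.
```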

  > Yeah, I first want to know there's an actual performance issue to fix.
Honestly, I think profilers and debuggers can really help with this too.
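And confirming whether there is a problem is cheap. A minimal sketch with Python's built-in cProfile (`import_spreadsheet` is a made-up stand-in for whatever the slow entry point actually is):

```python
import cProfile
import pstats

def import_spreadsheet(path):
    ...  # hypothetical stand-in for the slow job you suspect

# Measure the real workload once before touching anything, so the bottleneck
# is observed rather than guessed.
profiler = cProfile.Profile()
profiler.enable()
import_spreadsheet("data.xlsx")
profiler.disable()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)  # top 20 by cumulative time
```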

  > So I added tests for all configurations I could think of. 
I think that's the key part. You can only test what you know or expect. So your tests can only be complete if you're omniscient.