I usually defer this until a PM does the research to highlight that speed is a burning issue.
98% of the time I find that users are clamoring to get something implemented or fixed that isn't speed-related, so I work on that instead.
When I do drill down, what the flame graphs tend to show is that the performance improvements a user would actually notice are bottlenecked primarily by I/O, not by code efficiency.
Meanwhile my less experienced coworkers will spot a nested loop that will never take more than a couple of milliseconds and demand it be "optimised".
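To make the point concrete, here's a hedged sketch (the loop size and the 50 ms "I/O" delay are made-up illustrative numbers, with `time.sleep` standing in for a real network or disk call): the "suspicious" nested loop finishes in a few milliseconds, while a single simulated I/O round trip dwarfs it.

```python
import time

def nested_loop(n=200):
    # The nested loop a reviewer might flag as needing "optimisation":
    # O(n^2) iterations, but for small n it completes in milliseconds.
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

start = time.perf_counter()
nested_loop()
loop_ms = (time.perf_counter() - start) * 1000

# One simulated I/O call: 50 ms is a plausible (hypothetical) cost for
# a single network round trip or cold disk read.
start = time.perf_counter()
time.sleep(0.05)
io_ms = (time.perf_counter() - start) * 1000

print(f"nested loop: {loop_ms:.2f} ms, one I/O call: {io_ms:.2f} ms")
```

On a typical machine the loop is an order of magnitude cheaper than the single I/O call, which is why shaving it off the flame graph changes nothing a user can perceive.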
Even at Google, the tendency is (or was when I was there) to profile only the things we know are consuming a lot of resources (or certainly will), or that are hurting overall latency.
Also, the rule (quote?) says "speed hack"; I don't think he's saying to ignore runtime complexity entirely, just not to go crazy with really complex stuff until you're sure you need it.
That depends on which part of Google. I worked in the hot path of search queries, where speed was extremely important for everything: they want to do so much on every single query, and latency isn't allowed to go up.
The problem with ignoring performance is that you'll always end up with slow software that is awful to use but ticks all the feature boxes. As soon as someone comes along with something fast and pleasant, people will switch to it.
People don't ask for software to be fast and usable because it obviously should be. Why would they ask? They might complain when it's unusably slow. But that doesn't mean they don't want it to be fast.