All of the intuition? Definitely not my experience. I have found that optimal prompting differs significantly between models, especially once you look at models that are 6 months old or older (the first reasoning model, o1, is less than 8 months old).

Speaking mostly from experience of building automated, dynamic data processing workflows that utilize LLMs:

Things that work with one model might hurt performance or be useless with another.

Many tricks that used to be necessary are no longer relevant, or only apply to weaker models.

This isn't me dismissing anyone's experience. It's ok to do things that become obsolete fairly quickly, especially if you derive some value from them. If you try to stay on top of a fast-moving field, it's almost inevitable. I would not consider it a waste of time.