Pretty much all of the intuition I've picked up about getting good results from LLMs has stayed relevant.

If I were starting from scratch today, I expect it would take me months of experimentation to get back to where I am now.

Working thoughtfully with LLMs has also helped me avoid a lot of the junk tips ("Always start with 'you are the greatest world expert in X', offer to tip it, ...") that are floating around out there.

All of the intuition? Definitely not my experience. I have found that optimal prompting differs significantly between models, especially once you look at models that are 6 months old or older (the first reasoning model, o1, is less than 8 months old).

Speaking mostly from my experience building automated, dynamic data-processing workflows that use LLMs:

Things that work with one model might hurt performance or be useless with another (see the sketch after this list).

Many tricks that used to be necessary are no longer relevant, or apply only to weaker models.
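
To make that concrete, here's a minimal sketch of the kind of thing I mean: keep a separate prompt variant per model in a registry and pick one at runtime, so a tweak that helps one model doesn't silently leak into another. The model names and prompt text below are made-up placeholders, not any particular vendor's API.

```python
# Minimal sketch: per-model prompt variants for an automated extraction step.
# Model names and templates are illustrative placeholders only.

PROMPTS = {
    # Older chat models often benefited from explicit role framing
    # and strict output instructions.
    "legacy-chat-model": (
        "You are a data-extraction assistant. "
        "Extract the fields below and answer ONLY with JSON.\n{task}"
    ),
    # Reasoning models tend to do fine with a plain task statement;
    # heavy role-play scaffolding is often redundant for them.
    "reasoning-model": "Extract the fields below as JSON.\n{task}",
}

def build_prompt(model: str, task: str) -> str:
    """Return the prompt variant recorded for the given model."""
    template = PROMPTS.get(model)
    if template is None:
        raise KeyError(f"No prompt variant recorded for model {model!r}")
    return template.format(task=task)

if __name__ == "__main__":
    task = "Invoice text: ... Fields: vendor, total, due_date."
    print(build_prompt("reasoning-model", task))
```

The point of the registry isn't the code itself, it's that when a model gets swapped out, you re-test and re-tune its entry instead of assuming the old prompt still carries over.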

This isn't me dismissing anyone's experience. It's ok to do things that become obsolete fairly quickly, especially if you derive some value from them. If you try to stay on top of a fast-moving field, that's almost inevitable. I would not consider it a waste of time.