Anecdotally, the person on my team producing the most output at the highest quality explicitly shuns LLMs and is also relatively young. Other team members embracing AI spend their day being misled by chatbots, trying to get their agents to work properly by toying with contexts (just needs a little more knowledge!), and producing verbose code that's hard to review and has obvious bugs once you actually think about what it's doing. My favorite is how everything is excessively commented, yet more than once I've caught the code not matching what the comment said it would do!
FWIW I find it useful if I know exactly what I want and it's quicker to prompt than to type it myself. It's also generally good for research and building understanding. I still catch it being wrong on details if you're really paying attention, or literally contradicting itself between prompts. That gives me a lot of pause about trusting things it told me that I just accepted as fact without having enough knowledge myself to question it.