My rule with LLMs has been "if a shitty* answer fast gets you somewhere, LLMs are the right tool," and that's where I've found them useful for learning, too. Sometimes I'm reading a paper and a concept comes up that I don't know - I can either divert into a full Google search to find a reasonable summary, or I can ask ChatGPT and get a quick answer. For load-bearing concepts or knowledge, yes, I need to put in the time to actually research and learn them accurately and fully, but for things tangential to my current interests, or things I'm just looking at as a hobby, a shitty answer fast is exactly what I want.

I think the same holds for vibe coding, AI art, etc. - if you want something good, they're not the right tool for the job. But if your alternative is "nothing," and "literally anything at all" will do, man, they're game changers.

* Please don't overindex on "shitty" - read it as "something that doesn't need to be verifiably high-quality."