Sure, humans aren't without flaws in this area. However, humans can learn and correct themselves in real time; we can check each other's work, ask for input, and stop repeating the same mistake. That isn't the case with LLMs as a service.

For example, even if you craft the most detailed Cursor rules, hooks, whatever, they will still repeatedly fuck up. They can't even follow a style guide consistently. They can be informed, but not corrected.

Those are just coding errors; the general "hiccups" these models experience all the time are on another level. The hallucinations, sycophancy, reward hacking, etc. can be hilariously inept.

IMO, that alone should be reason enough not to trust these services (as they exist today) to explain concepts you know nothing about.

If you are so certain these things can be trusted, evaluate every assertion they make over, say, 40 hours of use and count the error rate. In my experience of using language models day to day, it's above 30%, and that's on applied tasks they're considered "good" at.
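If you actually want to run that experiment, the bookkeeping is trivial. Here's a throwaway sketch (the categories and example entries are made up, not real logs): jot down each claim you bother to verify and whether it held up, then compute the rate.

```python
# Throwaway sketch for tallying how often an LLM's assertions check out.
# The example entries below are hypothetical; log your own verdicts.
from collections import Counter

verdicts = []  # one (claim, correct?) pair per assertion you fact-check

def log(claim: str, correct: bool) -> None:
    """Record whether a single claim held up when you verified it."""
    verdicts.append((claim, correct))

# Hypothetical entries:
log("claimed asyncio.gather cancels sibling tasks on error by default", False)
log("explained that list.sort() is stable", True)

tally = Counter(ok for _, ok in verdicts)
total = len(verdicts)
if total:
    error_rate = tally[False] / total
    print(f"{total} assertions checked, error rate: {error_rate:.0%}")
```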

If you are okay with learning new topics where even 10% of the instruction is wrong, have fun.