> and it was wrong sometimes, sure. A known limitation.
But that's the entire problem, and I don't understand why it's just put aside like that. LLMs are wrong sometimes, and they often don't give you the details. In my opinion, knowing the details and traps of a language is very important if you plan on doing more with it than just having fun. Now someone will come around the corner and say 'but but but it gives you the details if you explicitly ask for them'. Yes, of course, but when you're just learning, you don't know where the important details are hidden. Studying is hard and it takes perseverance. Most textbooks will tell you the same things, but they still differ, and every author usually highlights a few distinct details; those are the important bits that you just won't get from an LLM.
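To make concrete what I mean by a 'trap' (my own illustrative sketch, not something anyone here mentioned), take Python's classic mutable-default-argument gotcha; textbook authors reliably flag it, but an LLM may not volunteer it unless you already know to ask:

    def append_item(item, items=[]):
        # The default list is built once, when the function is defined,
        # so every call that omits 'items' shares the same list.
        items.append(item)
        return items

    print(append_item(1))  # [1]
    print(append_item(2))  # [1, 2], not the [2] a beginner might expect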
It's not my experience that there are missing pieces as compared to anything else.
Nobody can write an exhaustive tome and explore every feature, use, problem, and pitfall of Python, for example. Every text on the topic will omit something.
It's hardly a criticism. I don't want exhaustive.
The LLM taught me what I asked it to teach me. That's what I hope it will do, not try to caution me about everything I could do wrong with a language. That list might be infinite.
> It's not my experience that there are missing pieces as compared to anything else.
How can you know this while you are still learning the thing? It seems like confirmation bias to even hold this opinion.
I'd gently point out we're 4 questions into "what about if you went about it stupidly and actually learned nothing?"
It's entirely possible they learned nothing and they're missing huge parts.
But we're sort of at the point where in order to ignore their self-reported experience, we're asking philosophical questions that amount to "how can you know you know if you don't know what you don't know and definitely don't know everything?"
More existentialism than interlocution.
If we decide our interlocutor can't be relied upon, what is discussion?
Would we have the same question if they said they did it from a book?
If they did do it from a book, how would we know if the book they read was missing something that we thought was crucial?
I didn't think that was what was being discussed.
I was attempting to imply that high-quality literature is often reviewed by humans who have some knowledge of the topic, or who are willing to cross-reference it with existing literature. The reader often does this as well.
For low-effort literature this is often not the case, which can lead to things like the Gell-Mann amnesia effect (https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect), where a trained observer can point out that something is wrong but an untrained observer cannot perceive what is incorrect.
IMO, this is adjacent to what people interacting with language models often experience. The model isn't wrong about everything, but the inaccuracies are subtle enough to introduce some poor underlying thought patterns while learning.
That's easy. It's due to a psychological concept called transfer of learning [0].
Perhaps the most famous example of this is Warren Buffett. For years Buffett missed out on returns from the tech industry [1] because he avoided tech stocks, in line with Berkshire's long-standing philosophy of never investing in companies whose business model he didn't understand.
His light-bulb moment came when he drew on a business he understood really well, Berkshire's furniture business [3], to value Apple as a consumer company rather than a tech company, leading to a $1bn position in Apple in 2016 [2].
[0] https://en.wikipedia.org/wiki/Transfer_of_learning
[1] https://news.ycombinator.com/item?id=33612228
[2] https://www.theguardian.com/technology/2016/may/16/warren-bu...
[3] https://www.cnbc.com/2017/05/08/billionaire-investor-warren-...
You are right, and that's my point. It just feels to me like too many people think LLMs are the holy grail for learning. No, you still have to study a lot. Yes, it can be easier than it was.
Your other responses kinda imply that you believe LLMs are not good for learning.
That's totally different from saying they are not flawless but make learning easier than other methods, like you did in this comment.