I am sure this is true. On the flip side, as someone who is addicted to learning, I've been finding LLMs to be amazing at feeding my addiction. :)
Some recent examples:
* foreign languages ("explain the difference between these two words that have the same English translation", "here's a photo of a mock German exam paper and here is my written answer - mark it & show how I could have done better")
* domains that I'm familiar with but might not know the exact commands off the top of my head (troubleshooting some ARP weirdness across a bunch of OSX/Linux/Windows boxes on an Omada network)
* learning basic skills in a new domain ("I'm building this thing out of 4mm mild steel - how do I go about choosing the right type of threading tap?", "what's the difference between Type B and Type F RCCB?")
Many of these can be easily answered with a web search, but the ability to ask follow-up questions has been a game changer.
I'd love to hear from other addicts - are there areas where LLMs have really accelerated your learning?
I agree; I always ask to learn more if I don’t get it or it’s a new subject. But I think we’re in the minority: it’s easier to just accept the answer and move on, since that requires very little effort compared to trying to understand and retain.
Hah, yesterday I was discussing solar panels and moving shadows. I would have wasted money buying a commercial solar panel if I didn’t have this chat.
Learned a lot about how it works, to the point I’m confident that I can go the DIY route and spend my money on AliExpress buying components instead.
Why not ask a pro solar panel installer instead? I live in an apartment, of course they would say it’s not possible to place a solar panel on my terrace. I don’t believe in things not being possible.
But I had two semesters of electronics/robotics in my CS undergrad, and I know not to trust the LLM blindly and to verify.
I'm of a similar mind, but I think you also need to be careful. I find that people are more willing to believe a chatbot than a search result simply because of the way the information is presented. If you're thinking "but search results can be wrong too!", that's exactly my point. The problem is quite similar to people "doing their own research". I'm sure conspiracy theorists do a lot of reading, a lot of searching, and all that cargo cult research stuff. I say cargo cult because it has all the form of research but none of the substance. That doesn't mean using LLMs is exclusively cargo cult learning, but it is easy to fall into that trap, and I'd argue easier than it is with searching, which is in turn easier to fall into than with reading books, which is easier than in a university lecture. It doesn't mean the tools are bad, just that it's easy to fool ourselves.
Basically, if you can't articulate how your typical conspiracy theorist's process differs from real research, you're at greater risk. It's worth thinking about that question, because they really do do a lot of reading, thinking, and looking things up. The difference is more subtle, right?
FWIW, a thing I find LLMs really useful for is learning the vernacular of fields I'm unfamiliar or less familiar with. It's especially helpful when searches fail due to overloaded words (and, let's be honest, Google's self-elected lobotomy), but it's more of a launching point. This still has the conspiracy problem, though: it's easy to self-reinforce a belief and never consider the alternatives. Follow-up questions are nice and can really help with sifting through large amounts of information, but they definitely tend to narrow the view.

I think this makes learning feel faster and more direct, but having also taught (at the university level), I think it's important to learn all the boring stuff too. That stuff may not be important "now", but in a well-organized course it's going to be important "soon", and "now" is the best time to learn it. No different from how musicians need to practice boring scales and patterns, athletes need to do drills and not just learn by competing (or "simulated" competitions), or how children learn to write by boringly tracing shapes over and over. I find LLMs like to avoid the boring parts.
Just because a calculator will only ever be used by a subset of the population to type 80085 and giggle, doesn't mean it can't also be used for complex calculations.
AI is a tool that can accelerate learning, or severely inhibit it. I do think the tooling is going to continue to make it easier and easier to get good output without knowing what you're doing, though.
> Just because a calculator will only ever be used by a subset of the population
I'm not sure what your argument is here. I think everyone knows this but also recognizes that the vast majority of people are not using calculators in that way. The vast majority of people are using calculators to replace calculation.
I'll give an example. I tell people I tip by: round off the cents, divide by 10, multiply by 2. Nearly every time I say that, people tell me it is too difficult. This includes people with PhD-level STEM educations...
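For what it's worth, that rule is just mental shorthand for a 20% tip. A tiny sketch, assuming "round off the cents" means rounding the bill to the nearest dollar:

```python
def quick_tip(bill: float) -> float:
    """Approximate 20% tip: round the bill, take 10%, double it."""
    rounded = round(bill)        # round off the cents
    ten_percent = rounded / 10   # divide by 10 (shift the decimal)
    return ten_percent * 2       # multiply by 2

print(quick_tip(47.30))  # a $47.30 bill -> $47 -> $4.70 -> $9.40 tip
```

Each step is something you can do in your head; the division by 10 is just moving the decimal point.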
Hearing these stories (and I hear them more often than I would like) is mind-boggling to me. Even as someone who’s quite bad at math, what you describe is insanely basic stuff; anyone in a developed country with access to schooling should be able to do it.
It will be hard to convince me those people are using an LLM to learn.
Exactly. I like to say that learning feels like frustration. If I'm right, then LLMs eliminate precisely the thing that is learning.