I'm undecided on this. Initially I was on the “this is bad, we’re outsourcing our thinking” bandwagon, but after using AI for lots of different types of tasks for a while now, I feel like I’ve generally learnt so much, so much more quickly. Would I recall it all without my new crutch? Maybe not, but I may not have learnt it in the first place without it.
Think of it like alcohol.
Some people benefit from the relaxing effects of a little bit. It helped humanity get through ages of unsafe hygiene by acting as a sanitizer and preservative.
For some people, it is a crutch that inhibits developing safe coping mechanisms for anxiety.
For others it becomes an addiction so severe that they literally risk death from withdrawal if they don't get some, and death by cirrhosis if they keep up their consumption. They cannot live without it or with it, unless they gradually taper off over days.
My point isn't that AI addiction will kill you, but that what might be beneficial might also become a debilitating mental crutch.
> Think of it like alcohol
Better analogy is processed food.
It makes calories cheaper, it’s tasty, and in some circumstances (e.g. endurance sports or backpacking) it materially enhances what an ordinary person can achieve. But if you raise a child on it, to where it’s what they reach for by default, they’re fucked.
It comes down to how you use it, whether you're just getting an answer and moving on, or if you're getting an answer and then increasing your understanding on why that's the correct answer.
I was building a little roguelike-ish sort of game for myself to test my understanding of Raylib. I was using as few external resources as possible outside of the cheatsheet for functions, including avoiding AI initially.
I ran into my first issue when trying to determine line of sight. I was naively calculating a line along the grid and tagging cells as visible if they didn't hit a solid object, but this caused very inconsistent sight. I tried a number of things on my own and realized I had to research.
All of the search results I found used raycasting, but I wanted to see if my original idea had merit, and I didn't want to do raycasting. Finally, I gave up my search and gave Copilot a function to fill in, and it used Bresenham's line algorithm. It was exactly what I was looking for, and it also taught me why my approach didn't work consistently: there's a small margin of error when calculating a line across a grid, and Bresenham accounts for it.
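For anyone curious, here is a minimal, self-contained sketch of the kind of Bresenham-based visibility check that approach amounts to. It is not the commenter's actual code; the map, the viewer position and the helper names (is_solid, cast_sight_line) are all invented for illustration:

    /* Grid line-of-sight via Bresenham's line algorithm (illustrative sketch). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdbool.h>

    #define W 10
    #define H 6

    /* '#' = solid wall, '.' = open floor */
    static const char map[H][W + 1] = {
        "..........",
        "....#.....",
        "....#.....",
        "....#.....",
        "..........",
        "..........",
    };

    static bool is_solid(int x, int y) { return map[y][x] == '#'; }

    static bool visible[H][W];

    /* Walk the grid cells from (x0,y0) toward (x1,y1), marking them visible
       until a solid cell blocks the line. The accumulated error term keeps the
       stepped line centred on the ideal line, which is exactly what a naive
       "sample points along the line" approach gets wrong. */
    static void cast_sight_line(int x0, int y0, int x1, int y1)
    {
        int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;

        for (;;) {
            visible[y0][x0] = true;
            if (x0 == x1 && y0 == y1) return;   /* reached the target cell */
            if (is_solid(x0, y0)) return;       /* wall blocks further sight */
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }
            if (e2 <= dx) { err += dx; y0 += sy; }
        }
    }

    int main(void)
    {
        /* Cast from a fixed viewer at (1,2) to every cell, then print the map:
           '#' walls, '.' visible floor, ' ' floor hidden behind the wall. */
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                cast_sight_line(1, 2, x, y);

        for (int y = 0; y < H; y++) {
            for (int x = 0; x < W; x++)
                putchar(is_solid(x, y) ? '#' : (visible[y][x] ? '.' : ' '));
            putchar('\n');
        }
        return 0;
    }

Running it, the column of '#' casts a consistent shadow from the viewer's position - the behaviour the grid-sampling version only produced intermittently.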
Most people, however, won't take interest in why the AI answer might work. So while it can be a great learning tool, it can definitely be used in a brainless sort of way.
This reminds me of my experience using computer-assisted mathematical proof systems, where the computer's proof search pointed me at the Cantor–Schröder–Bernstein theorem, giving me a great deal of insight into the problem I was trying to solve.
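(For anyone who doesn't know it - this statement is mine, from memory, not part of the comment above - the theorem says that mutual injections are enough to give a bijection:)

    f : A \hookrightarrow B,\ g : B \hookrightarrow A \ \text{injective} \;\Longrightarrow\; \exists\, h : A \to B \ \text{bijective}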
That system, of course, doesn't rely on generative AI at all: all contributions to the system are appropriately attributed, etc. I wonder if a similar system could be designed for software?
Now imagine how much better
- the code
- your improvement in knowledge
would have been if you had skipped Copilot, described your problem, and asked for algorithmic help?
Now imagine that he's interested in finishing his game, not the intricacies of raycasting algorithms.
Idk, depends on the situation. Is he a student trying to show stuff on a resume? Is he a professional trying to sell a product? Is he a researcher trying to report findings? A startup trying to land a pitch?
The value isn't objective and very much depends on end goals. People seem to trot out the "make games, not engines" line without realizing that engine programmers still do exist.
It was just a small personal test of skill with no purpose or stakes. Not even really with intent to make a real game, just a slice of something that resembled a game, to see how far I could get without help. Then, once I got as far as I could, I'd research and see how I could do better.
You are not necessarily typical.
Discussing this in terms of anecdotes about whether people will use these tools to learn or as mental crutches seems to be the wrong framing.
Stepping back - the way fundamental technology gets adopted by populations always has a distribution between those that leverage it as a tool, and those that enjoy it as a luxury.
When the internet blew up, the population of people that consumed web services dwarfed the population of people that became web developers. Before that when the microcomputer revolution was happening, there were once again an order of magnitude more users than developers.
Even old tech - such as written language - has this property. The number of readers dwarfs the number of writers. And even within the set of all "writers", if you were to investigate most text produced, you'd find that the vast majority of it falls into that long tail of insipid banter, gossip, diaries, fanfiction, grocery lists, overwrought teenage love letters, etc.
The ultimate consequences of this tech will depend on the interplay between those two groups - the tool wielders and the product enjoyers - and how that manifests for this particular technology in this particular set of world circumstances.
> The number of readers dwarfs the number of writers.
That's a great observation!
'Literacy' is defined as the ability to both read and write. People as a rule can write: even if it isn't a novel worth publishing, they do have the ability to encode a text on a piece of paper. It's a matter of quality rather than ability (at least in most developed countries, though even there there are still people who cannot read or write).
So I think you could fine-tune that observation to 'there is a limited number of people that provide most of the writing'. Observing, for instance, Wikipedia or any bookstore would seem to confirm that. If you take HN as your sample base, it holds true there too. If this goes for one of our oldest technologies, it should not be surprising that on a forum dedicated to creating businesses and to writing, the ability to both read and write is taken for granted. But it shouldn't be.
The same goes for any other tech: the number of people using electronics dwarfs the number of circuit designers, the number of people using buildings dwarfs architects and so on, all the way down to food consumption and farmers or fishers.
Effectively this says: 'we tend to specialize' because specialization allows each to do what they are best at. Heinlein's universal person ('specialization is for insects') is an outlier, not the norm, and probably sucks at most of the things they claim to have ability for.
> Heinlein's universal person ('specialization is for insects') is an outlier, not the norm, and probably sucks at most of the things they claim to have ability for.
This is quoted elsewhere in this thread (https://news.ycombinator.com/item?id=45482479). Most of the things are stuff you will be doing at some point in your life, things that are socially expected from every human as part of human life, or things you do daily. It also only says you should be able to do it, not that you need to be good at it; but should the case arise that you are required to do it, you should be able to deal with it.
>those that leverage it as a tool, and those that enjoy it as a luxury.
Well the current vision right now seems to be for the readers to scroll AI TikTok and for writers to produce AI memes. I'm not sure who really benefits here.
That's my primary problem as of now. It's not necessarily used as some luxury tool or some means of entertainment. It's effectively trying to outsource knowledge itself. Using ChatGPT as a Google substitute has consequences for readers, and using it to cut corners has even worse consequences for writers. I don't think we've had tech like this that can be argued as dangerous on both sides of the aisle simultaneously.
> I don't think we've had tech like this that can be argued as dangerous on both sides of the aisle simultaneously.
On the contrary, all tech is like this. It is just the first time that the knowledge workers producing the tech are directly affected so they see first hand the effects of their labor. That really is the only thing that is different.
I really hate this dismissal of "well it's affecting YOU now so now it's an issue". I'm not just a "knowledge worker": I have grown up seeing the dangers of the internet, social media, invasion of privacy, and radicalization through seemingly benign channels. I've witnessed wars, unrest, and oppression throughout all stages of my life.
So let's not just handwave it as "nothing special" and actually demonstrate why this isn't special. Most other forms of technological progress have shown obvious benefits to producers and consumers. Someone is always harmed in the short term, yes. But society's given them ways to either retire or seek new work if needed. I'm not seeing that here.
> I really hate this dismissal of "well it's affecting YOU now so now it's an issue". I'm not just a "knowledge worker": I have grown up seeing the dangers of the internet, social media, invasion of privacy, and radicalization through seemingly benign channels. I've witnessed wars, unrest, and oppression throughout all stages of my life.
Sorry, but my comment wasn't about you in particular. It was about the tech domain in general. I know absolutely nothing about you so I would not presume to make any statements about you in that sense.
> But society's given them ways to either retire or seek new work if needed. I'm not seeing that here.
No, not really. For the most part they became destitute and at some point they died.
What you are not seeing is that this is the end stage of technological progress, the point at which suddenly a large fraction of the people is superfluous to the people in charge. Historically such excess has been dealt with by wars.
>For the most part they became destitute and at some point they died
Having opportunity doesn't mean they will seize it. I will concede that if you are disrupted and in your 50s (not old enough to retire, and at an age where it becomes difficult to be re-hired unless you're management), you get hit especially hard.
But it's hard to see the current landscape of jobs now and suggest that boomers/older Gen X had nothing to fall back on when these things happen. These generations chided millennials and Gen Z for being "too proud to work a grill". Nowadays you're not even getting an interview at McDonald's after submitting hundreds of applications. That's not an environment that lets you "bounce back" after a setback.
>Historically such excess has been dealt with by wars.
Indeed. We seem to be approaching that point, and it's already broken out in several places. When all other channels are exhausted, humans simply seek to overthrow the ones orchestrating their oppression.
In this case that isn't AI. At least not yet. But it's a symptom of how they've done this.
> Having opportunity doesn't mean they will seize it.
Well, in that sense everybody has opportunity. But I know quite a few people who definitely would not survive their line of employment shutting down. A lot of them have invested decades in their careers and have life complications, responsibilities and expenses that stop them from simply 'seizing opportunity'. For them it would be the end of the line, hopefully social security would catch them but if not then I have no idea how they would make it.
Generally speaking, yes. I do understand that not everyone has the same opportunities, due to life circumstances beyond their control. So I don't want to belittle that.
But speaking in macroeconomic terms, most people have the capacity to readjust if needed. I had to do so these last few years (and yes, I am thankful I am "able bodied" and have a family/friend network to help me out at my lowest points). And the market really sucks, but I eventually found some things. Some related to my career, some not.
But I'm 30. In the worst worst cases, I have time and energy to pivot. The opportunities out there are dreadful all around, though.
> most people have the capacity to readjust if needed
I am not so sure about that. I know I can. But I also know that I'm pretty privileged, where most people are not.
Right. It doesn't matter how smart you still are if the majority of society turns into Idiocracy. Second, we're all at risk of blind spots in estimating how disciplined we're being about using the shortcut machine the right way. Smart people like me, you, and the grandparent commenter aren't immune to that.
Agreed. I've engaged with different tech, since moving things along is now easier.
That’s the problem, I think: Using AI will make some people stupider overall, it will make other people smarter overall, and it will make many people stupider in some ways and smarter in other ways.
It would have been nice if the author had not overgeneralized so much:
https://claude.ai/share/27ff0bb4-a71e-483f-a59e-bf36aaa86918
I’ll let you decide whether my use of Claude to analyze that article made me smarter or stupider.
Addendum: In my prompt to Claude, I seem to have misgendered the author of the article. That may answer the question about the effect of AI use on me.
> That’s the problem, I think: Using AI will make some people stupider overall, it will make other people smarter overall, and it will make many people stupider in some ways and smarter in other ways.
And then:
> It would have been nice if the author had not overgeneralized so much
But you just fell into the exact same trap. The effect on any individual is a reflection of that person's ability in many ways, and on an individual level it may be all of those things depending on context. That's what is so problematic: you don't know to a fine degree what level of competence you have relative to the AI you are interacting with, so for any given level of competence there are things that you will miss when processing an AI's output. The more competent you are, the better you are able to use it. But people turn to AI when they are not competent, and that is the problem - not that when they are competent they can use it effectively. And despite all of the disclaimers, that is exactly the dream that the AI peddlers are selling you: 'your brain on steroids'. But with the caveat that they don't know anything about your brain other than what can be inferred from your prompts.
A good teacher will be able to spot their own errors; here the pupil is supposed to be continuously on the lookout for utter nonsense the teacher utters with great confidence. And the closer it gets to being good at some stuff, the more leeway it will get for the nonsense as well.
> the “this is bad, we’re outsourcing our thinking”
> Would I recall it all without my new crutch? Maybe not
This just seems like you’ve shifted your definition of “learning” to no longer include being able to remember things. Like “outsourcing your thinking isn’t bad if you simply expect less from your brain” isn’t a ringing endorsement for language models