My current opinion is that AI is just a thing that is going to further amplify our existing tendencies (on a person-to-person basis). I've personally found it to be extremely beneficial in research, learning, and the category of "things that take time because of the stuff you have to mechanically go through and not because of the brainpower involved". I have been able to spend so much more time on things that I feel require more human thinking and on stuff I generally enjoy more. It has been wonderful and I feel like I've been on a rocket ship of personal growth.

I've seen many other people who have essentially become meatspace analogues for AI applications. It's sad to watch this happen while listening to many of the same people complain about how AI will take their jobs, without realizing that they've _given_ AI their jobs by ensuring that they add nothing to the process. I don't really understand yet why people don't see that they are doing this to themselves.

The post was largely about young people who are growing up with these tools and are still at the stage of developing their habits and tendencies. Personally, I am glad I learnt programming before LLMs, even if it meant tedious searches on Stack Overflow, because it meant I wasn't coming up against a wave of new technology when it came to future job searches. Having done so, I can understand and appreciate the intrinsic value of learning these things, but Western culture is largely about extrinsic values, which may lead to future generations missing out on learning certain skills.

Personally I am glad I learnt programming before StackOverflow! Precisely because it meant I had to learn to figure things out myself.

I still use StackOverflow and LLMs, but if those things were available when I was learning I would probably not have learnt as much.

I started programming before Stack Overflow and was never any good (still am not, to this day), but I was always scared of asking questions there. I felt like there was a certain amount of homework expected when you ask a question, and by the time I had done enough work to post one, it was usually moot, because I would have solved my problem by stringing together two or more existing Stack Overflow questions to understand it better.

The change with LLMs is that I can now just ask my harebrained questions first and figure out why they were stupid questions later.

Of course you’re supposed to put in work, that’s how you learn. You have to think through your problem, not just look at the back of the math book to copy the answer.

The problem with Stack Overflow is not that it makes you do the work—that’s a good thing—but that it’s too often too pedantic and too inattentive to the question to realise the asker did put in the work, explained the problem well, and the question is not a duplicate. The reason it became such a curmudgeonly place is precisely due to a constant torrent of people treating it like you described it, not putting in the effort.

Oh, yeah.

SO is infamous for treating question-askers badly.

I have used LLMs for some time, and have no intentions of ever going back to SO. I get tired of being insulted.

> The only stupid question is the one you don't ask.

- A poster on my old art teacher's studio wall.

SO wasn't even the first site with this phenomenon. IMO SO is way more cordial than some of the forums from the early 00s. And before that, many IRC channels were known for being brutal to people coming for help. I was part of the problem there until I realized there was a problem and decided to change my approach, but it took years to unlearn what I had picked up from other channels' ops.

I was quite the troll, in the Usenet days.

One of the reasons that I strive to behave, hereabouts, is that I feel the need to atone.

It can be quite difficult to hold my tongue/keyboard, though. I feel as if it’s good exercise.

Wikipedia does the same to those who contribute.

StackOverflow intimidated me for the reasons you say. What is it with the power trip that some of these forum mods have?

Basic human nature. Many folks that are hard on others have been recipients of bullying themselves. It’s a self-perpetuating thing.

“Breaking the chain” is quite difficult, because it means not behaving in the manner that every cell in your body demands.

I can't check, but I don't think I've ever asked questions on StackOverflow, or even Reddit. Maybe I'm lucky, but my searches have always given me enough leads to find my own solutions, or at least where to find them. There's a lot of documentation and tips floating around the internet. And for a lot of technologies, the code is available as well (or a debugger).

Maybe those are two sides of the same coin: question-askers are treated harshly because the priority of the site isn't to help them; the priority is to help the people who are searching for similar questions and browsing the threads. It makes perfect sense from a business perspective, because for every question-asker you'll have many more question-browsers.

I always enjoyed documenting things, so I got great delight out of carefully asking, on Stack Overflow, the few questions I was really stuck on... and then, half the time, later coming up with a solution and adding it as a good answer.

(Mostly this ended up being weird Ubuntu things relating to use cases specific to robots... not normal programming stuff.)

I originally learned programming by answering questions on StackOverflow. It was (unsurprisingly) quite brutal, but forced me to dive deep into documentation and try everything out to make sure I understood the behavior.

I can’t speak to whether this is a good approach for anyone else (or even for myself ~15 years later) but it served to ingrain in me the habit of questioning everything and poking at things from multiple angles to make sure I had a good mental model.

All that is to say, there is something to be said for answering “stupid” questions yourself (your own or other people’s).

> it served to ingrain in me the habit of questioning everything and poking at things from multiple angles to make sure I had a good mental model.

Way back in "Ye olden days" (Apple ][ era), my first "computer teacher" was a teacher's assistant who had wrangled me (and a few other students) an hour a day each on the school's mostly otherwise unused Apple ][e. He plopped us down in front of the thing with a stack of manuals, magazines, and floppy discs and let us have at it. "You wanna learn computer programming? You're gonna have to read..." :)

Pretty much this. Even on topics where I'm decently knowledgeable, I feel like the vibe of SO answers is "just read the spec" or the source code; so usually I was able to string other answers together.

With LLMs you can start from first principles, confirm basic knowledge (of course, they hallucinate, but I find it's not that hard to verify things most of the time), or just get pointers on where to dive deeper.

That's interesting. When SO came out, I was a big contributor, all in on getting the karma points and whatnot. Eventually, I realized I was being bullied by a bunch of Europeans who had never actually worked a real job in their entire lives and were, for lack of a better term, "software enthusiasts." Invariably, I would answer something based on actual experience, only to get downvoted because some idiot hobbyist thought it was incorrect. The thing about this biz is that there are things that are blatantly incorrect, for sure, but all correctness is some shade of gray, highly dependent on the particular situation. People who lack experience have no sense of any of that, though.

A second major issue with SO is that answers decay over time, so a good answer from 2014 is a junk answer today. Thus, I would get drive-by downvotes on years-old discussions, which is simply irritating.

So I quit SO and never bothered to answer another single question ever again.

SO has suffered from enshittification, and though I despise that term, it does sort of capture how sites like SO went from excellent resources to cesspools of filth and fools.

That LLMs are trained on that garbage is amusing.

I haven't answered on SO since someone edited my answer to say something I didn't write. It was minor, but I don't like that on principle. It adds huge personal risk to every question I answer.

My professor at uni said that people who learned to search for information before the internet came along are the best at searching for information on the internet. My personal experience agrees, and I'm very glad I'm one of those people.

Not to out myself as old, but I learned to program before ChatGPT, before Stack Overflow, and before Google. There are some here that are even older. There were paper books with indexes. Indexes!

That way of life is gone for me. I've got a smartphone and I doomscroll to my detriment. What's new and fascinating to me is the models themselves. https://fi-le.net/oss/ (currently trending here) is the tip of a whole new area of study and work.

I've been programming for 46 years. The first program I wrote was on a piece of notebook paper, in anticipation of getting my first computer a few weeks later (Atari 400).

I hadn't even been born for more than half of that time. My first programs were also written on paper. It isn't only age.

> Wasn't that the boxy little thing that had "Cartridge BASIC" on a little brick thing that you plugged in just like Atari videogame cartridges on the Atari 5200 game console?

How has your experience with AI been?

Not great, honestly. It's rather like pulling a lever on a slot machine. And I'm not really into gambling.

Good lord, I'm not glad.

It was horrible. Because it wasn't about "figuring things out for yourself." I mean, if the answer was available in a programming language or library manual, then debugging was easy.

No, the problem was you spent 95% of your debugging time working around bugs and unspecified behavior in the libraries and APIs. Bugs in Windows, bugs in DLLs, bugs in everything.

Very frequently something just wouldn't work even though it was supposed to, you'd waste an entire day trying to get the library call to work (what if you called it with less data? what if you used different flags?), and then another day rewriting your code to use a different library call, and praying that worked instead. The amount of time utterly wasted was just massive. You didn't learn anything. You just suffered.

In contrast, today you just search for the problem you're encountering and find StackOverflow answers and GitHub issues describing your exact problem, why it's happening, and what the solution is.

I'm so happy people today don't suffer the way we used to suffer. When I look back, it seems positively masochistic.

>I'm so happy people today don't suffer the way we used to suffer.

TBF, all the bugs in whatever framework you're using still happen. The problem wasn't eliminated, just moved to the next layer.

Those debugging skills are the most important part of working with legacy software (which is what nearly everyone in industry works on). It sucks, but it's necessary for success by that metric.

My point wasn't that the bugs don't exist anymore. Obviously they still do.

My point is that I can frequently figure out how to work around them in 5 minutes rather than 2 days, because someone else already did that work. I can find out that a different function call is the answer, or a weird flag that doesn't do what the documentation says, or whatever it is.

And my problem of it taking two days to debug something is eliminated, usually.

I don't know that it's that much easier today. I ran into lots of odd issues with Next.js and authentication, and there wasn't a whole heck of a lot of useful info out there about it. GitHub does help, but you have to wait for the devs to reply.

There are tons of obfuscated, non-upgradeable Java jar libraries out there that companies have built mission-critical systems around, only to find out they can't easily move to JVM 17 or 25 or whatever, and they don't like hearing that.

>My point is that I can frequently figure out how to work around them in 5 minutes rather than 2 days

Guess I'm just dumb then. I'm still taking days to work around some esoteric, underdocumented API issues in my day-to-day work.

Guess you missed my use of the words "frequently" and "usually", which I intentionally used instead of "always".

I guess I should have used "frequently" and "usually" when describing how I constantly run into problems relating to APIs that cause mini rabbit holes.

The thing is, these APIs are probably just as massive as old-school OS codebases, so I'm always tripping over new landmines. I can be doing high-level gameplay stuff one week. Then the next week I need to figure out how authoring assets works, and the week after that I'm performing physics queries to manage character state. All in the same API, which must span tens of millions of lines of code at this point.

It's true that the time spent figuring out the buggy behaviour would now be less, but the buggy API doesn't go away just because it exists on GitHub. If you're able to change the source code now, you would've been able to fix it back then too.

All the while having some dolt of a PM asking for status updates.

I also learned to program in the era of the Internet and then early SO. I didn't learn to just RTFM until much later, and when I did it was an epiphany.

> Personally I am glad I learnt programming before StackOverflow!

I have not, but at the beginner level you don't really need it; there are tons of tutorials, and language documentation that is easier to understand. Also, beginners feel absolutely discouraged from asking anything, because even if the question is not a real duplicate, you use all the terms wrong, get downvoted to hell, and then your question is marked as a duplicate of something that doesn't even answer it.

Later, it's quite nice to ask for clarification of, e.g., the meaning of something specific in a protocol or the behaviour of a particular program. But quite quickly you stop getting any satisfying answers, so you resort to just reading the source code of the actual program, and you're surprised how easy that actually is. (I mean, it's still hard every time you start with a new, unknown program, but it's easier than expected.)

Also, when you implement a protocol, asking questions on StackOverflow doesn't scale. Not because of the time you need to wait for answers; even if that were zero, it would still take too long and be deeply unsatisfying as a way to develop a holistic enough understanding to write the code. So you start reading the RFCs and quickly appreciate how logical and understandable they are. At first you curse how unstructured everything is, and then you recognize that the order follows what you need to write, and you can just trust the text and write the algorithm down. Then you see that the order in which the protocol is described actually works quite well for async code, and you wonder what the legacy code was doing, because not deviating from the standard is actually easier.

At some point you don't understand the standard, there will be no answer on StackOverflow, the LLM just agrees with you for every conflicting interpretation you suggest, so you hate everything and start reading other implementations. So no, you still need to figure out a lot for yourself.

I can still remember learning my first "real" programming language (Turbo Pascal[1]) in 1990, entirely out of a book! It took much longer to figure things out on my own when I was stuck on something, but by the time I overcame a problem, I'd spent enough time working through it to gain a thorough understanding of it. Going through this process eventually gave me a much greater understanding of programming as a whole and made it much easier to pick up new languages. I fear this will be lost on a generation growing up with LLMs.

[1] https://archive.org/details/borland-turbo-pascal-6.0-1990

I agree with you on the question of extrinsic values and do not envy people who are starting college right now, trying to make decisions about an extraordinarily unclear future. I recently became a father, and I try to convince myself that in eighteen years we'll at least finally know whether it has all been hype or not.

However, on the intrinsic value of these new tools when developing habits and skills, I just think about the irreplaceable role that the (early) internet played in my own development and how incredible an LLM would have been. People are still always impressed whenever a precocious high schooler YouTubes his way to an MVP SaaS launch -- I hope and expect that the first batch of LLM-accompanied youth will have set their sights higher.

>However, on the intrinsic value of these new tools when developing habits and skills, I just think about the irreplaceable role that the (early) internet played in my own development and how incredible an LLM would have been.

I don't know about that. The early internet taught me I still needed to put in work to find answers. I still had to read human input (even though I lurked) and realize that a lot of the information was lies and trolls (if not outright scams). I couldn't just rely on a few sites to tell me everything, and I had to figure out how to refine my search queries. The early internet was like being thrown into the wilderness in many ways; you pick up survival skills as you go along even if no one teaches you.

I feel an LLM would temper all the curiosity I gained in those times. I wouldn't have the discipline to use an LLM the "right way". Clearly many adults today don't either.

It fulfills Steve Jobs' promise that a computer is a bicycle for the mind. It is crazy to me that all these people think they are going to lose their ability to think. If you didn't lose your ability to think to scrolling social media, then you aren't going to lose it to AI. However, I think a lot of people lost their ability to think by scrolling social media and that is problematic. What people need to realize is that they have agency over what they put in their minds, and they probably shouldn't take in massive amounts of algorithmically determined content without first considering whether it's going to push them in a particular direction of beliefs, purchases, or lifestyle choices.

1. https://www.researchgate.net/publication/255603105_The_Effec...

2. https://pubmed.ncbi.nlm.nih.gov/25509828/

3. https://www.researchgate.net/publication/392560878_Your_Brai...

I’m pretty convinced it should be used to do things humans can’t do instead of things humans can do well with practice. However, I’m also convinced that Capital will always rely on Labor to use it on their behalf.

That’s always been the issue for me. It’s not the technology itself; it’s that virtually the entire initiative is being pushed by venture capital with a “we want to make more money than God” mission, which means they call every person calling for caution a Luddite or anti-progress. That’s basically how everything on the Internet has expanded over the last 20 years, and the results have been, if I’m being insanely generous, mixed at best.

I also think that being a little more cautious would have yielded many, if not most, of the benefits we've received while avoiding many of the downsides. Most of these companies knew their products were negatively affecting their users-- e.g. Instagram and teen girls' self-esteem-- but actively suppressed that knowledge because it would inhibit making dump trucks full of money. People who stand to make that money will ALWAYS say you're going to ruin everything if you do anything at all to impede progress-- which is ridiculous. The people who use their turn signal and drive within 10 mph of the speed limit still reach their destination, and with dramatically less risk than the people who drive pedal-to-the-metal, tailgating, with that self-absorbed fuck-everybody-else attitude.

https://www.youtube.com/watch?v=VEIrQUXm_hY

The number of times I’ve been in an argument with colleagues and said “it’s all tools, use the right tool for the job” to close out my point can’t be overstated, lol. So many film vs digital arguments in particular.

Preach. This is exactly what I’m driving at.

I think it's more "We want to use money to become gods", and AI is very much part of that.

They're going to be rather surprised when this doesn't work as planned, for reasons that are both very obvious and not obvious at all. (Yet.)

I don't think that a couple of papers counts for much of anything; a couple of scientific articles shouldn't form your whole opinion about a topic. The evidence shown in those papers is certainly something to think about, but there hasn't been enough time or effort put into understanding how AI technologies aid technical work and the ability to solve more complex problems to claim that they are inherently bad.

Compare this body of work to the body of work that has consistently shown, for many years, that social media is bad for you, and you will see a difference. Or, if you prefer to focus on something more physical: anthropogenic climate change, the evidence for the standard model of particle physics, the evidence for plate tectonics, etc.

I'm not saying we shouldn't be skeptical that these technologies might make us lazy or unable to perform critical functions of technical work. I think there is a great danger that these technologies essentially fulfill the promise of data science across industries, that is, a completely individualized experience to guide your choices across digital environments. That is not the world that I want to live in. But I also don't think that my mind is turning to mush because I asked Claude Code to write some code for a CatBoost model, which would have taken me a few hours, to try out some idea.

Time goes forward; in the future, when will you be in a situation where you can't access an LLM? Better to use LLMs as much as possible to learn the skills of controlling agents, scaffolding, constraints, docs, and breaking problems down in such a way that AI can solve them. These are the skills that matter now.

We don't practice much using the assembler either, or the slide rule. I also lost the skill of starting an old Renault 12 which I owned 30 years ago; it is a complex process, believe me, and there were some owners reaching artist level at it.

>in the future, when will you be in a situation where you can't access an LLM?

In an interview setting, while in a meeting, if you're idling on a problem while traveling or doing other work, while you are in an area with weak reception, if your phone is dead?

There are plenty of situations where my problem solving does not involve being directly at my work station. I figured out a solution to a recent problem while at the doctor's office and after deciding to check the API docs more closely instead of bashing my head on the compiler.

>We don't practice much using the assembler either, or the slide rule.

Treating your ability to research and think critically as yet another tool is exactly why I'm pessimistic about the discipline of the populace using AI. These aren't skills you use 9-to-5 and then turn off as you head back home.

Apple will most likely build an LLM into the iPhone directly. It won't be amazing, but it will work for most things.

The sad truth is that the future will most likely invalidate all "knowledge" besides critical thinking.

> "when will you be in a situation where you can't access an LLM...?"

When you're unemployed, homeless, or cash strapped for other reasons, as has happened to more than a few HNers in the current downturn, and can't make your LLM payments.

And that doesn't even account for the potential for inequality, where the well-off can afford premium LLM services but the poor or unemployed can only afford the lowest grades of LLM service.

Exactly. Why should I not learn new things and how they work? What is the point of living if not learning new things?

> I’m pretty convinced it should be used to do things humans can’t do instead of things humans can do well with practice. However, I’m also convinced that Capital will always rely on Labor to use it on their behalf.

When computers were new, they were on occasion referred to as "electronic brains" due to their capacity for arithmetic.

Humans can, with practice, do arithmetic well.

A Raspberry Pi Zero can do arithmetic faster than the combined efforts of every single living human even if we all trained hard and reached the level of the current record holder.

Should we stop using computers to do arithmetic just because we can also do arithmetic, or should we allow ourselves to benefit from the way that "quantity has a quality all of its own"*?

* Like so many quotes, attributed to lots of different people. Funny how unreliable humans can be with information :P

Being a "bicycle for the mind" is a fine thing for technology to be. The problem is, just as with bicycles, that's too much work for a lot of people, and they would prefer "cars for the mind" in which they have to do nothing.

Even cars are too much. Give me a Waymo. Or better yet just let me stay home and doom scroll while my life gets delivered to my doorstep.

Geesh, doorbell again. Last mile problem? C'mon. Whoever solves the last 30 feet problem is the real hero.

> Whoever solves the last 30 feet problem is the real hero.

Paintball gun, except with cream fillings instead of paint, and softer shells so they can be fired straight into people's mouths.

It’s less like a bicycle for the mind, and more like a bus. Sure, you’re gonna get there quickly, but you’ll end up at the same place as a bunch of other people, and you won’t remember the route you took to get there.

> If you didn't lose your ability to think to scrolling social media ...

Didn't we?

> However, I think a lot of people lost their ability to think by scrolling social media and that is problematic.

Specifically, most of the billionaires. They’re all deranged lunatics now, between COVID, social media, and being told they couldn’t say the n-word with impunity for two weeks back in 2020.

They always were. COVID just had them go fully mask-off (no pun intended). I still remember Musk calling one of those cave rescuers a pedophile because they wouldn't use his plan instead. That was 2018.

I only read two sentences max of any comment since I downloaded TikTok.

Honestly, I feel my skills atrophying if I rely on AI too much, and many people I interact with are much weaker still (trying to vibe code without ever learning). To take your analogy further: having a single-speed bike lets you go further, faster, without much impact on your "skills" (physical, in this case), but deferring all transport to cars, and then to an electric scooter so you never have to walk, will definitely cause your endurance and physical ability to walk to disappear. We are creatures that require constant use and exercise of our capabilities, or the system crumbles. Especially for high-skill activities (language, piano, video games, programming), proficiency can wane extremely quickly without constant practice.

Except that generative "AI" is a tricycle for the mind that prevents you from ever learning how to ride a bicycle.

It’s more like a bus where you can sit there and stare out the window if you like. But somehow it also gives people the illusion that they are driving the bus, giving them a sense of self satisfaction while robbing them of the opportunity to actually create or learn something.

The problem is that many people are also incapable of adding much to the process. We had that kind of situation long before "AI". There are tons of gatekeepers and other types of people out there who are a net negative wherever they are employed, either by doing bad work that someone else needs to clean up after, or by destroying work culture with silly, shortsighted dogmatism about how things must work, clueless middle management, and so on. We need to put these people somewhere. Maybe with "AI" a few more of them are revealed, but the problem stays the same. Where do we put all these people? What task can we give them that is not dehumanizing, where they will be a net positive instead of a net negative to society? Or do we leave it all up to chance, hoping that one day they will find something they are actually good at that doesn't result in a net negative for humanity? What is the future for all these people? UBI and letting them figure it out doesn't look like such a bad idea.

I think what you will find is that many people fundamentally don't care about their job for various reasons, chief of which is most likely that they don't feel fairly compensated, and thus outsourcing their labor to AI isn't the fundamental identity transplant you think it is.

> I don't really understand yet why people don't see that they are doing this to themselves.

Maybe it has something to do with the purveyors of these products:

- claiming they will take the jobs

- designing them to be habit-forming

- advertising them as digital mentats

- failing to advertise risks of using them