Or at least, aware that this argument continues to be made with tenuous evidence and anecdotes. And yet, people are being more productive (actually productive) with AI. Release cadences are speeding up, bugs are getting fixed faster, security issues are identified and patched sooner, and so on.
I’m not denying (at all) that unused skills languish. I take issue with AI being characterized as a magic eraser that mystically makes people forget what they have already learned. I’ve just done a study and concluded that dogs get dumber when I throw a ball. What’s my evidence? They stop staring at me to chase it. The ball definitely made them forget who I was, so we shouldn’t allow dogs to have balls anymore.
Can AI make developers lazy in new ways? Of course! Why wouldn’t it? I don’t write things in ASM because I can be “lazy” and write 50x more useful instructions with a few lines of a modern language. I doubt I’d be able to write working ASM anymore without a serious refresher. Did newer languages erase my memory of ASM and make me “lazy”, or did my efforts evolve to make use of the newest technology regardless of “lost” skills?
I would argue that's a misuse of AI. If the point of an engineer is to know how the software works under the hood, then shipping code without an understanding of how it all works is a failure.
You wouldn't trust a bridge that an engineer vibe-engineered, would you?
So instead of focusing on AI as a productivity tool, focus on AI as a means of adding rigor and understanding to your workflow.
> You wouldn't trust a bridge that an engineer vibe-engineered, would you?
If it were as easy to stress test/battery test/materials test/etc. a bridge as it is to test code, then yes, I'd trust an engineer who vibe-engineered a bridge.
---
The problem with mapping digital problems into meat-space is that a few orders of magnitude of cost are inherently added to anything that happens in meat-space.
I can spin up an arbitrary number (10, 10k, 500k) of Docker instances: X with fuzzed inputs, Y with explicit edge cases, Z with tolerance testing, etc. And if that doesn't work, I can fix the code, push a button, and it all happens again.
If a bridge engineer could do that with bridges - yes I'd expect them to be vibing just as hard as we are now.
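To illustrate how cheap that test loop is compared to meat-space: here is a minimal sketch of the fuzzed-inputs part in Python (the `parse_fraction` function, input ranges, and run count are all hypothetical, just stand-ins for whatever code is under test):

```python
import random

def parse_fraction(s: str) -> float:
    # Hypothetical unit under test: parse "a/b" and return a divided by b.
    num, _, den = s.partition("/")
    return int(num) / int(den)

def fuzz(runs: int = 10_000) -> int:
    """Hammer the unit under test with random inputs; count failures."""
    random.seed(0)  # reproducible, so a fix can be re-verified at the push of a button
    failures = 0
    for _ in range(runs):
        s = f"{random.randint(-10**6, 10**6)}/{random.randint(-10**6, 10**6)}"
        try:
            parse_fraction(s)
        except (ValueError, ZeroDivisionError):
            failures += 1
    return failures

# Explicit edge cases run alongside the random ones:
for case in ["0/1", "-5/3", "1000000/1"]:
    parse_fraction(case)
```

Ten thousand runs finish in well under a second on one machine, and the whole thing parallelizes trivially across containers; there is no equivalent button for pouring ten thousand test bridges.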
> this argument continues to be made with tenuous evidence and anecdotes.
The linked Wikipedia page has plenty of evidence and studies and you can find plenty more with a basic web search. This is not something someone just made up; if you don’t know there are a multitude of studies on the harms of social media, you haven’t looked at all. Which is fine, it’s our prerogative to not search for information, but don’t turn around and say it doesn’t exist or is anecdotal.
> And yet, people are being more productive (actually productive) with AI.
You said that, ironically, without providing evidence, in the same paragraph where you complained about evidence not being provided for something that has plenty of it. Furthermore, there are several studies suggesting AI may in fact decrease productivity, but I’m not going to link to those because the more important point is that AI has nothing to do with the conversation. The original poster mentioned AI, but this branched thread is exclusively about the “liking to learn” part.
> And yet, people are being more productive (actually productive) with AI. Release schedules are increasing, bugs are getting fixed faster, security issues identified and patched sooner, so on and so forth.
I didn't see anything in the parent chain that implied this. Nor did I see AI "characterized as a magic eraser"; I saw it framed as something that impedes learning, and that was tied back to constant stimulation.
> Or at least, aware that this argument continues to be made with tenuous evidence and anecdotes
The arguments I read and the one you seem to be replying to appear to be different things.