I feel like this is partially a skill issue - you can get direct, cited information from LLMs. There's a level of personal responsibility for over-using the tools and letting them feed you bad or false information, but if you try researching specific abstractions or newer documentation, most LLMs now correctly call and research the tools available, directly citing them.

I think you can build a very easy workflow that reinforces rather than replaces learning. I've used a citation flow to link and put into practice a ton of more advanced programming techniques that I found incredibly difficult to locate and research before AI.

I'd say the comparison is faulty; it's more akin to swimming to an island (no AI) vs using a boat. You control the speed and direction of the boat, which also means you have the responsibility of directing it to the correct location.

The analogy was about the unknown thinness of the ice, not just the fastest way to get there. It's specifically about the lack of reliability of the process.

Yes, I was disagreeing with the premise of the analogy - what would the slow boat in this case be? In my experience, going through software engineering before AI, you'd get lost on the ice, with nobody to really help you get out.

If you get lost on the ice and you have someone who confidently tells you the path but is sometimes wrong, is it actually helpful?

PS: sorry if the analogy is a bit wonky but it's quite dear to me as I do ice skating on frozen lakes and it's basically a life or death information "game" that I can relate to. It might not be a great analogy for others.

Haha, it's a good analogy; I'm potentially being a little argumentative for the sake of it.

I guess in my view - the main alternative you'd have beforehand is just to drown.

For me, AI sits in a space where if you know how to use it, it can tell you all the thin spots of the ice accurately. You can then verify those spots, but there's a level of personal responsibility of verification.

I'd agree there's currently a ton of people using these tools to essentially just find the specific route - but I'd argue those people probably shouldn't be skating in the first place, and would've fallen in one way or another.

> AI sits in a space where if you know how to use it, it can tell you all the thin spots of the ice accurately. You can then verify those spots, but there's a level of personal responsibility of verification.

Right, but AFAICT most people just venture over the ice and don't bother to check. In fact a lot of people venture there, do check once or twice, then check less and less frequently. The fact that you do it is great but others seem a lot less careful, until cracks start to show and then it might be too late.

Very true - I won't dispute it!

I'd only argue that people were doing this before AI; slop development was just copy-pasting from the first Stack Overflow answer that matched the question rather than thinking.

So I'd argue there's a part of it that's just personal responsibility in how these tools are used.

> I guess in my view - the main alternative you'd have beforehand is just to drown.

Before, most people who didn't know the ice didn't go out on it; today a lot of people who shouldn't be there go far out on the ice.

Totally true - although I feel like that's been the case since the first coding bootcamps.