Hard disagree. I feel like I'm thinking a lot more now because I have so many parallel projects going on at the same time. AI has allowed me to really, truly create in a way that I've never done before. Yes, my coding skills probably aren't as sharp as they used to be, but my system design skills are at an all-time high. Don't blame the tool.
If 1% of people using the tool end up like you, and 99% end up drooling invalids, I think it would be insane to not blame the tool. If a tool that's incompatible with humans isn't to blame for that incompatibility, what is to blame for the harm done? Human nature? The point of a tool is to be used by humans.
Even if a tool can only be used for lobotomizing humans, the usage of the tool is where the main blame should be placed.
What part do you disagree with? It sounds like you don’t disagree with either the title of the article or its contents.
> In talking to engineering management across tech industry heavy-weights, it's apparent that software engineering is starting to split people into two nebulous groups:
> The first group will use A.I. to remove drudgery, move faster, and spend more time on the parts of the job that actually matter i.e. framing problems, making tradeoffs, spotting risks, creating clarity, and producing original insight.
The HN title is heavily editorialized. Actual article title is far less controversial: "A.I. Should Elevate Your Thinking, Not Replace It"
Ah, I was thinking of the editorialized HN title.
"Hard disagree because it doesn't affect me personally"
There is already research literally showing that, on average, it is a net loss for focus, learning, and critical thinking skills.
I think the type of people who get hyped about the cool thing aren't the kind of people who pay much attention to research and science.
I work with others who have made this same claim. For those people, when I observed their work during demo days, the unmentioned detail was that they were going to the AI for system design questions as well. This was framed as "just using it as a sounding board," but what actually happened was not merely using a sounding board: they were asking for solutions. Anchoring bias being what it is, those solutions felt like good ideas and they kept them.
It's the feeling of having done a lot of thinking for themselves without having actually done so.
I actually have gone to the AI repeatedly for system design solutions.
Daily.
I think only twice have I agreed with it.
Just as it will always give you code if you ask, even if the code is crap, it will always give you a design if you ask. It won't be a good design, though.
So you'll have a beautifully designed system with rotting bones? A system constrained to the same patterns seen in training data. Not terrible, good enough.
I don't know, I don't doubt you're more productive. Broadly so. But the depth and rigor I think may be missing, as the article suggests.
As an aside, I suppose it's a good time for those nearing the end of their careers, those who no longer need to learn, to cash out and go all in on AI.
> But the depth and rigor I think may be missing, as the article suggests.
Nearly certainly. Just turns out that depth and rigour matters a lot less than I would've hoped. Depressing, really.
For how many parallel projects can you really keep a proper mental model in your head at one time? Or put in enough effort to seriously consider all aspects? I think the number varies between simple and more complex projects. But still, could that number be lower than many think it is?
It really depends on who you consider the "many" to be. I've seen people who claim they can meaningfully iterate on 10 projects simultaneously, and I'm skeptical of that. My personal experience is that my decisions are noticeably degraded at 3-4 parallel workstreams, and with even the simplest projects I'm non-functional past 6.
But I can juggle 2 workstreams in a day easily, and I can trivially swap projects in and out of the "hot path" as demanded by prioritization or blockers; before LLM coding both of those were a lot harder.
The real question is whether you'd be able to continue doing your work if someone took your toys away and said "here's a nickel, kid, go buy yourself a real computer". I'm not referring to whether you'd be able to keep up your productivity, since it is clear you couldn't, just like a carpenter with a nail gun works faster than one with a hammer and a bucket'o'nails. Could you do the work, starting with the design, followed by boilerplate, and finishing with a working system? The carpenter could, albeit slower, since his tools only speed up the mechanics of his work. Coding agents do much more than that: they take away part of the mental modelling which goes into creating a working system. The fancier the tool, the more work it takes out of your hands.

Say that the aforementioned toy thief comes by in a year or two, after the operating systems (etc.) you're targeting have undergone a few releases with breaking changes. A number of APIs have been removed, others have been deprecated, and new ones have been added. You were used to telling the agent to 'make it work on ${older_versions} as well as ${newest_version}', but now you're sitting there with a keyboard at your fingertips and that stupid cursor merrily blinking away on the screen. How long would it take you to become productive again? What if the toy thief waits 5 years before making his heist? What if the models end up rebelling or sink into depression and the government calls upon you to save your economic sector?
When cars first appeared it took quite some knowledge and experience to even get the things started, let alone to keep them running. Modern cars are far better in all respects, and as a result modern drivers often don't have a clue what to do when the 'Check Engine' light appears. More recent cars actively resist attempts by their owners to fix problems since this is considered 'too dangerous' - which can be true in the case of electric cars. That's the cost of progress. It is often worth it, but it does make sense to realise what it would take to go back in time to the days when we coded our software outside in the rain, uphill both ways, with only a cup of water to quench our thirst. In the dark. With wolves howling in the woods. OK, you get my drift.
Will there be something like 'software preppers' who prepare for the 'AIpocalypse' by keeping their laptops in shielded containers while studiously chugging along without any artificial assistance? Probably. As a hobby, at least, just like there are 'survivalist preppers' who make surviving some physical apocalypse their goal in some way or other.
But is the debate about "fleshing out a system spec" or "the ability to come up with, plan, and explore various ideas to solve problems elegantly on a budget"? I think these two sides are always conflated when discussing the impact of LLMs on users.
> Yes, my coding skills probably aren't as sharp as they used to be
If not the tool, then who's to blame? It's very clear that people who rely on LLMs for coding lose their skills. Just because you have a lot of parallel tasks going at once doesn't mean you're producing quality work. Who's reviewing it? Are you just blindly trusting it?