People have measurably lower levels of ownership and understanding of AI-generated code. The people using GenAI reap major savings in time and cognitive effort, but the task of verification is shifted to the maintainer.

In essence, we get the output without the matching mental structures being developed in humans.

This is great if you have nothing left to learn; it's not so great if you are a newbie, or have low confidence in your skill.

> LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.

> https://arxiv.org/abs/2506.08872

> https://www.media.mit.edu/publications/your-brain-on-chatgpt...

While I agree with this intuitively, I also just can't get past the argument that people said the same thing when we switched from everyone using ASM to C/Fortran etc.

There is a massive difference between the outright transformation of something you created yourself and a collage of snippets + some sauce based on stuff you did not write yourself. If all you did to use your AI was to train it exclusively on your own work product created during your lifetime, I would have absolutely no problem with it; in fact, in that case I would love to see copyright extended to the author.

But in the present case the authorship is just removed by shredding the library and then piecing back together the sentences. The fact that under some circumstances AIs will happily reproduce code that was in the training data is proof positive they are to some degree lossy compressors. The more generic something is ("for (i=0;i<MAXVAL;i++) {") the lower the claim for copyright infringement. But higher level constructs past a couple of lines that are unique in the training set that are reproduced in the output modulo some name changes and/or language changes should count as automatic transformation (and hence infringing or creating a derivative work).

The study compares ChatGPT use, search engine use, and no tool use.

The issues with moving from ASM to C/Fortran are different from using LLMs.

LLMs are automation, and general-purpose automation at that. The Ironies of Automation came out in the 1980s, and we've known about the issues since then — like the vigilance decrement that comes when you switch from operating a system to monitoring it for rare errors.

On top of that, previous systems were largely deterministic: you didn't have to worry that the instrumentation was going to invent new numbers on the dial.

So now automation will spread from flight decks and assembly lines to mom-and-pop stores, and from deterministic to non-deterministic.

> The people using GenAI reap a major time and cognitive effort savings, but the task of verification is shifted to the maintainer.

The people using GenAI should be the ones doing the verification. The maintainer's job should not meaningfully change (other than the maintainer using AI to review incoming code, of course).

Why does everyone who hears "AI code" automatically think "vibe-coded"?