any human can read the code an AI produces.

Nope, not anymore. Many have already forgotten how to do that, and it's not a joke.

And putting aside the vanishing skill, there is also an issue of volume.

I agree that the problem is volume, even more so than correctness.

All that LLMs and other generative models have done is enable an order of magnitude more stuff to be created cheaply. This then puts the onus and cost on the consumer of that output, hence why everyone is exhausted after a day of work that just involves looking over output. This volume of output will cause people to stop looking at all of the output and just trust the randomly generated code, and in time the quality will suffer.

So... Our jobs are safe then? I mean, assuming we don't also atrophy to the same extent as the 'many'?

It's the "our jobs are lost" attitude that is part of the problem. It's not about that. It's about quality of thinking, about daring, not fearing or hoping.

Haven't seen too many people feeding their children with "daring".

I'm just saying that I already see that people are outsourcing all the thinking to the models - not only code generation and reviews, but even design - the part that "senior engineers" without imagination think only they are capable of doing.

It's worrying how much trust is being put in those systems. And my worry is not about the job anymore, but our future in general.

It's a bit of a weird place to be in as a senior engineer who has spent 2 decades perfecting his craft.

So, on one hand, I'm also kinda sad about how quickly we've thrown the guardrails away, but on the other -- it's... Well. It's just work.

Turns out, no one ever really cared how elegant or robust our code was, how clever we were to think up some design or other, or that we had an eye on the future; just that it worked well enough to enable X business process / sale / whatever.

And now we're basically commoditised: even if the quality isn't great, more people can solve these problems. So, being honest, I think a lot of my pushback is just a kind of internal rebellion against admitting that actually, we're not all that special after all.

I'm just glad I got to spend 20 years doing my hobby professionally, got paid really well for it, and was oftentimes forced to solve complicated problems no one else could -- that kept me from boredom.

I think the shift we are seeing now, for those of us who were 'previously' knowledge workers, is that work becomes a lot more like manual labour than what we've really been doing up until now. When there's no 'I don't know' anymore, you're not really doing knowledge work, right?

I guess I'll just ride the wave, spew out LLM crap at work, and save the craft for some personal projects. I'll certainly have the capacity now that work is a no-op.

Yeah, but the thing is, it's not "just work". Software now has a really big impact on the world and actual lives.

In the corporate world, we are typically detached from real-world consequences, and looking at people around me, people really don't think about such things - but I do. And I really care, because "relaxed" standards might result in errors that amount to stuff like identity theft, or stolen money, shit like this, even on the smallest scale.

Obviously we can't prevent everything, but it seems like we, as an industry, decided to collectively YOLO and stop giving a shit at all. And personally I don't like that it is me who is losing sleep over this, while people who happily delegate all their thinking to LLMs sleep better than ever now.

Yeah that's a tough spot to be in; I think though, your responsibility really ends with you at work, unless you're very high up on the management chain.

Keep it simple right; in everything you do, make things a bit better than you found them. It's enough. You're never going to win the fight to get everyone (or maybe even ANYONE depending how messed up your org is) to care; so why lose sleep on things you can't change?

At least, that's what I started doing some years ago, after losing lots of those fights, and I'm sleeping fine again.

I think those of us who have years of experience under our belt are safe. If we're older, the knowledge is ingrained, and atrophy of this knowledge will be limited given that it's already "imprinted" onto our brains.

Our futures are safe in this sense; in fact it's even beneficial, as we may be the last generation to have these skills. Humanity's future, on the other hand, is another open question.

You could say the same thing about compiled code. Actually, it's worse, because anything a compiler spits out is very hard to understand even for those who understand assembly.

You don't need to look at the entire program at the assembly level to figure out parts that you want to optimise or prove for correctness. You do need to look at all the code the LLM generates in order to understand it.

You can learn to understand the patterns that compilers spit out and there are many tools out there to aid in that understanding. You can't learn to understand what an LLM spits out because by design it is non-deterministic and will vary in form and function for each pull of the lever.

You can learn to understand how high level concepts in code map down to assembly language and how compilers transform constructs in one language to another. You can't know that about LLMs because they generate non-deterministic output based on processing of huge low-precision tables.
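To make the "each pull of the lever" point concrete: this is a minimal sketch of temperature-based softmax sampling, the standard mechanism by which LLMs pick the next token. It is not how any particular model is implemented; the logits here are made-up numbers. The point is only that identical input plus random sampling yields different output on different runs, unlike a compiler.

```python
import math
import random

def sample_next_token(logits, temperature=0.8, rng=None):
    """Pick a token index from raw logits via temperature-scaled
    softmax sampling. Same logits, different draws -> possibly
    different tokens, which is why regenerating varies the output."""
    rng = rng or random
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the resulting distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical logits over a 4-token vocabulary.
logits = [2.0, 1.5, 0.3, -1.0]
draws = [sample_next_token(logits) for _ in range(200)]
```

A compiler, by contrast, is a pure function of its input: same source, same flags, same output. Run this a few times and `draws` will contain more than one distinct token index; that variability is the lever-pull.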

It's not even a close comparison.

Have you tried to sift through a whole lot of vibe-coded slop? It's really mentally draining to see all of the really bad techniques they fall back on just to brute-force a solution.

For now. Some people seem to think we should make AI-native programming languages and just let them be black boxes, which is a bad idea IMO.

How can you read a language you didn't learn?

Unless people can't think without the AI.

Here's a tip: it would really help if you put yourself into a Ralph loop before posting comments.