Brutal that software engineering went from one of the least automatable jobs to a job that is universally agreed to be "most exposed to automation".
Was good while it lasted though.
I'm not sure it's that our job is the most automatable, but that the interface is the easiest to adapt to our workflow.
I have a feeling language models will be good at virtually every "sit at a desk" job in a virtually identical capacity, it's just the act of plugging an AI into these roles is non-obvious.
Like every business was eventually impacted by the Internet more or less equally, the early applications were just an artifact of which business decisions were easy, e.g. it was easier to start a dotcom than to migrate a traditional corporate process.
What we will see here with AI is not the immediate replacement of jobs, but the disruption of markets with offerings that human labor simply can't out-compete.
> I'm not sure it's that our job is the most automatable
I don't know. It seems pretty friendly to automation to me.
When was the last time you wrote assembly? When was the last time you had to map memory? Thought about blitting memory to a screen buffer to draw a square on the screen? Scheduled processes and threads?
These are things that I routinely did as a junior engineer writing software a long time ago. Most people at that time did. For the most part, the computer does them all now. People still do them, but only when it really counts and applications are niche.
Think about how large code bases are now and how complicated software systems are. How many layers they have. Complexity on this scale was unthinkable not so long ago.
It's all possible because the computer manages much of the complexity through various forms of automation.
Expect more automation. Maybe LLMs are the vehicle that delivers it, maybe not. But more automation in software is the rule, not the exception.
RAD programming held the same promise, as did UML and flow/low/no-code platforms.
Inevitably, people remember that the hard part of programming isn't so much the code as it is putting requirements into maintainable code that can respond to future requirements.
LLMs basically only automate the easiest part of the job today. Time will tell if they get better, but my money is on me fixing people's broken LLM generated businesses rather than being replaced by one.
Indeed. Capacity to do the hard parts of software engineering well may well be our best indicator of AGI.
I don't think LLMs alone are going to get there. They might be a key component in a more powerful system, but they might also be a very impressive dead end.
Sometimes I think we’re like cats that stumbled upon the ability to make mirrors. Many cats react like there’s another cat in the mirror, and I wonder if AGI is just us believing we can make more cats if we make the perfect mirror.
This has been my argument as well. We've been climbing the abstraction ladder for years. Assembly -> C -> OOP ->... this just seems like another layer of abstraction. "Programmers" are going to become "architects".
The labor cost of implementing a given feature is going to drop dramatically. Jevons paradox will hopefully still mean that the labor pool will just be used to create '10x' the output (or whatever the number actually is).
If the cost of a line of code / feature / app becomes basically '0', will we still hit a limit in terms of how much software can be consumed? Or do consumers have an infinite hunger for new software? It feels like the answer has to be 'it's finite'. We have a limited attention span of (say) 8hrs/person * 8 billion.
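A back-of-the-envelope sketch of that ceiling, using the assumed figures above (8 hrs/person/day, 8 billion people, both the parent's assumptions):

    # Rough upper bound on daily human attention available to consume
    # software, using the assumed figures above.
    hours_per_person = 8
    population = 8_000_000_000

    total_attention = hours_per_person * population
    print(f"{total_attention:,} person-hours/day")  # 64,000,000,000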
The cost of creating a line of code has dropped to zero. The ongoing cost of having created a line of code has, if anything, gone up.
LLMs are just another layer of abstraction on top of countless. It’s not going to be the last layer, though.
I do think software engineering is more exposed than many other jobs for multiple reasons:
- There is an unimaginable amount of freely accessible training data out there. There aren't, for example, many transcribed therapy sessions out there.
- The only thing that matters about software is that it's cheap and it sort of works. Low-quality software is already common, and bugs aren't usually catastrophic in the way structural failures would be.
- Software engineers are expensive compared to many other white-collar workers.
- Software engineering is completely unregulated and there is no union or lobby for software engineers. The second an LLM becomes good enough to replace you, you're gone.
- Many other "sit at desk" jobs have at least some tasks that can't be done on a computer.
Software engineering feels like an extremely uncertain career right now.
I'm not so certain that non-desk jobs will be safe either. What makes the current LLMs great at programming is the vast amount of training data. There might be some other breakthrough for typical jobs - some combination of reinforcement learning, training on videos of people doing things, LLMs and old-fashioned AI.
The only thing that AI is good at is a job that someone has already done before.
So 99% of all jobs
Maybe it's just the nature of being early adopters.
Other fields will get their turn once a baseline of best practices is established that the consultants can sell training for.
In the meantime, memes aside, I'm not too worried about being completely automated away.
These models are extremely unreliable when unsupervised.
It doesn't feel like that will change fundamentally with just incrementally better training.
> These models are extremely unreliable when unsupervised.
> It doesn't feel like that will change fundamentally with just incrementally better training.
I could list several things that I thought wouldn't get better with more training and then got better with more training. I don't have any hope left that LLMs will hit a wall soon.
Also, LLMs don't need to be better programmers than you are, they only need to be good enough.
No matter how much better they get, I don't see any actual sign of intelligence, do you?
There is a lot of handwaving around the definition of intelligence in this context, of course. My definition would be actual on-the-job learning, plus reliability I don't need to second-guess every time.
I might be wrong, but those two requirements seem incompatible with the current approach and hardware limitations.
Intelligence doesn't matter. To quote "Superintelligence: Paths, Dangers, Strategies":
> There is an important sense, however, in which chess-playing AI turned out to be a lesser triumph than many imagined it would be. It was once supposed, perhaps not unreasonably, that in order for a computer to play chess at grandmaster level, it would have to be endowed with a high degree of general intelligence.
The same thing might happen with LLMs and software engineering: LLMs will not be considered "intelligent" and software engineering will no longer be thought of as something requiring "actual intelligence".
Yes, current models can't replace software engineers. But they are getting better at it with every release. And they don't need to be as good as actual software engineers to replace them.
There is a reason chess was "solved" so fast. The game maps very nicely onto computers in general.
A grandmaster-level chess AI is no better at driving a car than my calculator from the '90s.
Yes, that's my point. AI doesn't need to be general to be useful. LLMs might replace software engineers without ever being "general intelligence".
Sorry for not making my point clear.
I'm arguing that the category of the problem matters a lot.
Chess is, compared to self-driving cars and (in my opinion) programming, very limited in its rules, the fixed board size and the lack of "fog of war".
I think I haven't made my point clear enough:
Chess was once thought to require general intelligence. Then computing power became cheap enough that using raw compute made computers better than humans. Computers didn't play chess in a very human-like way and there were a few years where you could still beat a computer by playing to its weaknesses. Now you'll never beat a computer at chess ever again.
Similarly, many software engineers think that writing software requires general intelligence. Then computing power became cheap enough that training LLMs became possible. Sure, LLMs don't think in a very human-like way: There are some tasks that are trivial for humans and where LLMs struggle but LLMs also outcompete your average software engineer in many other tasks. It's still possible to win against an LLM in an intelligence-off by playing to its weaknesses.
It doesn't matter that computers don't have general intelligence when they use raw compute to crush you in chess. And it won't matter that computers don't have general intelligence when they use raw compute to crush you at programming.
The burden of proving that software development requires general intelligence is on you. I think the stuff most software engineers do daily doesn't require it. And I think LLMs will get continuously better at it.
I certainly don't feel comfortable betting my professional future on software development for the coming decades.
"It is difficult to get a man to understand something when his salary depends upon his not understanding it" ~ Upton Sinclair
Your stance was widely held, not just on Hacker News but also by the leading proponents of AI, when ChatGPT was first launched. A lot of people thought the hallucination aspect was something that simply couldn't be overcome. That LLMs were nothing but glorified stochastic parrots.
Well, things have changed quite dramatically lately. AI could plateau. But the pace at which it is improving is pretty scary.
Regardless of real "intelligence" or not, the current reality is that AI can already do quite a lot of traditional software work. This wasn't even remotely true if you were to go six months back.
How will this work exactly?
I think I have a pretty good idea of what AI can do for software engineering, because I use it for that nearly every day and I experiment with different models and IDEs.
The way that has worked for me is to make prompts very specific, to the point where the prompt itself would not be comprehensible to someone who's not in the field.
If you sat a rando with no CS background in front of Cursor, Windsurf or Claude code, what do you suppose would happen?
It seems really doubtful to me that overcoming that gap is "just more training", because it would require a qualitatively different sort of product.
And even if we came to a point where no technical knowledge of how software actually works was required, you would still need to be precise about the business logic in natural language. Now you're writing computer code in natural language that will read like legalese. At that point you've just invented a new programming language.
Now maybe you're thinking, I'll just prompt it with all my email, all my docs, everything I have for context and just ask it to please make my boss happy.
But the level of integrative intelligence, combined with specialized world knowledge required for that task is really very far away from what current models can do.
The most powerful way that I've found to conceptualize what LLMs do is that they execute routines from huge learnt banks of programs that re-combine stored textual information along common patterns.
They're cut and paste engines where the recombination rules are potentially quite complex programs learnt from data.
This view fits well with the strengths and weaknesses of LLMs - they are good at combining two well understood solutions into something new, even if vaguely described.
But they are quite bad at abstracting textual information into a more fundamental model of program and world state and reasoning at that level.
I strongly suspect this is intrinsic to their training, because doing this is simply not required to complete the vast majority of text that could realistically have ended up in training databases.
Executing a sophisticated cut&paste scheme is in some ways just too effective; the technical challenge is how do you pose a training problem to force a model to learn beyond that.
I just completed a vibe-coded prototype of a non-trivial product, specifically to test the ability and limits of LLMs.
My experience aligns largely with your excellent comment.
> But the level of integrative intelligence, combined with specialized world knowledge required for that task is really very far away from what current models can do.
Where LLMs excel is putting out large templates of what is needed, but the results are frayed at the edges. Imagine programming as a jigsaw puzzle where the pieces have to fit together: LLMs can align the broader pieces, but fail to fit them precisely.
> But they are quite bad at abstracting textual information into a more fundamental model of program and world state and reasoning at that level.
The more fundamental model of a program is a "theory" or "mental model," which unfortunately is not codified in the training data. LLMs can put together broad outlines based on their training data, but lack the precision to model at a more abstract level. For example, how concurrency could impact memory access is not precisely understood by the LLM, since it lacks a theory of it.
> the technical challenge is how do you pose a training problem to force a model to learn beyond that.
This is the main challenge: how can an LLM learn more abstract patterns? For example, in the Towers of Hanoi problem, can the LLM learn the recursion and what recursion means? This requires the LLM to learn abstraction precisely. I suspect LLMs learn abstraction "fuzzily," but what is required is to learn it "precisely." That precision, or determinism, is largely where there is still a huge gap.
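For concreteness, here's the textbook recursive solution being referred to (standard CS-101 Python, nothing LLM-specific); the open question is whether a model has internalized this abstraction precisely, or only reproduces fuzzy traces of it:

    def hanoi(n, src, dst, aux):
        # Move n disks from src to dst, using aux as scratch space.
        if n == 0:
            return
        hanoi(n - 1, src, aux, dst)   # move the top n-1 disks out of the way
        print(f"move disk {n}: {src} -> {dst}")
        hanoi(n - 1, aux, dst, src)   # stack them back on top of the target

    hanoi(3, "A", "C", "B")  # prints the 2^3 - 1 = 7 moves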
LLM-boosters would point to the bitter lesson and say it is a matter of time before this happens, but I am a skeptic. I think the process of symbolism or abstraction is not yet understood enough to be formalized.
Ironic to post that quote about AI considering the hype is pretty much entirely from people who stand to make obscene wealth from it.
> That LLMs were nothing but glorified stochastic parrots.
Well yes, now we know they make kids kill themselves.
I think we've all fooled ourselves like this beetle
https://www.npr.org/sections/krulwich/2013/06/19/193493225/t...
For thousands of years, up until 2020, anything that conversed with us could safely be assumed to be another sentient/intelligent being.
Now we have something that does that, but is neither sentient nor intelligent, just a (complex) deterministic mechanism.
I've heard this described as a kind vs. a wicked learning environment.
LLMs can code, but they can’t engineer IMO. They lack those other parts of the brain that are not the speech center.
Does it have to? Stack enough "it's 5% better" on top of each other and the exponent will crush you.
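For a sense of scale, a minimal sketch, assuming the "5% better" steps really do compound multiplicatively:

    import math

    # How many compounding "5% better" releases until capability
    # doubles, or grows tenfold?
    step = 1.05
    print(math.log(2) / math.log(step))   # ~14.2 steps to double
    print(math.log(10) / math.log(step))  # ~47.2 steps to 10x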
AI training costs have been increasing around 3x annually for each of the last 8 years to achieve its performance improvements. Last year, spending across all labs was $150bn. Keeping the 3x trend means that, to keep pace with current advances, costs should rise to $450bn in 2025, $900bn in 2026, $2.7tn in 2027, $8.1tn in 2028, $25tn in 2029, $75tn in 2030, and $225tn in 2031. For reference, the GDP of the world is around $125tn.
I think the labs will be crushed by the exponent on their costs faster than white-collar work will be crushed by the 5% improvement exponent.
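A minimal sketch of the strict 3x compounding from the $150bn base (as the reply below notes, the list above doubled 2026 rather than trebling it, so the strict figures run even higher):

    # Strict 3x annual compounding from the $150bn 2024 figure (illustrative).
    spend = 150e9
    for year in range(2025, 2031):
        spend *= 3
        print(year, f"${spend / 1e12:.2f}tn")
    # 2025 $0.45tn, 2026 $1.35tn, 2027 $4.05tn, 2028 $12.15tn,
    # 2029 $36.45tn, 2030 $109.35tn -- approaching world GDP (~$125tn)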
Be careful you're not confusing the cost of training an LLM with the overall spending of each firm. Much of that spending is on expanding access to older LLMs, building new infrastructure, and other costs.
That's a fair criticism of my method; however, model training costs are a significant cost centre for the labs. Modelling from there instead of from total expenditure only adds 2-3 years before model training costs are larger than the entire global economy.
Your math is a bit less than it should be, because you doubled instead of trebled 2026.
The current trained models are already good enough for many things.
Is that so? OK, let the consumers decide: increase the price and let's see how many users are willing to pay it.
They are mediocre plagiarism machines at best.
Are LLMs stackable? If they keep misunderstanding each other, it'll look more like successive applications of JPEG compression.
By all accounts, yes.
"Model collapse" is a popular idea among the people who know nothing about AI, but it doesn't seem to be happening in real world. Dataset quality estimation shows no data quality drop over time, despite the estimates of "AI contamination" trickling up over time. Some data quality estimates show weak inverse effects (dataset quality is rising over time a little?), which is a mindfuck.
The performance of frontier AI systems also keeps improving, which is entirely expected. So does price-performance. One of the most "automation-relevant" performance metrics is "ability to complete long tasks", and that shows vaguely exponential growth.
Given the number of academic papers about it, model collapse is a popular idea among the people who know a lot about AI as well.
Model collapse is something demonstrated when models are recursively trained largely or entirely on their own output. Given that most training data is still generated or edited by humans, or is deliberately synthetic, I'm not entirely certain why one would expect to see evidence of model collapse happening right now; but dismissing it as something that can't happen in the real world seems a bit premature.
We've found the conditions under which model collapse happens more slowly or fails to happen altogether. Basically all of them are met in real-world datasets. I don't expect that to change.
The jpeg compression argument is still valid.
It's lossy compression at the core.
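The "successive applications" analogy from above is easy to demo, for what it's worth. A minimal Pillow sketch of generation loss (input.png is a placeholder for any image on disk; Pillow is the only dependency):

    import io
    from PIL import Image  # pip install Pillow

    # Recompress the same image 50 times; every save/load cycle is lossy.
    img = Image.open("input.png").convert("RGB")  # placeholder input file
    for generation in range(50):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=75)
        buf.seek(0)
        img = Image.open(buf).convert("RGB")
    img.save("generation_50.jpg")  # compare against the original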
In 2025 you can add quality to jpegs. Your phone does it and you don't even notice. So the rhetorical metaphor employed holds up, in that AI is rapidly changing the fundamentals of how technology functions beyond our capacity to anticipate or keep up with it.
> add quality to jpegs
Define "quality": you can make an image subjectively more visually pleasing, but you can't recover data that wasn't there in the first place.
You can if you know what to fill in from other sources.
Like, the grille of a car. If we know the make and year, we can add detail with each zoom by filling in from external sources.
This is an especially bad example: a nice shiny grille is going to be strongly reflecting stuff that isn't already part of the image (and likely isn't covered well by adjacent pixels, due to the angle doubling of reflection).
Is this like how crypto changed finance and currency?
I don't think it is.
Sure, you can view an LLM as a lossy compression of its dataset. But people who make the comparison are either trying to imply a fundamental deficiency, a performance ceiling, or trying to link it to information theory. And frankly, I don't see a lot of those "hardcore information theory in application to modern ML" discussions around.
The "fundamental deficiency/performance ceiling" argument I don't buy at all.
We already know that LLMs use high-level abstractions to process data, very much unlike traditional compression algorithms. And we already know how to use tricks like RL to teach a model things its dataset doesn't teach, which is where an awful lot of recent performance improvement is coming from.
Sure, you can upscale a badly compressed JPEG using AI into something better looking.
Often the results will be great.
Sometimes the hallucinated details will not match expectations.
I think this applies fundamentally to all LLM applications.
And if you get that "sometimes" down to "rarely" and then "very rarely" you can replace a lot of expensive and inflexible humans with cheap and infinitely flexible computers.
That's pretty much what we're experiencing currently. Two years ago code generation by LLMs was usually horrible. Now it's generally pretty good.
I think you are selling yourself short if you believe you can be replaced by a next token predictor :)
I think humans who think they can't be replaced by a next token predictor think too highly of themselves.
LLMs show it plain and clear: there's no magic in human intelligence. Abstract thinking is nothing but fancy computation. It can be implemented in math and executed on a GPU.
LLMs have no ability to reason whatsoever.
They do have the ability to fool people and exacerbate or cause mental problems.
LLMs are actually pretty good at reasoning. They don't need to be perfect, humans aren't either.
What's actually happening is that all your life, experience has taught you that anything that can talk to you must be somewhat intelligent.
Now you can't get around the fact that this might not be the case.
You're like that beetle going extinct mating with beer bottles.
https://www.npr.org/sections/krulwich/2013/06/19/193493225/t...
"What's actually happening" is all your life you've been told that human intelligence is magical and special and unique. And now it turns out that it isn't. Cue the coping.
We've already found that LLMs implement the very same type of abstract thinking as humans do. Even with mechanistic interpretability being in the gutters, you can probe LLMs and find some of the concepts they think in.
But, of course, denying that is much less uncomfortable than the alternative. Another one falls victim to AI effect.
> "What's actually happening" is all your life you've been told that human intelligence is magical and special and unique. And now it turns out that it isn't. Cue the coping.
People have been arguing this is not the case for at least hundreds of years.
Considering we don't understand consciousness at ALL or how humans think, you might want to backtrack your claims a bit.
Any abstraction you're noticing in an LLM is likely just a plagiarized one
Why isn't it, then?
I as a human being can of course not be replaced by a next token predictor.
But I as a chess player can easily be replaced by a chess engine and I as a programmer might soon be replaceable by a next token predictor.
The only reason programmers think they can't be replaced by a next token predictor is that programmers don't work that way. But chess players don't work like a chess engine either.
This boring reductionist take on how LLMs work is so outdated that I'm getting secondhand embarrassment.
Sorry, I meant a very fancy next token predictor :)
Lots of technology is cool if you get to just say “if we get rid of the limitations” while offering no practical way to do so.
It’s still horrible btw.
Hallucination has significantly decreased in the last two years.
I'm not saying that LLMs will positively replace all programmers next year, I'm saying that there is a lot of uncertainty and that I don't want that uncertainty in my career.
Pretty crazy, and all you have to do is assume exponential performance growth for as long as it takes.
If it gets to the point where I can no longer find a tech job I am just going to buy a trailer, live somewhere cheap, and just make money doing odd jobs while spending most of my time programming what I want. I don't want to participate in a society where all I have for job options is a McJob or some Amazon warehouse.
> Buy a trailer, live somewhere cheap, do odd jobs
Unrelated to the discussion, but I love these kinds of backup plans. I've found that most guys I talk to have one. Just a few days ago a guy was telling me that, if his beloved wife ever divorces him, then he'd move to a tropical island and become a coconut seller.
(My personal plan: find a small town in the Sonoran Desert that has a good library, dig a hole under a nice big Saguaro cactus, then live out my days reading library books in my cool and shady cave.)
Is it hard to date living under a cactus?
Yes, that's where living under a date palm is better.
Nah dating under a cactus is easy: just don't be a prick.
it must be easier than dating on top of a cactus
The future seems very uncertain right now and we are living in weird times. It's always a good idea to have a backup plan in case your career path doesn't work out!
Mine is forest firefighter. Surely with climate change there will be no shortage of work, and while dangerous and bad for you, it seems kind of fun.
> he'd move to a tropical island and become a coconut seller.
Is there a visa for that? Doesn't seem feasible unless he lives in a country that has a tropical island already.
Due to the Compact of Free Association, US citizens can permanently settle, live, and work in Micronesia with no visa or even any real checks other than a quick look at the passport.
That's plan C; plan B is to one-person-SaaS a better app than my current company makes.
This is the best thing engineers can do. I moved to building as a solo founder. I am building an LLM-enabled coding product, and I teach. I'm hosting a session on Claude Code today; 134 guests signed up. I'm planning to make money teaching for a few months while gradually building the product.
Until you realize that the success of a business depends far more on non-engineering skills.
That's actually a good idea. Now I just need to come up with an idea for a SaaS app. I was originally thinking of making one of the games on my project backlog and seeing how much I could make off it. Or creating one of the many ideas I have for websites and web apps and seeing where they go.
Is it hard to date with a trailer?
Would be more difficult depending on where you live. My plan was to talk to others online and see if I could find someone willing to live such a simple life with me, maybe starting with an LDR first (I'm sort of doing that already)
Not if it has a hitch.
Beginning to suspect this person is living in a trailer or cave and collecting info for their UniqueDating SaaS.
I'd argue that, out of white-collar jobs, it is actually still one of the least automatable. I.e., the rest of the jobs are likely going to get disrupted much faster because they are easier to automate (and have been the target of automation by the software industry over the past century). Whatever numbers we're seeing now may be too early to reflect this accurately.
Also, there are different metrics that are relevant, like dollar count vs. pure headcount. Cost cutting targets dollars; e.g., entry-level developers are still expensive compared to other jobs.
It's the least regulated (i.e., not at all), so it will be the first to be changed.
AI lawyers? Many years away.
AI civil engineers? Same thing, there is a PE exam that protects them.
You don’t need to perfect AI to the point of becoming credentialed professionals to gut job markets— it’s not just developers, or creative markets. Nobody’s worried that the world won’t have, say, lawyers anymore — they’re worried that AI will let 20% of the legal workforce do 100% of the requisite work, making the skill essentially worthless for the next few decades because we’d have way too many lawyers. Since the work AI does is largely entry-level work, that means almost nobody will be able to get a foothold in the business. Wash, rinse, repeat to varying levels across many white collar professions and you’ve got some real bad times brewing for people trying to enter the white collar workforce from now on— all without there being a single AI lawyer in the world.
Same thing for doctors. Turns out radiologists are fine, it's software engineers that should be scared.
We might end up needing 20% or so fewer doctors, because all that bureaucracy can be automated. A simple automated form pre-filler can save a lot of time. It's likely that hospitals will try saving there.
You know the difference between doctors and programmers? One has a regulated profession and a lobby; the other has neither. Actually, all the other has is the richest trove of open training data for AI companies among all professions (and it's not medicine).
Oh really?
https://medium.com/backchannel/how-technology-led-a-hospital...
I'm sure those who lost a job to software at some point are feeling a great deal of sympathy for developers who are now losing out to automation.
Despite being the target of a lot of schadenfreude, most software developers aren't working on automation.
Nice watching it tear down recruiters though.
Most "Software Engineering" is just applying the same code in slightly different contexts. If we were all smarter it would have been automated earlier through the use of some higher-level language.
> If we were all smarter
It's not really an intelligence thing. You could have the most intelligent agent, but if the structural incentives for that agent are, for example, "build and promote your own library for X for optimal career growth," you would still have massive fragmentation. And under the current rent-seeking capitalist framework, this is a structural issue at every level. Firefox and Chrome? Multiple competing OSes? How many JS libraries? Now sure, maybe if everyone were perfectly intelligent _and_ perfectly trusting, you could escape this.
Too bad engineers were "too important" to unionize, because their/our labor is "too special."
I think you could find 10,000 quotes from HN alone about why SDEs were immune to the labor-market struggles that would call for a union.
Oh well, good luck everyone.
I'm not necessarily opposed to unionization in general but it's never going to save many US software industry jobs. If a unionization drive succeeds at some big tech company then the workers might do well for a few years. But inevitably a non-union startup competitor with a lower cost structure and more flexible work rules will come along and eat their lunch. Then all the union workers will get laid off anyway.
Unionization kind of worked for mines and factories because the company was tied to a physical plant that couldn't easily be moved. But software can move around the world in milliseconds.
Indeed, just look at the CGI VFX industry of Hollywood. The US invented it and was the leader for a long time, but now it has been commodified, standardized, and run into the ground, because union or not, you can't stop US studios from offshoring the digital asset work to another country where labor is 80% cheaper than California and quality is 80% there. So the US is left making the SW tools that VFX artists use, as the cutting-edge graphics & GPU know-how is all clustered there.
Similarly, a lot of non-cutting-edge SW jobs will also leave the US as tooling becomes more standardized and other nations upskill themselves to deliver similar value at less cost in exchange for USD.
Unions _can_ protect against this, but they have to do it via lobbying the government for protectionism, tariffs, restricting non-union competition, etc.
This was when programmers were making software to time Amazon workers' bathroom breaks, so believing "this could never happen to me" was probably an important psychological crutch.
Saying “programmers” did this is about as useful as saying humans did it.
This is, if true, a fundamental shift in the value of labor. There really isn’t a non-Luddite way to save these jobs without destroying American tech’s productivity.
That said, I’m still sceptical it isn’t simply a reflection of an overproduction of engineers and a broader economic slowdown.
Yeah I agree that outsourcing and oversupply are the real culprits and AI is a smoke screen. The outcome is the same though.
> outcome is the same though
Not really. If it's overproduction, the solution is tighter standards at universities (and students exercising more discretion around which programmes they enroll in). If it's outsourcing, the solutions include labour organisation and, under this administration, immigration curbs and possibly services tariffs.
Either way, if it’s not AI the trend isn’t secular—it should eventually revert. This isn’t a story of junior coding roles being fucked, but one of an unlucky (and possibly poorly planning and misinformed) cohort.
It can be oversupply/outsourcing and also secular: you can have chronic oversupply due to a declining/maturing industry, because the number of engineers needed goes down every year and the pipeline isn't calibrated for that (academia has been dealing with this for a very long time now; look up the postdocalypse). Outsourcing, because as projects mature and new stuff doesn't come along to replace them, running maintenance offshore gets easier.
Software isn't eating the world. Software ate the world. New use cases have basically not worked out (metaverse!) or are actively harmful.
Unions work in physical domains that need labor “here and now”, think plumbers, electricians, and the like. You can’t send that labor overseas, and the union can control attempts at subversion via labor force importation. But even that has limitations, e.g. union factory workers simply having their factory shipped overseas.
Software development at its core can be done anywhere, anytime. Unionization would crank the offshoring that already happens into overdrive.
We're not "too important." All a union would do is create extra problems for us.
There are two possibilities:
a) This is a large scale administrative coordination problem
b) We don't need as many software engineers.
Under (a), unionizing just adds more administrators and exacerbates the problem; under (b), unions are ineffective and just shaft new grads, or, if they manage to be effective, kill your employer (and then no one has a job).
You can't just administrate away reality. The reason SWEs don't have unions is that most of us (unlike blue-collar labor) are intelligent enough to understand this. Additionally, there was something to be said about factory work, where the workers really were fungible and the business was capital-intensive; software development is almost the polar opposite, where there's no capital and the value is the theory the programmers have in their heads, making them a lot less fungible.
Finally, we do have legal tools like the GPL that actually give us a lot of negotiating power. If you work on GPL software, you can just tell your employer "behave or we'll take our ball and leave" if they do something stupid.
You said:
> All a union would do is create extra problems for us.
Then you said:
> a) This is a large scale administrative coordination problem
Pray tell: what is it that a union does, other than the latter?
Or is your position that “union” is some narrowly defined, undifferentiated structural artifact of a specific legal system?
Unions can only prevent automation up to a point. Really the only thing that could have reasonably prevented this would have been for programmers to not produce as much freely accessible training data (formerly known as "open source software").
Exactly. I am always so impressed by the fact that developers never see that open source is essentially them giving away free labor to giant corporations. Developers basically programmed their way out of a job, for free. It's the only profession that is proud to have its best work done on unpaid time and used for free by big corporations.
So your argument is that we're so special we deserve to hold back human progress in order to have a privileged life? If it's not that, what would you want a union to do in this situation?
I’d prefer that my family are financially stable over “human progress”. One benefits me and the other benefits tech companies. Easy choice.
If our ancestors had thought like that we'd all be very busy and "stable" doing subsistence farming like we were doing 10,000 years ago.
Better our children never have to work because the robots do everything and they inherited some ownership of the robots.
Do you really believe that all technological progress has bettered humanity? Where’s the four day work week we were promised? I thought automation was supposed to free us from labor.
I don't think all progress has benefited humanity, but I do think we've never worked less, or earned more, than at present.
I like human progress. I don’t like the apparent end goal that the entire wealth of the planet belongs to a few thousand people while the rest of us live in the mud.
Unions won't solve this for you. If a company decides it has enough automation to reduce the union workforce, that can happen the next time contracts get negotiated.
Either way, there are layoff provisions with union agreements.
Tell that to dock workers, who have successfully prevented US ports from being automated to the extent we see in, e.g., the PRC [0].
Hell, they're even (successfully) pushing back against automated gates! [1]
[0] https://www.cnn.com/2024/10/02/business/dock-workers-strike-...
[1] https://www.npr.org/2024/10/03/nx-s1-5135597/striking-dockwo...
Isn't that just delaying the inevitable? Yangshan Deep-Water Port in Shanghai is one of the most automated ports. Even though there are more people in China than in the US, China still automated its port.
I'm not making a value judgment on the specific case of dock workers, I'm rather saying that unions can and do prevent automation. If Software Devs had unionized earlier, a lot of positions would probably still be around.
The dock owner may not have a lot of alternatives to negotiating with the union. If devs unionize, the work can move.
In Hollywood, union bargaining bought some time at least. Unions did mandate limits on the use of AI for a lot of the creation process.
AI is still used in Hollywood but nobody is proud of it. No movie director goes around quoting percentages of how many scenes were augmented by AI or how many lines in the script were written by ChatGPT.
Unions would just delay the inevitable while causing other downsides: compressed salary bands, difficulty firing non-performers, union fees, an increased chance of corruption, etc.
For a recent example:
> Volkswagen has an agreement with German unions, IG Metall, to implement over 35,000 job cuts in Germany by 2030 in a "socially responsible" way, following marathon talks in December 2024 that avoided immediate plant closures and compulsory layoffs, according to CNBC. The deal was a "Christmas miracle" after 70 hours of negotiations, aiming to save the company billions by reducing capacity and foregoing future wage increases, according to MSN and www.volkswagen-group.com.
Unions wouldn't stop any of this, but professionalization would.
I mean, I still don't want to unionize with the guys who find `git` too complicated to use (which is apparently the majority of HN). Also, you guys all hate immigrants, which is not my vibe, sorry.
Then don't complain when some other group treats you the same way.
I really hope nobody had themselves convinced that software engineering couldn't be automated. Not with the code enterprise has been writing for decades now (lots and lots and lots of rules for gluing state to state, which are extremely structured but always just shy of being so structured that they were amenable to traditional finite-rule-based automation).
The goal of the industry has always been self-replacement. If you can't automate at least part of what you're working on you can't grow.
... unfortunately, as with many things, this meshes badly with capitalism when the question of "how do you justify your existence to society" comes up. Hypothetically, automating software engineering could lead to the largest open-source explosion in the history of the practice by freeing up software engineers to do something else instead of toil in the database mines... But in practice, we'll probably have to get barista jobs to make ends meet instead.
The experiences people are having when working with big, complex codebases don’t line up with your gloomy outlook. LLMs just fall apart beyond a certain project size, and then the tech debt must be paid.
Is it gloomy? I personally liken it to inventing the washing machine instead of doing laundry by hand, beating it against a washboard, for another hundred years.
If you want to know what will happen to software engineers in the US just follow the path of US factory workers in the 90s.
Universally? Nah.
It's just engineers getting high on their own supply. All the hype men for the software are software engineers (or adjacent).
Frankly, any time I see research indicating software engineering is at high risk of being automated, I outright dismiss it as pseudoscience. It ain't happening with current tech.