I fully agree with antirez about the importance of AI and its benefits for coders. In the last few months we've seen a big jump, and we are all in the middle of the biggest technological revolution since the internet. He summarised the benefits, but omitted the rest.
Why shouldn't we be anti-AI? Why, in his opinion, is it all just "hype"? I didn't find any answer in his post. He doesn't analyse the cons of AI or explain why some people might be anti-AI. He skipped the hard part and wrote a mild article that republishes the narrative already spreading on every social network.
Edit for clarification: I don't consider people who think LLMs don't work to be anti-AI; they are simply wrong. I consider anti-AI the people who are worried about how this technology will impact society in many ways that are hard to predict, including the future of software engineering.
From a purely business and career perspective, being anti-AI is a self-own unless you work for one of the niche companies that take an anti-AI stance. Yes, they exist. But if a company is building, supporting, or consulting on any product where timing matters and there's competition (the vast majority), it will be in its best interest to nudge employees to speed up with AI.
I do think that at least being proficient with LLMs right now will help you with whatever comes next, just because you'll build intuition around them. Being anti-AI might negatively affect one's employability, especially for younger people who haven't accumulated decades of seniority or connections.
> From a purely business and career perspective, being anti-AI is a self-own
From a purely business and career perspective, being anti-blockchain/NFT/online-gambling/adtech/fascism (at least for now, in the US)/etc. is a self-own too.
I'm sure everybody who makes a choice against those things knows it.
Thankfully, purely business and career perspectives don't dictate everything.
There are plenty of non-blockchain, non-NFT, non-gambling, non-adtech, non-fascist software jobs. In fact, the vast majority of software jobs are. You can refuse to work with all of these things and not notice a meaningful difference in career opportunities.
If you refuse to work with AI, however, you're already significantly limiting your opportunities. And at the pace things are going, you'll probably find yourself constrained to a small niche sooner rather than later.
If your argument is that there are more jobs requiring morally dubious practices (using people's IP without licensing it, etc.) than jobs that don't, I don't think that's news.
There are always more shady jobs than ethically satisfying ones. There are increasingly more jobs in prediction markets and other sorts of gambling, and in adtech (Meta, Google). Moral compromise pays.
But if you really think about it and set limits on what is acceptable for you to work on (interesting new challenges, no morally dubious practices like taking IP for ML training, etc.), then you simply don't have the FOMO of "I am sacrificing my career" when you screen those jobs out. Those jobs just don't exist for you.
Also, people who tag everybody like that as "anti-AI" tinfoil-hatters are making a straw-man argument. Most people with an informed opinion don't object to the underlying tech on its own; they object to the ways it is applied and rolled out (unsustainably, exploiting ordinary people and the open-source ecosystem), the confused hype around it, the circular investment, and so on. Being vocal about these matters does not make one an unemployable pariah in the slightest, especially considering that most jobs these days build on open source, and being against license-violating LLMs is being pro sustainable open source.
> There are always more shady jobs than ethically satisfying ones. There are increasingly more jobs in prediction markets and other sorts of gambling, and in adtech (Meta, Google). Moral compromise pays.
I would say this is not about the final product but about the way of creating it, akin to writing your code in TextPad vs. VS Code. IMO, having a moral stance on AI-generated art is valid, but having one on AI-generated code isn't, simply because I don't consider code to be art.
I've been writing code for about 20 years at this point, through literally every stage of my life. Personally, I'd judge a person who uses AI to copy someone's art, but someone who uses AI to generate code gets a pass from me. That said, a person who considers code to be art (I have friends like that, so I definitely get the argument!) would not agree with me.
> Most people with an informed opinion don't like the ways this tech is applied
Yeah, I'm not sure this tracks. I don't think LLMs are good at very specialized or ultra-hard tasks, but for any boilerplate coding task and all the CRUD stuff, they would speed up any senior engineer's task completion.
> I would say this is not about the final product but about the way of creating it.
It is the same logic as not wanting to use a blockchain/crypto platform to get paid. If you believe it is mostly used for crime, you don't want to use it to get paid, to avoid legitimizing a bad thing. Even if there's no doubt you will get paid and the end result is the same, you know you would be creating a harmful side effect.
If some way of creating a product supports something bad (and simply using any LLM entails helping train it and benefiting the company running it), I can choose another way.
> There's always more shady jobs
That is because your views appear to align with staunch progressives: rejecting conservative politics ("fascism"), AI, advertising, and gambling.
From my side, the only thing I would be hesitant about is gambling. The rest is arguably not objectively bad but rather a matter of your personal or political opinion.
There seems to be some confusion. I wouldn't call conservative politics as a whole fascist; that's your choice of words. I also doubt that "anti-AI progressive" is a thing.
> The rest is arguably not objectively bad but rather a matter of your personal or political opinion.
Nothing is objectively bad. Plenty of people argue that gambling should be legal, if anything, on the basis of personal freedom. All of this is a matter of personal choice.
(Incidentally, while you are putting people in buckets like that, note that one person can very much simultaneously be against gambling and drug legalization and be a pro-personal-freedom, open-source libertarian maximalist. Things are much more nuanced than "progressive" vs. "conservative"; whatever you put in those buckets is on you.)
That's fair enough.
It's just that, in my experience, political discussions online are very partisan. "Fascism" in relation to the current US government, combined with anti-AI sentiment, is almost always a sure indicator of a certain bucket of politics.
Maybe I am spending too much time on Reddit.
To play devil's advocate: people using AI are not significantly more productive on brownfield applications. If GP manages to find a big company (tech or non-tech) that doesn't particularly care about AI usage, just about delivering features, and where software development is not the bottleneck (as is the case in the majority of old-school companies), he or she would be fine.
If your bottleneck is not typing speed, you'll be fine.
There is no hard part. The anti-AI position has simply become trite. The claim is that agentic coding does not work. Today, it does work.
Some people are also opposed because of the negative externalities of building and running AI systems (environmental consequences, intellectual property theft), even if they understand that agentic coding "works". This is a valid position.
I have not seen those arguments in the context of what I would consider anti-hype. But in any case, there are certainly issues attached to the usage of AI more generally.
It only works for languages and frameworks that are already in the training data (duh). It is still mostly useless when you need to create something from scratch in a niche or unstable language.
That, and you also can't get the amazing results if you're poor or have bad internet.
Good thing almost all programming falls into the former category. Most of the economy runs on well-established languages. Billions and billions of dollars.
Not true. I built some tools in Hare, which almost certainly isn't in the training data to any significant extent. It was more work than having it write Go or Rust, but it got the job done. It had to curl the docs a fair bit.
Try Opus 4.5 and update your priors. This was certainly true six or more months ago; it is no longer the case.
We are using the latest stuff. Our experience is still not great.
Why do you guys always assume we don't, as though it's easy to accidentally use the oldest models?
I have a feeling that the HN hypebeasts overlap heavily with the folks who used to copy-paste blindly from Stack Overflow.
It's an easy deflection: dismiss any opinion because "you're using it wrong" or "you're not on the latest model."
Good against any criticism more than a month old.
Use some other nonsense, fear-inducing argument in the meantime, continue gathering gobs of VC money, get your bag, and keep going until the bubble pops.
In all fairness, and putting hype and anti-hype aside, I'm really interested to see the actual value of LLM/agent services after the VC subsidies dry up. Would people be willing to pay for these services at 10x the current price?
I read the exact same thing six months ago.
Yeah bro, thanks for the tip, and a few shillings to you, good sir. I was here still using GPT-2 because they said GPT-3 might be too dangerous.
That's true for most people, too. You are trying too hard.
It works for some things, not for everything.