I am a bit tired of such discussions.

I don't care if LLMs are good at coding or bad at it (in my experience the answer is "it depends"). I don't care how good they are at anything else. What matters in the end is that this tech is not here to empower a common person (although it could). It is not here to make our lives better, more worthwhile, more satisfying (it could do these things as well). It is here to reduce our agency, to make it easier to fire us, to put us in an even more precarious position, to funnel even more wealth from those who have little to those who have a lot.

Yet what I see are pigs discussing the usefulness of a bacon-making machine just because it also happens to be able to produce tasty soybean feed. They forget that it was not soybean feed their owner bought the machine for, and that their owner expects a return on that investment.

> What matters in the end is that this tech is not here to empower a common person (although it could).

How do you figure? 20 dollars/month is insanely cheap for what OpenAI/Anthropic/Google offer. That absolutely qualifies as "empowering a common person". It lowers barriers!

A lot of the anti-AI sentiment on HN concerns people losing their jobs. I don't think this will happen: programmers who know what they're doing are going to be way, way more effective at using AIs to generate code than others.

But even if it is true and we do see job losses in tech: are software devs really "in a precarious position"? Do they really qualify as "those that have little"? Seems like a fantasy to me. Computer programmers have done great over the past 30 years.

More broadly, anti-AI sentiment comes from people who dislike change. It's hard to argue someone out of that position. You're allowed to prefer stasis. But the world moves on and I think it's best to remain optimistic, keep an open mind, and make the most of it.

> I don't think this will happen

Block just laid off 40% of their company citing AI.

Tech companies have been laying off employees for a while now. I think it's mostly due to pandemic overhiring and higher interest rates but I suppose we'll see.

> Block just laid off 40% of their company

Because the company was being horribly run and over hired and "pivoted to blockchain" for no fucking reason.

> citing AI.

Because it's 2026 and they thought that would work to bullshit a few people about point one, which apparently it did.

> It is here to reduce our agency, to make it easier to fire us, to put us in an even more precarious position

Could be. It could also end up freeing us from every commercial dependency we have. Write your own OS, your own mail app, design your own machinery to farm with.

It’s here, so I don’t know where you’re going with “I’m unhappy this is happening and someone should do something”

> It could also end up freeing us from every commercial dependency we have

Yeah, companies that develop and push this tech definitely have this in mind.

> I don’t know where you’re going with “I’m unhappy this is happening and someone should do something”

I am not surprised because I didn't write anything like it.

> > I don’t know where you’re going with “I’m unhappy this is happening and someone should do something”

> I am not surprised because I didn't write anything like it.

You're right, there was no "someone should do something" call to action in your original comment.

It's also worth noting that the "our" in that sentence is just SWEs, who are a pretty small group in the grand scheme of things. I recognize that's a lot of HN, but it still bears considering in terms of the broader impact outside that group.

I'm a small business owner, and AI has drastically increased my agency. I can do so much more - I've built so many internal tools and automated so many processes that let me spend my time on things I care about, both within the business and outside it, like spending time with my kids.

It is, fortunately, and unfortunately, the nature of a lot of technology to disempower some people while making lives better for others. The internet disempowered librarians.

> It's also worth noting that the "our" in that sentence is just SWEs

It isn't; it's just a matter of seeing ahead of the curve. Delegating tasks to AI and agents by necessity leads to atrophy of the skills being delegated. Using AI to write code reduces people's capability to write code. Using AI for decision-making reduces capability for making decisions. Using AI for math reduces capability for doing math. Using AI to formulate opinions reduces capability to formulate opinions. Using AI to write summaries reduces capability to summarize. And so on. And, by nature, less capability means less agency.

Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.

Not to mention utilizing AI for control, spying, surveillance, and coercion. Do I need to explain how control is opposed to agency?

> It could also end up freeing us from every commercial dependency we have. Write your own OS, your own mail app, design your own machinery to farm with.

Lmfao, LLMs can barely count rows in a spreadsheet accurately; this is just batshit crazy.

edit: also, the solution here isn't that everyone writes their own software (based on open-source code available on the internet, no doubt); we just use that open-source software, and people learn to code and improve it themselves instead of off-loading it to a machine

This is one of those things where people who don't know how to use tools think they're bad, like people who would write whole sentences into search engines in the 90s.

LLMs are bad at counting the number of rows in a spreadsheet. LLMs are great at "write a Python script that counts the number of rows in this spreadsheet".
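To illustrate the distinction: a minimal sketch of the kind of script that second prompt tends to produce, assuming the "spreadsheet" is a CSV file with a header row (the function name and the CSV assumption are mine, not from the thread):

```python
import csv

def count_rows(path):
    """Count data rows in a CSV file, excluding the header row."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader, None)        # skip the header row, if present
        return sum(1 for row in reader if row)  # ignore blank lines
```

The point is that the counting happens deterministically in code, not in the model's token-by-token guesswork; the LLM only has to get the (trivial) program right once.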

Do you think asking any LLM in the next 100 years to "write a Python script that generates an OS" will work?

What happens when they decide it's a national security threat and an act of domestic terrorism to use AI to undermine commercial dependencies? We're all acting like AI isn't being invented within the context of and used by a fascist regime.

At some point, if most people lose their jobs, you have no market to sell your services to. So either new jobs have to be created to keep the capitalism machine running, or you have to provide for the needs of every human being from whatever you're doing with your AI. Otherwise, a lot of hungry people revolt and you get violence against these businesses.

I think new jobs will be created because AI is always limited by hardware and its current capabilities. Businesses, in order to compete, want to do things their competitors aren't currently doing. Those business needs always go beyond the current technological capabilities until the tech catches up and then they lather, rinse, repeat.

The economy is going to collapse with the war anyway. (https://www.youtube.com/watch?v=4Ql24Z8SIeE&t=247s)

> Otherwise, a lot of hungry people revolt and you have violence against these businesses.

With shrinking and aging population?

Demand full automation. Demand universal basic income. Notice how the latter is nearly absent from the conversation.

Another distraction is AGI as a danger to humanity; the only danger is people...

> the only danger is people...

Simply put, no it is not.

But on the flip side, the first danger with AI is people.

Over the longer term it will look like this: the rich "win" the world by using AI to enslave the rest of mankind and claim ownership over everything. This will suck and a lot of us will die.

The problem is that this doesn't solve the greed that caused the problem in the first place. The world will still be limited in resources, which will end with the rich in a dick-measuring contest, and to win that contest they will put more and more power in AI as they connive and fight each other. Eventually the AI has enough power that it kills us all, intentionally or not.

We'll achieve nearly unlimited capability long before we solve the problem of unlimited greed and that will spell our end.

I guess you didn't read the article?

Keep guessing