I'm tired of the anthropomorphization marketing behind AI driving this kind of discussion. In a few years, all this talk will sound as dumb as stating "MS Word spell checker will replace writers" or "Photoshop will replace designers".

We'll reap the productivity benefits from this new tool, create more work for ourselves, output will stabilize at a new level and salaries will stagnate again, as it always happens.

Generative AI is replacing writers, designers, actors, ... it is nothing like just a spell checker or Photoshop.

Every day, I see ads on YouTube with smooth-talking, real-looking AI-generated actors. Each one represents one less person that would have been paid.

There is no exact measure of correctness in design; one bad bit does not stop the show. The clients don't even want real art. Artists sometimes refer to commercial work as "selling out", referring to hanging their artistic integrity on the hook to make a living. Now "selling out" competes with AI which has no artistic integrity to hang on the hook, works 24 hours a day for peanuts and is astonishingly prolific.

> Every day, I see ads on YouTube with smooth-talking, real-looking AI-generated actors. Each one represents one less person that would have been paid.

Were AI-generated actors chosen over real actors, or was the alternative using some other low-cost method for an advertisement like just colorful words moving around on a screen? Or the ad not being made at all?

The existence of ads using generative AI "actors" doesn't prove that an actor wasn't paid. This is the same logical fallacy as claiming that one pirated copy of software represents a lost sale.

Yes, this. I recently used Midjourney to generate a super-convincing drone shot video for a presentation. The alternative would have been a stock photo.

Probably took me the same amount of time to generate a pleasing video as I would have spent browsing Shutterstock. Only difference is my money goes to one corporation instead of the other.

As far as the video is concerned, it adds a bit of a wow factor to get people interested, but ultimately it's the same old graphs and bullet points with words and numbers that matter. And those could just as well have been done on an overhead transparency in 1987.

A) J. Crew and others are using AI models instead of real models. Retail modeling was steady pay-the-bills work for models and actors and is actively being replaced by AI image generators— sometimes trained on images of a particular model they’re no longer paying. Writers and stock photographers are in much worse shape.

B) Even in cases where AI actors are used where there wouldn’t have been actors before, the skillset is still devalued, and even that modest insulation for higher-end work is almost certainly temporary. Someone doing a worse version of what you do for 1% of your pay affects the market, and saving 99% is great incentive for companies to change their strategy until the worse version is good enough.

It means that being a live actor is less of a differentiator. Of course great movie stars will remain, playing live, or animating computer characters, no matter. But simpler works like ads featuring a human now become more accessible.

Among other things, this will remove most entry-level jobs, making senior-level actors more rare and expensive.

I think this means that personal branding is going to get even more important than it already is (for example, people watching movies specifically because of Ryan Reynolds, or avoiding them because of Jared Leto).

It is likewise a fallacy that no pirated copy of software represents a lost sale.

Use of AI is exerting a downward pressure on artists and designers to get paid.

It's not true that AI is only servicing the pent-up demand for that kind of work from clients who would never have paid people to do it.

It's really both effects happening at once. AI is just like the invention of the assembly line, or the explosion of mass-produced consumer packaged goods starting with the first cotton gin. Automation allows a massive increase in the quantity of goods, and even when that quantity comes with tradeoffs in quality versus artisanally produced goods, they still come to dominate. Processed cheese or instant coffee is pretty objectively worse than the 'real' thing, but cheap mass production still made those products compelling for millions or billions of consumers.

You can still find a tailor who will hand-make you bespoke clothing, or sew your own clothes yourself (as even the boomer generation often did growing up), but tailored clothing is a tiny fraction of the clothing in circulation. Do tailors and artisanal cheese makers still exist? Yep, they are not extinct. But they are hugely marginalized compared to machine-made alternatives.

I’m not sure if your statements are actually correct. What you are implying is that there are fewer tailors today than in the past. And I’m not sure if that holds. I’m not even sure that their relative position on the income ladder has deteriorated.

In the time before automated T-shirt production, almost nobody bought clothes. They were just far too expensive. There were of course people that did. And those paid extremely well. But those kinds of tailors still exist!

At the same time, I do think that the comparison is less than apt, and a better one would be to the fate of proofreaders and copywriters. A significant number of those have been superseded by spellchecking tools, or will be by AI "reformulations".

Yet even here I'm not sure those jobs have seen a significant decline in absolute numbers, even while their relative frequency kind of obviously tends to zero.

the crazy thing is, I can get locally-roasted beans that are single-origin microlots from all over the world, in part because of the coffee boom that was a result of instant coffee and the desire for better.

I agree with your sentiment. But where I struggle is: to what degree does each of those ads "represent one less person who would have been paid", versus one additional advertiser who would not otherwise have been able to afford that medium?

Of course that line of reasoning reduces similarly to other automation / minimum wage / etc. discussions.

It reminds me of the piracy lawsuits that claimed damages as if every download would have been a sale

The extreme opposite idea that no unlicensed use of software is a lost sale is likewise a fantasy.

Obviously there's some mix of the two, but given that I've seen AI used (poorly) for both TV commercials (in expensive time slots) and billboards (expensive as well, I think, though I don't really know), where you know they can afford to pay "real people" to do it, there's definitely a noticeable amount of real replacement.

YouTube has the lowest-quality ads of any online platform I use, by several orders of magnitude. AI being used for belly-fat and erectile-dysfunction ads is not exactly good for its creative reputation.

Local governments in Brazil have already made ads using generative AI that were shown during prime-time TV hours[1].

You can argue that is a bad thing (local designers/content producers/actors/etc lost revenue, while the money was sent to $BigTech) or that this was a good thing (lower cost to make ad means taxpayer money saved, paying $BigTech has lower chance of corruption vs hiring local marketing firm - which is very common here).

[1]https://www.cnnbrasil.com.br/tecnologia/video-feito-com-inte...

I have no doubt there will be AI advertising. I bet it’s the primary untapped revenue stream. My argument is that it will be associated with cheap, untrustworthy products over time, even if it’s possible to spend more money and get better AI ads. Same thing as social/search ads.

There's a difference between taking one thing and putting something else in its spot, and truly REPLACING something. Yes, some ads have AI-generated actors. You can tell because they're "not quite right", which pulls your attention away from the message of the ad. Noticing AI in ads turns more people off than on, so AI ads are treated by a lot of people as an easy "avoid this company" signal. So those AI ads are in lieu of real actors, but not actually replacing them, because people don't want to watch AI actors in an ad. The ad ceases to be effective. The "replacement" failed.

Realistic video generation only became a thing in the last year or so.

How long do you suppose it will be before we can't tell the difference between it and reality anymore? A few years at the most. Then what?

I don't think AI will ever be able to compete with real actors, not in a meaningful way.

Animated films have competed for box office dollars since basically the dawn of cinema. Animated characters have fan followings.

Just wait; the stuff is coming. Ultra-realistic full-length feature films with compelling AI characters that are not only consistent from beginning to end, but appear across multiple features.

The public will swallow it up.

Animation is drawn by humans, not AI. That's why it sells, it still has heart and emotion in it.

And as people get more used to the patterns of AI it’s getting called out more and more.

> Every day, I see ads on YouTube with smooth-talking, real-looking AI-generated actors. Each one represents one less person that would have been paid.

The thing is that they would not have paid for an actor anyway. It's that having an "actor" and special effects for your ads now costs nothing, so why not?

The quality of their ads went up, the money changing hands did not change.

> Generative AI is replacing writers, designers, actors, ... it is nothing like just a spell checker or Photoshop.

For cheap stuff it's true. However, nobody wants to watch or listen to generated content, and this will wear thin outside the niche in which it takes hold and permanently replaces humans.

> Each one represents one less person that would have been paid

or equally, one more advert which (let's say rightly) wouldn't have been made.

seriously though, automation allows us to do things that would not have been possible or affordable before. some of these are good things.

Anecdata: I know writers, editors, and white collar non-tech workers of all kinds who use AI daily and like it.

When GPT-3.5 first landed, a lifelong writer/editor I know saw a steep decrease in jobs. A year later the jobs changed to "can you edit this AI-generated text to sound human", and now they continue to do normal editing for human or human-ish writing while declining the slop-correction deluge, because it is terrible work.

I can't help but see the software analogy for this.

I'm not a "real coder" either, but it sounds like the "No True Scotsman" trap when people say, “AI can’t be a real coder,” and then redefine “real coder” to mean something AI can’t currently do (like full autonomy or deep architectural reasoning). This makes the claim unfalsifiable and ignores the fact that AI already performs several coding tasks effectively. Yeah, I get it, context handling, long-horizon planning, and intent inference all stink, but the tools are all 'real' to me.

That's based on the assumption that models won't soon cross the threshold of autonomy and self-reflection that suddenly makes an escalating number of jobs (with cheap humanoids, even physical ones) automatable at ridiculous pricing. Even if this isn't certain, the likelihood could be considered quite high, and thus we urgently need a public debate / design process for the peaceful, post-commercial, post-competitive, open-access, post-scarcity economy some (the RBE / commoning community) have been sketching for years and years. It seems this development defies most people's sense of imagination - and that's precisely why we need to raise public awareness of the freedom and fun OPEN SOURCE EVERYTHING & Universal Basic Services could bring to our tormented world. Two billion people without access to clean water? We can do much better if we break free from our collective fixation on money as the only means and way to deal with things.

You say it as a joke, but spell check has replaced certain tiers of editors. And Photoshop has replaced certain tiers of designers.

Not a joke.

Proofreaders still exist, despite spell checker. Art assistants still exist, despite Photoshop. There's always more work to do, you just incorporate the new tools and bump the productivity, until it gets so commoditized it stops being a competitive advantage.

Saying AI "replaces" anyone is just a matter of rhetoric to justify lower salaries, as always.

Bad ones

Bad by modern standards, there was a point in time where even just compositing two images on top of each other with an alpha cutout was considered a complex task.

> all this talk will sound as dumb as stating "MS Word spell checker will replace writers" or "Photoshop will replace designers".

You cannot use just a spell checker to write a book (no matter how bad), or (non-AI) Photoshop plugins to automatically create meaningful artwork, replacing human intervention.

Business people "up the ladder" are already threatening with reducing the workforce and firing people because they can (allegedly) be replaced by AI. No writer was ever threatened by a spellchecker.

Hollywood studio execs are putting pressure on writers, and now they can leverage AI as yet another tool against them.

People are stupid, always have been - it took thousands of years to accept the brain as the seat of thought, because "heart beats faster when excited, so the heart must be the source of excitement".

Heck, people literally used to think the eyes are the source of light, since everything is dark when you close them.

People are immensely, incredibly, unimaginably stupid. It has taken a lot of miracles put together to get us where we are now…but the fundamentals of what we are haven’t changed.

You're confusing ignorance with stupidity. People at the time were coming to the best conclusions they could based on the evidence they had. That isn't stupid. If humans were truly "incredibly, unimaginably stupid" we wouldn't have even gotten to the point of creating agriculture, much less splitting the atom. We didn't get here through "miracles," we got here through hard work and intelligence.

Stupid is people in 2025 believing the world is flat and germ theory is a hoax. Ignorance becomes stupidity when our species stands on the shoulders of giants but some people simply refuse to open their eyes.

Ignorance is when you don’t know something. Stupidity is when you think you know something and are presented with evidence to the contrary, but you dismiss it because of something stupid (i.e., irrational).

Of course, all these words have some overlap. My larger point is, people rarely come to rational conclusions organically, and it takes decades to centuries for even the most empirically verifiable idea to permeate, especially in the face of misinformation campaigns or when against “common sense”.

> took thousands of years to accept brain as the seat of thought because “heart beat faster when excited, means heart is source of excitement”

So what you are saying is that beings without a central nervous system cannot experience "excitement"?

or perhaps the meaning of too many words has changed, and their context. When Hippocrates claimed that the brain was an organ to cool the blood, perhaps he meant that we use our thought to temper our emotions, i.e. what he said agrees with our modern understanding.

However, many people read Hippocrates and laugh at him, because they think he meant the brain was some kind of radiator.

Maybe because we stopped talking about "excitable" people as being "hot-blooded"

>or perhaps the meaning of too many words has changed, and their context. When Hippocrates claimed that the brain was an organ to cool the blood, perhaps he meant that we use our thought to temper our emotions, i.e. what he said agrees with our modern understanding.

The belief that the heart was the seat of thought and emotion was shared by numerous cultures[0], and was based on their naive interpretation of physiology and biology and cannot be dismissed as a modern misinterpretation of a single vague aphorism by a single person due to the preponderance of documentary evidence to the contrary from contemporary sources. Also, you're probably talking about Aristotle, not Hippocrates.

>Maybe because we stopped talking about "excitable" people as being "hot-blooded"

Also people still say "hot blooded" all the time.

[0]https://en.wikipedia.org/wiki/Cardiocentric_hypothesis

In a few years AI will have progressed a fair bit in a way that MS spell checker didn't.

> tired of anthropomorphization

The thing is trained on heaps and heaps of human output. You better anthropomorphize if you want to stay ahead of the curve.

I'm tired of all the "yet another tool" reductionism. It reeks of cope.

It took under a decade to get AI to this stage - where it can build small scripts and tiny services entirely on its own. I see no fundamental limitations that would prevent further improvements. I see no reason why it would stop at human level of performance either.

There’s a saying that humans are terrible at predicting exponential growth. I believe we need another: those who expect exponential growth have a tough time not expecting it.

It’s not under a decade for AI to get to this stage but multiple decades of work, with algorithms finally able to take advantage of GPU hardware to massively excel.

There’s already a feeling that growth has slowed; I’m not seeing the rise in performance at coding tasks that I saw over the past few years. I see no fundamental improvements that would suggest exponential growth or human-level performance.

I'm not sure if there will be exponential growth, but I also don't believe that it's entirely necessary. Some automation-relevant performance metrics, like "task-completion time horizon", appear to increase exponentially - but do they have to?

All you really need is for performance to keep increasing steadily at a good rate.

If the exponential growth tops out, and AI only gains a linear two days per year of "task-completion time horizon" once it does? It'll still be able to complete a small scrum sprint autonomously by 2035, edging further into "seasoned professional developer" territory with each passing year, little by little.
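To make that "linear fallback" scenario concrete, here is a back-of-the-envelope sketch. The starting horizon (~1 working day in 2025) and the +2 days/year gain are assumptions picked to match the comment above, not measured values:

```python
# Hypothetical projection: what if horizon growth flattens to linear?
START_YEAR = 2025
START_HORIZON_DAYS = 1   # assumed autonomous task horizon today (~1 day)
GAIN_PER_YEAR_DAYS = 2   # assumed linear gain after exponential growth tops out

def horizon(year: int) -> int:
    """Projected task-completion time horizon, in days, for a given year."""
    return START_HORIZON_DAYS + GAIN_PER_YEAR_DAYS * (year - START_YEAR)

print(horizon(2035))  # 21 days -- roughly a two-week sprint, with slack
```

Even under these deliberately pessimistic assumptions, the horizon reaches sprint length within a decade, which is the point being made.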

> ... entirely on its own

OK, OK! Just like what you can find, with much less computation involved, using a search engine: forums and websites with your exact question, or something similar, or a snippet [0] that helps you resolve your doubt - all of it free of tokens and of companies profiting off what the internet has built. Even FOSS generative AI can hand billions of USD to GPU manufacturers.

[0] just a silly script that can lead a bunch of logic: https://stackoverflow.com/questions/70058132/how-do-i-make-a...

You can’t see any bottlenecks? Energy? Compute power? Model limitations? Data? Money?

There are more of all these bottlenecks in the proprietary and open-source project worlds, which have yet to collaborate among themselves to unify the patterns in their disparate codebases and algorithms into a monolith that compresses representations of repeated structures, edited for free by a growing userbase of millions and by the maturing market of programmers who grew up with cheap GPUs and reliable optimization libraries.

The article's subtitle is currently false: people collaborate more with the works of others through these systems, and it would be extremely difficult to incentivize any significant number of the enterprise software shops, numerics labs, etc. to share code. Even joint ventures like Accenture do not scrape all their own private repos and report their patterns back to Microsoft every time they re-implement the same .NET systems over and over.

So maybe the truth is somewhere in between - there is no way AI is not going to have a major societal impact - like social media.

If we don't see some serious fencing, I will not be surprised by some spectacular AI-caused failures in the next 3 years that wipe out companies.

Business typically follows a risk-based approach to things, and in this case entire industries are yolo'ing.

> I see no fundamental limitations

How about the fact that AI is only trained to complete text and literally has no "mind" within which to conceive or reason about concepts? Fundamentally, it is only trained to sound like a human.

The simplest system that acts entirely like a human is a human.

An LLM base model isn't trained for abstract thinking, but it still ends up developing abstract thinking internally - because that's the easiest way for it to mimic the breadth and depth of the training data. All LLMs operate in abstracts, using the same manner of informal reasoning as humans do. Even the mistakes they make are amusingly humanlike.

There's no part of an LLM that's called a "mind", but it has a "forward pass", which is quite similar in function. An LLM reasons in small slices - elevating its input text to a highly abstract representation, and then reducing it back down to a token prediction logit, one token at a time.
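The one-token-at-a-time shape of that computation can be sketched as a toy autoregressive loop. Everything here is illustrative - `forward` is a seeded-random stand-in for the real network (which would lift tokens through many transformer layers before projecting back to logits), and the vocabulary is tiny:

```python
import math
import random

VOCAB_SIZE = 16  # toy vocabulary; real models have ~100k entries

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def forward(tokens):
    """Hypothetical stand-in for one forward pass of a model:
    maps the full token sequence to one logit per vocabulary entry."""
    random.seed(sum(tokens))  # deterministic fake "model weights"
    return [random.random() for _ in range(VOCAB_SIZE)]

def generate(prompt, n_tokens):
    tokens = list(prompt)
    for _ in range(n_tokens):
        probs = softmax(forward(tokens))  # one full forward pass per token
        # greedy decoding: pick the single most likely next token
        next_tok = max(range(VOCAB_SIZE), key=probs.__getitem__)
        tokens.append(next_tok)
    return tokens

print(generate([1, 2, 3], 4))
```

The point of the sketch is structural: whatever abstraction happens inside `forward`, the observable interface is just "sequence in, next-token distribution out", repeated.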

It doesn’t develop any thinking, it’s just predicting tokens based on a statistical model.

This has been demonstrated so many times.

They don’t make mistakes. It doesn’t make any sense to claim they do because their goal is simply to produce a statistically likely output. Whether or not that output is correct outside of their universe is not relevant.

What you’re doing is anthropomorphizing them and then trying to explain your observations in that context. The problem is that doesn’t make any sense.

When you reach into a "statistical model" and find that it has generalized abstracts like "deceptive behavior", or "code error"? Abstracts that you can intentionally activate or deactivate - making an AI act as if 3+5 would return a code error, or as if dividing by zero wouldn't? That's abstract thinking.

Those are real examples of the kind of thing that can be found in modern production grade AIs. Not "anthropomorphizing" means not understanding how modern AI operates at all.
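Mechanically, the "activate or deactivate" part is surprisingly simple. A minimal sketch of activation steering, with all vectors and the "code error" direction made up for illustration - in real interpretability work, such directions are found by contrasting the model's hidden activations across many examples:

```python
def steer(hidden, direction, strength):
    """Shift a hidden-state vector along a concept direction.
    Positive strength amplifies the concept; negative suppresses it."""
    return [h + strength * d for h, d in zip(hidden, direction)]

# Hypothetical 4-dim hidden state and a hypothetical "code error" direction.
hidden = [0.2, -0.1, 0.5, 0.0]
error_direction = [1.0, 0.0, -0.5, 0.25]

amplified  = steer(hidden, error_direction, +2.0)  # act as if there IS an error
suppressed = steer(hidden, error_direction, -2.0)  # act as if there is NOT

print(amplified)
print(suppressed)
```

Real models have thousands of dimensions per layer, but the intervention is the same kind of vector addition applied to activations mid-forward-pass.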

I don't think you have any idea what you're talking about at all.

You've clearly read a lot of social media content about AI, but have you ever read any philosophy?

Almost all philosophy is incredibly worthless in general, and especially in application to AI tech.

Anything that actually works and is in any way useful is removed from philosophy and gets its own field. So philosophy is left as, largely, a collection of curios and failures.

Also, I would advise you to never discuss philosophy with an LLM. It might be a legitimate cognitohazard.

How exactly do you presume to make an argument about thought, and about whether or not an LLM exhibits genuine thought and intelligence, without philosophy?

Not to mention the effect of formal logic in computer science

By comparing measurable performance metrics and examining what little we know of the internal representations.

If you don't have anything measurable, you don't have anything at all. And philosophy doesn't deal in measurables.

How do you know what is, isn't, could be, or couldn't be measurable?

You're not being serious.

> The simplest system that acts entirely like a human is a human.

LLM's do not act entirely like a human. If they did, we'd be celebrating AGI!

They merely act sort of like a human. Which is entirely expected - given that the datasets they're trained on only capture some facets of human behavior.

Don't expect them to show mastery of spatial reasoning or agentic behavior or physical dexterity out of the box.

They still capture enough humanlike behavior to yield the most general AI systems ever built.

We see massive initial growth followed by a slowdown constantly.

There is zero reason to think AI is some exception that will continue to exponentially improve without limit. We already seem to be at the point of diminishing returns. Sinking absurd amounts of money and resources to train models that show incremental improvements.

To get this far they have had to spend hundreds of billions and have used up the majority of the data they have access to. We are at the point of trying to train AI on generated data and hoping that it doesn't just cause the entire thing to degrade.

Your comment reeks of hype. No evidence whatsoever for your prediction, just an assertion that you personally see no reason it won't come true.

It took closer to 100 years for AI to get to this stage. Check out: https://en.wikipedia.org/wiki/History_of_artificial_intellig...

I suspect once you have studied how we actually got to where we are today, you might see why your lack of seeing any limitations may not be the flex you think it is.

> I see no fundamental limitations that would prevent further improvements

How can you say this when progress has so clearly stagnated already? The past year has been nothing but marginal improvements at best, culminating in GPT-5 which can barely be considered an upgrade over 4o in terms of pure intelligence despite the significant connotation attached to the number.

Marginal improvements? Were you living under a rock for the past year?

Even o1 was a major, groundbreaking upgrade over 4o. RLVR with CoT reasoning opened up an entire new dimension of performance scaling. And o1 is, in turn, already obsoleted - first by o3, and then by GPT-5.

When are you starting the clock? AI has been a topic of research for over 70 years.

>> It reeks of cope.

haha, well said, I've got to remember that one. HN is a smelly place when it comes to AI coping.

I’ve seen comments here claiming that this site is either a bunch of coders coping about the limitations of AI and how it can’t take their job, or a bunch of startup dummies totally on the AI hype train.

Now, there’s a little room between the two—maybe the site is full of coders on a cope train, hoping that we’ll be empowered by nice little tools rather than totally replaced. And, ya know, multiple posters with multiple opinions, some contradictions are expected.

But I do find it pretty funny to see the multiple posters here describe the site they are using as suffering from multiple, contradictory, glaringly obvious blindspots.