I'm tired of all the "yet another tool" reductionism. It reeks of cope.

It took under a decade to get AI to this stage - where it can build small scripts and tiny services entirely on its own. I see no fundamental limitations that would prevent further improvements. I see no reason why it would stop at human level of performance either.

There’s this saying that humans are terrible at predicting exponential growth. I believe we need another saying: those who expect exponential growth have a tough time not expecting it.

It didn’t take under a decade for AI to get to this stage, but multiple decades of work, with algorithms finally able to take advantage of GPU hardware and excel massively.

There’s already a feeling that growth has slowed; I’m not seeing the rise in performance at coding tasks that I saw over the past few years. I see no fundamental improvements that would suggest exponential growth or human-level performance.

I'm not sure if there will be exponential growth, but I also don't believe that it's entirely necessary. Some automation-relevant performance metrics, like "task-completion time horizon", appear to increase exponentially - but do they have to?

All you really need is for performance to keep increasing steadily at a good rate.

If the exponential growth tops out, and AI only gains a linear two days per year of "task-completion time horizon" once it does? It'll be able to complete a small scrum sprint autonomously by 2035, edging further into "seasoned professional developer" territory with each passing year, little by little.
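
For a back-of-the-envelope feel, here's a tiny sketch of that linear scenario in Python. Every number in it (top-out year, starting horizon, sprint length) is my own assumption for illustration, not a measurement:

    # Ballpark only: all of these numbers are assumptions, not measurements.
    topout_year = 2026        # assumed year the exponential growth stops
    horizon_at_topout = 1.0   # assumed autonomous task horizon then, in working days
    gain_per_year = 2.0       # the linear gain from the comment: two days per year
    sprint_length = 10.0      # a small two-week scrum sprint, ~10 working days

    years_needed = (sprint_length - horizon_at_topout) / gain_per_year
    print(topout_year + years_needed)
    # -> 2030.5 with these particular numbers; a later top-out or a longer
    #    sprint pushes the crossover toward 2035.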

> ... entirely on its own

OK, OK! Just like what you can find, with much less computation power involved, using a search engine: forums and websites that have, if not your exact question, something similar, or a snippet [0] that helps you solve your doubt... all of that free of tokens and of companies profiting off what the internet has built! Even FOSS generative AI can hand billions of USD to GPU manufacturers.

[0] Just a silly script that can carry a bunch of logic: https://stackoverflow.com/questions/70058132/how-do-i-make-a...

You can’t see any bottlenecks? Energy? Compute power? Model limitations? Data? Money?

There are more of all of these bottlenecks across the proprietary and open-source project worlds, which have yet to collaborate amongst themselves to unify the patterns in their disparate codebases and algorithms into a monolith designed to compress representations of repeated structures, edited for free by a growing userbase of millions and by the maturing market of programmers who grew up with cheap GPUs and reliable optimization libraries.

The article's subtitle is currently false: people collaborate more with the work of others through these systems, and it would be extremely difficult to incentivize any equally significant number of the enterprise software shops, numerics labs, etc. to share code. Even joint ventures like Accenture do not scrape all their own private repos and report their patterns back to Microsoft every time they re-implement the same .NET systems over and over.

So maybe the truth is somewhere in between - there is no way AI is not going to have a major societal impact - like social media.

If we don't see some serious fencing, I will not be surprised by some spectacular AI-caused failures in the next 3 years that wipe out companies.

Business typically follows a risk-based approach to things, and in this case entire industries are yolo'ing.

> I see no fundamental limitations

How about the fact that AI is only trained to complete text and literally has no "mind" within which to conceive or reason about concepts? Fundamentally, it is only trained to sound like a human.

The simplest system that acts entirely like a human is a human.

An LLM base model isn't trained for abstract thinking, but it still ends up developing abstract thinking internally - because that's the easiest way for it to mimic the breadth and depth of the training data. All LLMs operate in abstracts, using the same manner of informal reasoning as humans do. Even the mistakes they make are amusingly humanlike.

There's no part of an LLM that's called a "mind", but it has a "forward pass", which is quite similar in function. An LLM reasons in small slices - elevating its input text to a highly abstract representation, and then reducing it back down to next-token prediction logits, one token at a time.
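
To make that "one slice at a time" picture concrete, here's a minimal sketch using the Hugging Face transformers library, with gpt2 standing in for a production model (the model and prompt are my own picks, purely for illustration):

    # One forward pass = one "slice": the whole prefix goes in,
    # logits over the next token come out. Model choice is arbitrary.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("Dividing by zero in Python raises a", return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)                  # a single forward pass
    next_logits = out.logits[0, -1]         # one score per vocabulary token
    print(tok.decode(int(next_logits.argmax())))  # most likely next token

    # Generation is just this pass repeated, appending one token each time.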

It doesn’t develop any thinking, it’s just predicting tokens based on a statistical model.

This has been demonstrated so many times.

They don’t make mistakes. It doesn’t make any sense to claim they do because their goal is simply to produce a statistically likely output. Whether or not that output is correct outside of their universe is not relevant.

What you’re doing is anthropomorphizing them and then trying to explain your observations in that context. The problem is that doesn’t make any sense.

When you reach into a "statistical model" and find that it has generalized abstracts like "deceptive behavior", or "code error"? Abstracts that you can intentionally activate or deactivate - making an AI act as if 3+5 would return a code error, or as if dividing by zero wouldn't? That's abstract thinking.

Those are real examples of the kind of thing that can be found in modern production-grade AIs. Refusing to "anthropomorphize" means not understanding how modern AI operates at all.
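
What's being described there is roughly what the interpretability literature calls a steering vector. Here's a rough sketch of the mechanics in PyTorch, with gpt2, layer 8, and a random direction as stand-ins - a real setup would extract the direction from contrasting activations rather than use random noise:

    # Rough sketch of "activating a concept" by adding a steering vector.
    # gpt2, layer index 8, and the random direction are placeholders only.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    direction = torch.randn(model.config.n_embd)  # placeholder "concept" direction
    alpha = 4.0                                   # steering strength

    def steer(module, inputs, output):
        # transformer blocks return a tuple; hidden states are the first element
        return (output[0] + alpha * direction,) + output[1:]

    handle = model.transformer.h[8].register_forward_hook(steer)
    ids = tok("3 + 5 =", return_tensors="pt")
    print(tok.decode(model.generate(**ids, max_new_tokens=8)[0]))
    handle.remove()  # remove the hook to restore normal behavior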

I don't think you have any idea what you're talking about at all.

You've clearly read a lot of social media content about AI, but have you ever read any philosophy?

Almost all philosophy is incredibly worthless in general, and especially in application to AI tech.

Anything that actually works and is in any way useful is removed from philosophy and gets its own field. So philosophy is left as, largely, a collection of curios and failures.

Also, I would advise you to never discuss philosophy with an LLM. It might be a legitimate cognitohazard.

How exactly do you presume to make an argument about thought, and whether or not an LLM exhibits genuine thought and intelligence, without philosophy?

Not to mention the influence of formal logic on computer science.

By comparing measurable performance metrics and examining what little we know of the internal representations.

If you don't have anything measurable, you don't have anything at all. And philosophy doesn't deal in measurables.

How do you know what is, isn't, could be, or couldn't be measurable?

You're not being serious.

> The simplest system that acts entirely like a human is a human.

LLMs do not act entirely like a human. If they did, we'd be celebrating AGI!

They merely act sort of like a human. Which is entirely expected - given that the datasets they're trained on only capture some facets of human behavior.

Don't expect them to show mastery of spatial reasoning or agentic behavior or physical dexterity out of the box.

They still capture enough humanlike behavior to yield the most general AI systems ever built.

We see massive initial growth followed by a slowdown constantly.

There is zero reason to think AI is some exception that will continue to exponentially improve without limit. We already seem to be at the point of diminishing returns, sinking absurd amounts of money and resources into training models that show incremental improvements.

To get this far they have had to spend hundreds of billions and have used up the majority of the data they have access to. We are at the point of trying to train AI on generated data and hoping that it doesn’t just cause the entire thing to degrade.

Your comment reeks of hype. No evidence whatsoever for your prediction, just an assertion that you personally don't see it failing to come true.

It took closer to 100 years for AI to get to this stage. Check out: https://en.wikipedia.org/wiki/History_of_artificial_intellig...

I suspect that once you have studied how we actually got to where we are today, you might see why your inability to see any limitations may not be the flex you think it is.

> I see no fundamental limitations that would prevent further improvements

How can you say this when progress has so clearly stagnated already? The past year has been nothing but marginal improvements at best, culminating in GPT-5 which can barely be considered an upgrade over 4o in terms of pure intelligence despite the significant connotation attached to the number.

Marginal improvements? Were you living under a rock for the past year?

Even o1 was a major, groundbreaking upgrade over 4o. RLVR with CoT reasoning opened up an entirely new dimension of performance scaling. And o1 is, in turn, already obsolete - superseded first by o3, and then by GPT-5.

When are you starting time from? AI has been a topic of research for over 70 years.

>> It reeks of cope.

haha, well said, I've got to remember that one. HN is a smelly place when it comes to AI coping.

I’ve seen comments here claiming that this site is either a bunch of coders coping about the limitations of AI and how it can’t take their job, or a bunch of startup dummies totally on the AI hype train.

Now, there’s a little room between the two—maybe the site is full of coders on a cope train, hoping that we’ll be empowered by nice little tools rather than totally replaced. And, ya know, multiple posters with multiple opinions, some contradictions are expected.

But I do find it pretty funny to see multiple posters here describe the site they are using as suffering from multiple, contradictory, glaringly obvious blind spots.