Coding is one topic, but the big one is agentic AI.

You will have an agent acting as your SEO expert. This agent will be able to use common tools like Google and Facebook SEO tooling, and you will teach it how you want it to do its 'job'.

You will have a way of delivering your requirements to it; it will run in the background, might ask for feedback, but will otherwise do work similar to whatever the person before it was doing.

There might be a transition phase: first verifying the agent's output against the real person's, then moving to validation only, until the agentic AI is on average as good as a human. Then the human will be gone.
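A minimal sketch of what such a shadow phase could look like (all names and thresholds here are hypothetical, just to illustrate the idea of comparing agent output to the human's before promoting it):

```python
from dataclasses import dataclass, field


@dataclass
class ShadowEvaluator:
    """Run the agent alongside the human, compare outputs per task,
    and promote the agent once its rolling agreement rate is high enough."""
    threshold: float = 0.95   # required agreement rate (assumption)
    window: int = 100         # rolling window of recent tasks (assumption)
    results: list = field(default_factory=list)

    def record(self, human_output, agent_output):
        # Store whether the agent matched the human on this task.
        self.results.append(human_output == agent_output)
        self.results = self.results[-self.window:]

    def agreement_rate(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 0.0

    def ready_to_promote(self) -> bool:
        # Only promote once we have a full window of sufficiently good results.
        return (len(self.results) >= self.window
                and self.agreement_rate() >= self.threshold)
```

In practice "output equality" would be some fuzzier scoring, but the promotion logic stays the same shape.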

Agentic AI will take over basic support tasks first (it's actually already doing this), then more complicated things, and so on.

For this we need an ecosystem, i.e. the agentic AI platform: the interconnect between agents and tools. This is currently being built by someone, one way or another.

At scale we need more capacity, and these agents will also cost more than a $20 subscription.

But if you have, let's say, an SAP agent, it will be built once, trained once, and then used by everyone. Instead of a person using an HR system or billing system, the agent will bridge the gap between data and system.

This is a pipe dream, models are mistake machines and agents are mistake amplifiers.

This only "works" for toy projects: things that don't really matter, and nothing that can cost you business, money, clients, or time.

There are two ways this can go, all of it in the context of billions of dollars that the richest companies in the world are investing, with the smartest people working on it.

These big companies want to see what's going to happen with more parameters and are already quite deep in it. That momentum will easily push us through 2026, and nothing will collapse in 2027 just because.

So we will see whether we hit a hard plateau or not; I do not see any plateau at all. I see constant progress on every single front: faster models, faster inference, etc.

I also see the biggest reinforcement loop we have ever created: a small number of companies getting real human feedback every single day through things like "thanks", thumbs up/down, "that's wrong", "I meant x, not y", etc.

And there are plenty of ways to transition from 100% human to 100% agents: feedback loops, human in the loop, human approval for critical steps, etc.
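The "human approval for critical steps" pattern can be sketched in a few lines. Everything here is illustrative: the keyword-based criticality check stands in for whatever real policy a deployment would use.

```python
# Illustrative policy: which actions count as critical (assumption).
CRITICAL_KEYWORDS = {"delete", "pay", "terminate"}


def is_critical(action: str) -> bool:
    """Crude stand-in for a real risk classifier."""
    return any(kw in action.lower() for kw in CRITICAL_KEYWORDS)


def run_with_approval(actions, approve):
    """Execute agent actions; critical ones are gated on a human
    approval callback instead of running automatically."""
    log = []
    for action in actions:
        if is_critical(action) and not approve(action):
            log.append((action, "rejected"))
        else:
            log.append((action, "executed"))
    return log
```

The `approve` callback is where the human sits in the loop; over time, categories with a consistent approval history could be moved out of the critical set.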

And I think we will continue spending time and energy on these problems in the future, and no longer on just reimplementing the same CRUD application.

I see where you are going with this, but IMO this is not a technical problem but a legal problem.

Who will be held responsible when an AI agent messes up the HR system and the company is exposed to losses due to a mistake? Who is going to be responsible when your SEO agent overspends?

Ultimately, it's going to be you most likely, because I can't see AI firms taking this responsibility.

You might argue that right now it also falls on the employer, since employees are rarely held responsible for genuine mistakes, even when they end in disaster. However, you have a lot of agency over what an employee is doing, and their motivation is generally correlated with doing well, because past success ensures future career growth.

An AI agent has no such incentives. The AI company will just charge you some minimal fee to provide the service, and if it messes up, will wash their hands of responsibility and tell you that you should've been more careful in using it.

I dislike Taleb for various reasons, but using AI agents is basically the definition of a fragile system. It works 99% of the time, lulling people into this sense of security where they can just offload all their work very conveniently. And then 1% of the time (or 0.01% of the time), it ends in utter disaster, which people are very bad at dealing with.

I think it will move most critical due diligence into the tools / HR systems themselves.

Encoding more rules, and more precise rules, and alerting a human when something looks off. A salary increase of 20% gets flagged automatically; a revenue drop of x% too.
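That kind of rule encoding is straightforward to sketch. The specific thresholds below are made up for illustration (the original only names the 20% salary example):

```python
def flag_anomalies(change: dict) -> list:
    """Return human-readable alerts for changes that exceed hard-coded
    thresholds, so a human is pulled in before anything is committed."""
    alerts = []
    if change.get("salary_increase_pct", 0) > 20:
        alerts.append("salary increase above 20%: needs human sign-off")
    if change.get("revenue_drop_pct", 0) > 10:  # placeholder for "x %"
        alerts.append("revenue drop above 10%: needs human sign-off")
    return alerts
```

The point is that these guardrails live in the system of record itself, not in the agent, so they hold no matter which agent (or human) is driving.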

It could even go so far that the makers of these systems insure you for their use.

It just needs to be cheaper than all the humans in the loop, and if you train it once, you can copy it an unlimited number of times: the scaling effect of software, applied to tasks for which we currently have to train humans again and again.

It could also be agent systems that do this. For example, one company builds and designs an "HR USA Healthcare" agent specialized in SAP HR, while another builds an "HR Brazil Healthcare" agent specialized in a different HR system.

Humans are really expensive, and you have to train them regularly, every single one of them.