There is no shortage of articles about AI replacing entry-level jobs.

With reductions in workforce numbers, when will companies start replacing managers with AI? What is the point of "leadership" when the workers are AI-bots?

Based on my experience, I doubt that many managers are going to be competent prompt engineers.

You might not get serious responses here, but this is an interesting question, since AI is arguably a better fit for management than for many other jobs.

Nobody would have believed it 10 years ago, but today AI is more likely to replace a concept artist than an accountant, so it's not beyond imagination to replace a manager even if the ICs are still human.

AI excels at summarization, which is a big part of the job for a lot of managers. They gather information, go to meetings, write reports, and generally re-package information appropriately for whatever audience they're addressing.
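For what it's worth, that last part is already trivially scriptable. A minimal sketch, assuming the OpenAI Python SDK (the model name, prompts, and status update are all placeholders; any LLM API would do):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    raw_update = "Migration is 80% done; two flaky tests block the last service."

    # Re-summarize the same update for two different audiences.
    for audience in ("the engineers on the team", "the VP of engineering"):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": f"Rewrite status updates for {audience}."},
                {"role": "user", "content": raw_update},
            ],
        )
        print(audience, "->", resp.choices[0].message.content)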

At a lot of companies, the lowest level managers don't make a lot of decisions either. Tech leads make technical decisions, PMs make product decisions, and the skip-levels (e.g. Directors, VPs) make staffing decisions.

In practice, I don't think humans will report to AIs, but hierarchies might flatten (e.g. ICs report to Directors) and responsibilities might get shuffled around (e.g. some duties get assigned to HR).

If the workers are AI-bots, then I don't really see any skill overlap with management. If you manage only AIs, you are an IC, not management.

The whole point of deep hierarchies is to diffuse responsibility. If hierarchies flatten, either risky behavior goes down, or someone in the smaller hierarchy takes on more risk than a similar person in that role would have at an earlier point in history. My hope is that risky behavior decreases, because I'm not quite seeing the benefits of GDP going up forever and ever.

If a company wants to stay in business, the legal risk of AI bots firing people is probably not worth the cost savings. Until that changes, I don't think there's much to discuss, but that may not be long given the way things are going.

Don't know about HR AI-bots firing workers. But only last week there was a news segment about HR AI-bots doing the first level interviewing via video-calls. Which is intriguing since there have been some reports previously about job applicants using AI to enhance their interviews, especially for remote jobs. Could lead to an AI-vs-AI showdown.

Of course, I take such reports with a grain of salt, because I often wonder whether such news items are self-serving product promotions in disguise.

They don't have to admit that the decision was made by AI. One low-wage worker in HR can be the spokesperson who sells it as his decision.

So the thing about people who enforce laws is, they're not _completely_ stupid. Dodgy companies will pretty much always attempt to hide the fact that they're breaking the law; nothing new there.

No law specifically requires more than a coin flip, but firing people because some rando in HR "decided" without justification or documentation of cause can be risky.

> Nobody would have believed it 10 years ago, but today AI is more likely to replace a concept artist than an accountant

Do you really think so? I understand the basic sentiment of your statement but having tried to use AI for concept art, I was very disappointed at its lack of originality. Especially in an inevitably oversaturated market of AI creative work, I see the value of good human conceptual artists only rising.

Sure, it's a terrible concept artist and you shouldn't use it as one. But if you use it as an _accountant_, you're likely to end up in legal trouble, so, all in all, it is probably still a better concept artist than an accountant.

Large corporations are full of middle-managers who do not lead anything nor produce anything (customer facing) and whose only job is to "facilitate information flow" and "pull teams together" etc. I think these roles will be replaced with AI chatbots quickly. The upside (for the company) ought to be that now the facilitation should happen very quickly and is measurable/manageable more directly. The downside (for the line employees) is that now your boss is a chatbot.

How do we know that it's these types of managers who will be replaced, instead of the ones doing the actual work?

Perhaps some companies do deeper analysis first - with AI summarizing & clustering signals from various internal systems and corporate email history. I'm not saying that this is better for everyone compared to current human-driven approaches, but the human-driven approaches seem to amount to stack-ranking employees by their recent performance review ratings and drawing a red line in a spreadsheet. Amazon's recent "flattening the organization" initiative might well be using AI as one of the signals. I have no idea whether that's actually true - but then again it's 2025 and they have been a data-driven company for a long time.
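To make "summarizing & clustering signals" concrete, here's a toy sketch with scikit-learn. The signal snippets are invented, and a real pipeline would presumably use proper embeddings over far more data:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    # Invented examples of "signals" pulled from internal systems.
    signals = [
        "missed sprint goals two cycles in a row",
        "repeated escalations about code review turnaround",
        "shipped the billing migration ahead of schedule",
        "mentored two new hires through onboarding",
    ]

    # Vectorize the text and group it into two rough clusters.
    X = TfidfVectorizer().fit_transform(signals)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    for label, signal in sorted(zip(labels, signals)):
        print(label, signal)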

The easiest job to replace must be the CEO.

Look at Musk. He's CEO of six companies (or so), yet has time to run DOGE and constantly post on X.

Also, a CEO defines which way the company will go and makes questionable calls like opting to build a Cybertruck. I don't see AIs making those kinds of decisions, of which there are many, for now.

The most fundamental duty of a CEO is to absorb responsibility that would otherwise land on the board members, hence the large bonuses when they are let go. By definition, AI cannot do this. End of conversation.

Same goes for managers in most cases. Firing people because an AI said to simply won't hold up in court, at least for now.

> accept responsibility

When was the last time a CEO went to jail because of illegal activities committed by the company? There is no responsibility.

If the "responsibility" is "you become rich, and if you get fired we give you a huge bonus on top", then I'm pretty sure anyone would be happy to take it.

Being a CEO is like being a politician. You need to convince others that they need you even if they don't, or you're incompetent, or you serve other interests. It's not what it takes to "lead a company", it's what it takes to "get the highly-paid job".

"some of you might die, but it's a risk I'm willing to take" - CEO handbook quote (only slightly paraphrased). This is all the risk there is for a CEO, change my mind.

I mentioned this to someone the other day, and they countered that part of what you get with a CEO is their network and their ability to network. AI won't replace that, at least not anytime soon.

A recent WSJ article pointed out that 50% of board members are ex-CEOs.

I think CEO networking is code for cartels & collusion.

I think it is the layers of muddle management that could easily be replaced by AI.

I think this is true. There was an article on HN the other day about how Moderna has already re-organized its leadership structure around AI.

It makes sense to me that AI could conceivably already be as good at making the hard, data-based decisions that CEOs make, and that, therefore, they could one day be replaced by AI. Meanwhile, you've got the soft skills part of being an executive, which humans are better at (as long as the people they deal with are also humans). So, you could split that CEO role into two parts, each specializing in half of what a CEO today does. Both roles would probably do a better job than the median CEO today, and get paid less overall.

But that "not anytime soon" part is the only thing I disagree with. Because I just don't know how long the timeline is for stuff like that. It can change pretty fast.

Will they really get paid less? The feeling I have now is that people are paid a lot not because of what they do, but because of the potential damage they can do if they fuck up. E.g. CEOs, lawyers, etc. Moving some of the work to AI doesn't reduce the risk, so they should keep the same pay in my mental model.

Plus, C-level executives typically don't lower their own pay, and investors apparently don't care that much about it, so I can't see a reason why their pay would be reduced (significantly).

Why can't the board network while having the AI generate the big ideas for the last few workers?

CEOs typically make decisions when there isn't enough data to support an obvious direction. In this way, they are anti-pattern finders who rely on gut feel or some anecdotal experience. They are most often wrong, but when they are right and it's a success, they are considered geniuses. I'm not sure an AI can make a non-obvious decision based on feeling.

> I'm not sure an AI can make a non-obvious decision based on feeling.

You just described that CEOs are like broken clocks: they are mostly wrong, but sometimes they are right by chance.

How do you conclude that AIs can't do that? If it's about eloquently phrasing a random idea, AIs are perfect.

Did you find any way to generate good ideas with AI? Anecdotally, I found it incredibly unsatisfactory in this sense. It normally re-hashes old ideas without any internal coherence, like those novelty websites combining two existing startups to create a random mission (e.g. “AirBnB for motorbikes”).

When you consider that LLMs are trained on what can be scraped from the web, the so-called "creativity" comes from mashing together less commonly co-associated ideas.

Just train it on linkedin (ew)

> The easiest job to replace must be the CEO.

So why didn't Warren Buffett replace himself as CEO with an AI, instead of choosing a human?

Anyone born in Buffett's generation who isn't rich wasn't even getting out of bed.

A proper assessment of their skill relative to the conditions they actually lived through would be nice. One cannot simply walk into an office and rub elbows anymore. And the other half of the population, along with minorities, now makes up a much larger part of the workforce.

New Deal bootstrapping, then Reaganomics putting a thumb on the scale, helped those generations too.

His biggest asset was J&J, back when the government was spending tons on health and grooming propaganda because Americans used to be a bunch of greasy slobs. Oh look, comb and toothbrush and mouthwash sales are staples: buy, buy, buy, then inflate through media propaganda and tax policy.

He was not a wizard.

Probably because not all CEOs are the same, like anything people do. There are good, involved ones, and others where we wonder what they're really doing.

Because AI is bullshit

Exactly.

I expect it will take a while.

Nobody really wants to decrease the number of humans in their fiefdom, right?

However, if AI actually works out and produces tools that make people, like, 5x more effective, then a new software company can replace an existing one at 1/5 the cost with 1/5 the engineers. Fewer people to manage, a shallower corporate tree, and maybe some of those middle layers will also use AI…

But nobody wants to decrease the size of their fiefdom, so that company will need to be built from the ground up and then wipe out the competition.

I think we need to frame this question in terms of systems, not just roles. Even in this thread, people have different definitions of manager: someone responsible for people, someone who makes decisions, someone who routes info.

So the type of management will be a big factor.

But before AI replaces "managers," companies will (or should) rethink how their systems and workflows operate, then realign roles to match.

Instead of starting with a question of replacing roles (and some certainly will), it'll start with redefining how work gets done, and updating job descriptions accordingly.

What won't change is that employers will hire for value. So while some companies would rather substitute managers with AI, I imagine many would prefer the outsized value an AI-literate manager might provide.

No one is being replaced 1:1, but everyone is going to be downsizing (and that’s mostly the same thing).

You need a lot fewer managers if your team is 5-20% of what it needed to be a few years ago.

I think this is correct. Downsizing has happened many times and is a natural cycle of the business world.

The hype around AI is simply the grifters opportunistically inserting themselves and clueless investors wanting to stop potential bleeding.

Yes, but so far many of our technical achievements have allowed us to do more valuable work, raising the standard of living for many people. There is basically no plan (in the US, at least) to replace such large portions of the workforce humanely. Are we all going to end up shoveling coal again to generate power to feed the AI?

More likely we end up just producing more shit with the same number of workers.

IME most managers became managers because they preferred working with and organizing people over making creative contributions. I don't see the need for any of these functions going away. I see productivity differences. Maybe the roles evolve.

I think the real question is how do we best harness the increased productivity? Logically speaking, if each person is 5x as productive because of AI there should be an equally greater capacity to get things done. Businesses aren’t just running out of work to do, right?

Most managers are managers because they like having power and control. I can count on less than one hand the managers I've met who you can tell became managers because they like helping and organizing people.

Well you just met another. And I can count on one hand the number of peers who I've worked with who just want power.

It would be nice if that was true but in my experience the vast majority got promoted for their success one level down and just found themselves lumped with those responsibilities.

To replace the managers, the AI needs to take care of the following pain points (got some help from ChatGPT):

Hiring and Talent Acquisition
- Candidate pipeline quality is inconsistent.
- Time-consuming resume screening and interview scheduling.
- Lack of diverse candidate pools.

Performance Management
- Biased or inconsistent performance reviews.
- Goal-setting lacks clarity or alignment with org OKRs.
- Lack of real-time performance insights.

Project Planning and Execution
- Estimations are often inaccurate.
- Project scope creep due to unclear requirements.
- Dependencies across teams delay execution.

Technical Debt and Code Quality
- Mounting technical debt slows velocity.
- Inconsistent coding standards.
- Hard to trace ownership for legacy code.

Team Collaboration and Communication
- Cross-team communication breakdowns.
- Time zones complicate decision-making.
- Meeting overload or lack of clarity post-meeting.

Onboarding and Knowledge Transfer
- New hires take too long to ramp up.
- Tribal knowledge isn't well documented.
- Onboarding processes are inconsistent.

Incident Management and Reliability
- Blameless postmortems are rarely actionable.
- Alert fatigue from noisy signals.
- Root cause analysis (RCA) is time-consuming.

Career Growth and Mentorship
- Lack of clarity in career ladders.
- Mentorship is ad-hoc and inconsistent.
- Managers don't have enough time for coaching.

Engineering Productivity Metrics
- Metrics often feel punitive or misused.
- Hard to attribute impact to engineers' work.
- Lack of actionable insights from engineering data.

Cross-Functional Alignment
- Product and engineering priorities are misaligned.
- Specs often change mid-cycle.
- Lack of visibility into roadmap tradeoffs.

Each of these categories will probably need its own AI agent, plus an AI agent to control all the other agents. It would be a complex system, and it would still need human monitoring until it is self-sustaining. A rough sketch of what that controller layer might look like is below.
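Purely illustrative, with stub functions standing in for real LLM-backed agents (all names here are made up):

    from typing import Callable

    # Stub agents, one per pain-point category above.
    def hiring_agent(task: str) -> str:
        return f"[hiring agent] triaged: {task}"

    def incident_agent(task: str) -> str:
        return f"[incident agent] triaged: {task}"

    AGENTS: dict[str, Callable[[str], str]] = {
        "hiring": hiring_agent,
        "incidents": incident_agent,
        # ... one entry per category
    }

    # The "agent that controls all the other agents": route, then monitor.
    def controller(category: str, task: str) -> str:
        agent = AGENTS.get(category)
        if agent is None:
            return f"escalate to a human: no agent for {category!r}"
        return agent(task)

    print(controller("incidents", "alert fatigue from noisy signals"))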

A large proportion of that list is "busy work" and not directly related to producing quality deliverables.

Industrial-age management practices disregard the intelligence of professional-level ICs.

Virtually all of these are things that humans are bad at, so to me it seems AI should easily replace management. After all the list itself was generated by AI and not even a human :)

> What is the point of "leadership" when the workers are AI-bots?

This is a weird question. If the team below a manager is replaced by AI, then quite obviously there is nothing else to manage. The real question is: can AI replace the teams?

Then of course, if there is a reduction in workforce, there may be a reduction of the number of teams and hence of the number of managers for those teams.

> competent prompt engineers

Writing prompts is not engineering.

Replacing managers with AI was the premise of Manna: https://marshallbrain.com/manna1

Line managers - when the majority of their teams are AI-based

Middle managers - when the line managers are gone

Senior leadership - when the middle management is gone (IF they are willing to give up their seats)

I hope soon. I hate managing people. If a bot can do all the HR busywork that would be fantastic.

Just now I'm recirculating, for the 2nd time, a quote for a laptop that admin sent back to me because it had a tiny detail wrong; by the time I did the 2nd submission, the quote had expired. And this is about 1% of the bureaucracy involved in getting a new employee started in my org.

I think the answer is obvious that nobody is being replaced by AI.

The real concern should be that telling entry-level workers they need to be prompt-engineering experts on top of everything else is stupid. We're only making it harder to hire the right people.

We should be focusing on whether someone can get the job done regardless of what strategies they prefer to research a solution.

A manager has responsibility for the team. How can an AI be responsible?

A developer has responsibility for their code. How can an AI be responsible?

A writer has responsibility for their writing. How can an AI be responsible?

AI doesn't need to be responsible. It just needs to provide value, just like writers, developers, managers, etc.

That’s why no one replaced developers with AI that can push its own code to prod.

What does responsibility mean? How does it translate to actual work and skills?

One of the most useful functions of a good manager is to act as a shield for their team from upper management firestorms. That's a role that I think AI is particularly unsuited for, given their tendency to be obsequious.

As a more general role, the idea of responsibility is that the manager has the job of making sure that individual employees' tasks are suited both to their individual competence and abilities and to the corporation's deliverables and ultimate bottom line. This requires making arguments in both directions: in pulling employees to working on things more useful to the company, and in changing the deliverables to capitalize on employees' abilities.

As I get it (and I’m not a smart or well-educated person, so please correct me if I’m wrong or misinformed or talking nonsense), responsibility is status that arises from the recognition of one’s moral obligations. Companies are at least partially formed around shared moral obligations, or put simply - goals. At least without those they become meaningless economic machines, and our culture tends to favor things “having” a meaning.

With multiple agents (of any nature) feedback is essential for the work to be done well - that’s my understanding of how it translates to the actual work getting done.

Leadership is the one doing the "replacing". They won't replace themselves. They'll replace you first.

Also, there's tons of research validating how the most unqualified and unfit people make it to leadership positions. If you're a leader, most likely you're not a good one. So it's not like the industry knows a good leader when it sees one, and if AI is a better manager, the industry won't care. It's politics and ass-kissing that get people up there.

I'm still trying to decide what I'm gonna do the first time a company tries to have their AI bot interview me for a job. Will I accept the interview or refuse it? I'm undecided. I'll probably refuse.

However, I think an AI would maybe be more honest about how well I do and my super-high skill level, perhaps, and not simply reject people for being over 50, white, male, overqualified, and good-looking, as is (or dare I say was) the custom with DEI hiring practices for the last decade... or two.

This entire post is exhibit A on why managers are essential and will likely not be replaced by AI any time soon.

Managers will be replaced by AI when people are willing to work for them. Does anyone really want that? Who the fuck wants to work for an unaccountable machine who you can’t grab a beer with after work? How much life are you willing to abstract away in the name of “efficiency”? To what end?

The exact same thing applies if you are a manager, do you really want a flock of loyal electric sheep to do your bidding? If you’re in management for control over others, how is that satisfying? If you’re in it to mentor, who comes after you? How?

Why does anyone want this? Our societies are already so mechanized and automated yet somehow we have less time than the average medieval peasant to enjoy our allegedly easier lives. What toil has been eliminated thus far?

Yeah, I'm still on the fence about how I feel about this. I mean, LLMs are potentially going to be more fair, and there wouldn't be office politics or people being treated unfairly by an emotional, easily-manipulated human boss.

Then again the office politics might even get WORSE when people try to trick an AI Boss into blaming someone else for a problem they created themselves. Then again the AI will have a superhuman knowledge of who checked in bad code that broke the product, etc. Lots and lots of trade-offs.

Managers are the parasitic class, so they will take the company down with them, and will never be replaced. One can however start a new firm without them and without AI. All you need is to measure commits and communications to know if someone is working and how they should be organized. It doesn't even require AI.

Who measures them and how do they judge quality?

User activity and commit activity matter. If there is no user telemetry, something is not right. If users are filing too many bugs, something is very wrong. If users are filing feature requests, that's a good sign.
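As a sketch of how little machinery "measure commits" needs - and of how crude it is, since it counts commits rather than judging quality:

    import subprocess
    from collections import Counter

    # Count commits per author email over the last 30 days.
    authors = subprocess.run(
        ["git", "log", "--since=30.days", "--format=%ae"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    for author, n in Counter(authors).most_common():
        print(f"{n:4d}  {author}")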

hello,

as always: imho. ...

idk. ... what do you mean by "managers" in your question!?

in my view: the "real" task of managers - regardless of level, but even more so at a lower/mid level - is managing people & their expectations - whether those of their "team" or of their superiors.

and looking at the current state of "AI", i don't see much gain in using it to manage those parts of "management".

but i think (current) "AI" would be a good source of "additional" decision-making/reasoning for the, let's call it, "technical parts" of mgmt ...

sure, this will change in the future, but currently i don't see much on the horizon regarding the "people" part of mgmt.

but in the medium/long term, i could imagine a development somewhat similar to the following:

looking at the progression of neo-liberal capitalism: using / blaming AI for unfavorable (mgmt) decisions may be a good way to "hide" behind said AI to push through such "unpopular" decisions.

they would have been made anyway, but with this pattern "nobody" is responsible for such developments, because "AI said so" etc...

just my 0.02€

In my experience with medium-sized to global enterprises, the vast majority of management are paper-shuffling, fiefdom-building incompetents who, through their incessant demands for meetings and reports, are actually a hindrance to delivering quality products and services on time and on budget.

I think we can replace congress members with AI that simply takes prompts from constituents en masse and votes for bills in line with their desires.

Well, we are almost halfway through the year since this prediction [0], and the full assault on knowledge workers is underway (as expected): they are being replaced with AI via mass layoffs across the tech industry and beyond. It is the complete endgame of the intelligent age.

You will see this magnified by this year's Google I/O announcements.

Anytime they mention "AGI", the goal is really AI replacing humans in jobs that are economically useful. (Not the "benefit of humanity" bullshit.)

In 10-15 years' time, the question from those who got out of tech will be:

Did you know that humans used to program computers?

[0] https://news.ycombinator.com/item?id=42490692