High interest rates + tariff terror -> less investment -> less jobs

But let's blame AI

Let's read the paper instead: https://digitaleconomy.stanford.edu/wp-content/uploads/2025/...

It presents a difference-in-differences (https://en.wikipedia.org/wiki/Difference_in_differences) design that exploits staggered adoption of generative AI to estimate the causal effect on productivity. It compares headcount over time by age group across several occupations, showing significant differentials across age groups.

Page 3: "We test for a class of such confounders by controlling for firm-time effects in an event study regression, absorbing aggregate firm shocks that impact all workers at a firm regardless of AI exposure. For workers aged 22-25, we find a 12 log-point decline in relative employment for the most AI-exposed quintiles compared to the least exposed quintile, a large and statistically significant effect."
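The design the quote describes can be reduced to a toy calculation. Here is a minimal difference-in-differences sketch in Python; all numbers are made up for illustration and are not the paper's data:

```python
# Minimal difference-in-differences (DiD) sketch.
# All numbers below are hypothetical, purely to illustrate the method.

def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """DiD = (change in treated group) - (change in control group)."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical log-employment for AI-exposed (treated) vs. least-exposed
# (control) workers, before and after generative AI adoption.
effect = did_estimate(treat_pre=4.00, treat_post=3.88,
                      control_pre=4.00, control_post=4.00)
print(round(effect, 2))  # -0.12, i.e. a 12 log-point relative decline
```

The firm-time fixed effects the authors describe go beyond this two-by-two comparison, but the core idea is the same: compare changes, not levels, so that shocks common to both groups cancel out.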

I appreciate the link to difference-in-differences; I didn't know what to call this method.

The OP's point could still be valid: it’s still possible that macro factors like inflation, interest rates, or tariffs land harder on the exact group they label ‘AI-exposed.’ That makes the attribution messy.

Those fixed effects are estimated separately for each age group, controlling for that.

pg. 19, "We run this regression separately for each age group."

Interesting technique, that DiD. But it assumes the non-treatment factors would affect both the treatment group and the control group equally, i.e. that their effect is parallel across groups. If the treatment group was more exposed to the non-treatment factors, then an increase in those factors could account for a larger difference than the one seen at time 1. I don't know which other industry they used as the control group, but interest rates could have a superlinear effect on tech compared to that industry, so the difference in differences could be explained by the non-treatment factor too.
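That worry can be made concrete with a toy example. In the sketch below (all numbers hypothetical), an interest-rate shock hits the treated group harder than the control group and there is no treatment effect at all, yet DiD reports a nonzero "effect":

```python
# Hypothetical illustration of a parallel-trends violation in DiD.
# Assume NO real treatment effect; a rate shock simply hits the
# rate-sensitive (treated) group harder than the control group.
treat_pre, control_pre = 4.00, 4.00
treat_post   = treat_pre   - 0.12   # amplified rate shock on treated group
control_post = control_pre - 0.05   # baseline rate shock on control group

did = (treat_post - treat_pre) - (control_post - control_pre)
print(round(did, 2))  # -0.07: DiD misreads the uneven shock as a treatment effect
```

This is exactly why the parallel-trends assumption matters: DiD can only subtract out confounders that move both groups by the same amount.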

Were entry level jobs the first to go in earlier developer downturns?

Is AI being used to attempt to mitigate that effect?

I don't think their methods or any statistical method could decouple a perfectly correlated signal.

Without AI, would junior jobs have grown as quickly as others?

I'm not trying to be clever here. I'm trying to be publicly stupid in an effort to understand.

You really do have to account for why this is mainly happening in industries that are adopting AI, why it's almost exclusively impacting entry-level positions (with senior positions steady or growing), and why controlling for broad economic conditions failed to correct this. I doubt very much that these three Stanford professors would be blindsided by the concept of rates and tariffs.

My personal theory is that the stock market rewards the behavior of cutting jobs as a signal of the company being on the AI bandwagon. Doesn't matter if the roles were needed or not. Line goes up, therefore it is good.

This is a complete reversal from the past, where having a high headcount was an easy signal of a company's growth (i.e., more people means more people building features, which means more growth).

Investors are lazy. They see one line go down, they make the other line go up.

CEOs are lazy. They see line go up when other line goes down. So they make other line go down.

(I am aware that "line go up" is a stupid meme. But I think it's a perfect way to describe what's happening. It is stupid, lazy, absurd, memetic. It's the only thing that matters, stripped of anything incidental. Line must go up.)

Juniors become seniors.

If we replace all juniors with AI, in a few years there won't be skilled talent for senior positions.

AI assistance is a lot different from AI running the company and making expensive decisions. While it could progress that far, bear in mind that some seniors continue to move up the ranks. Will AI eventually be the CEO?

We all dislike how some CEOs behave, but will AI really value life at all? CEOs have to have some place to live, after all.

The AI will at least be cheaper than a CEO; it might also be more competent and more ethical. The argument against making a large language model the CEO seems to be mostly about protecting the feelings of the existing CEO. Maybe the board should look past these "feelings" and be bold?

I'll re-explain.

A human CEO might do morally questionable things. Not all do, of course, but some may.

Yet even so, they need a planet with air, water, and some way to survive. They may also want their kids to survive.

An AI may not care.

It could be taking "bad CEO" behaviour to a whole new level.

And even if the AI had human ethics, humans play "us vs them" games all the time. You don't get much more "them" than an entirely different lifeform.

The AI most certainly does not care, because it is a computer program. It also doesn't want to buy a boat.

It also doesn’t care if the company goes bankrupt tomorrow without paying out their bonus.

Nah, the insistence that humans are somehow uniquely too smart to destroy themselves is obviously laughable. It's troubling that you wrote that down without bursting into laughter, because it is so silly.

Ah, the classic cynical brooding response.

First, we're discussing what an AI might do, with terms like "no air", i.e., wholesale destruction.

So please do show when the human race has destroyed itself entirely. Oh wait, you're here to read this? I guess that has never, ever, ever happened. Ever. Because if one human exists, there's never been a case where humans were wiped out, with for example no air.

So the "obvious" aspect is not quite so clear. There's no evidence of it, merely conjecture.

Second, at no point did I say smart or not smart.

Instead, I discussed two viewpoints. The viewpoint of an AI, which does not require air, and the viewpoint of a human, which does care about air.

Get the difference?

You may want to dive into global warming, or pollution, or whatnot. These, however, are longer-term issues. Destruction today is far different from destruction in 100 years. Or 1,000. For example, if global warming predictions are accurate, or even worse, there will still be humans somewhere for hundreds of years, without much change.

Some people might starve, the equator may be too hot, people will die, but there will be places to live still. The planet may go runaway CO2, but that partially abates as more humans die. Fewer humans, less CO2.

Yet either way, it's long term, and no one can definitively say the outcome.

Long term isn't typically part of most decision trees. There's a reason for that. When thinking long-term, you have to consider all the possible permutations, all the possible things that could happen, and those expand greatly and massively with time.

Any thinking being which considers all of their actions right now, in the moment, would become almost immediately paralyzed if it had to consider those actions extremely long-term. Each move, each action, with massive pause and hours/days/weeks of thought. Do you consider how your next step will impact people 4,000 years in the future? A million? A billion?

What about eating toast for breakfast? How much consideration does the average entity put, into consuming food for energy, and yet looking forward on their actions for a billion years?

Beyond that, there is no accurate data for future outcomes, to make a proper determination of what may happen 500, a thousand, a million, a billion years in the future. So all of these computational chains are realistically for naught. After all, the sun may expand sooner than predicted. Certainly the moon will likely move further from Earth, and the Earth's spin will slow down. How will now, affect everything in that future?

You may say, why don't we consider our actions, you know, just in the next hundred years? But now, suddenly, are not you considering your actions in too short of a time frame? Should you not consider what the human race and what the Earth itself will be like in a billion years? If you're not considering those things, then are you not depriving entities and beings and living organisms a healthy planet one billion years in the future?

Where does it stop? Where does it begin? Where and how far in the future should you consider your actions, on a day-to-day basis?

More importantly, how much of what you do in a day should you consider with regards to the future? Which acts of yours are detrimental to the future? Do you know? Are you sure? Do you have any idea?

Obviously, some of the thoughts above are somewhat long term. Yet not thinking long term is why we got into this issue with global warming! And truthfully, if the complaint is that we're destroying the future planet for species that live here besides ourselves, then we really should be considering 10k, 50k, a million years in the future.

Anything else is only selfishly considering our own personal descendants for a couple of generations.

But let's take a step back. I'm not trying to say that we or anybody else can make these kinds of in-depth long scope decisions, nor am I saying that we should not care about global warming. Obviously we should. We actually know it's a problem now. We knew in the 70s.

Instead, what I'm saying is that individuals are individuals and excessively considering long-term ramifications of all of your actions can be highly detrimental to the capacity to make any decision whatsoever. Imagine an AI, which every single time it made a decision, every single time it even decided to compute something, every time it decided to take an action in the real world, it had to consider the ramifications one billion years hence.

Imagine the amount of processing power that would require. Now imagine the amount of energy or "food" needed. This is why beings cannot sit around for all eternity while a wolf leaps upon them, while they ponder all the possible future ramifications of even the tiniest or even medium-scale acts. And this is why the average human being does not consider the ramifications. It's an evolutionary requirement.

And as I've suggested above, it is also going to be a requirement for AI. Certainly it can consider some of its acts, much like a human being can consider some of its acts, but that's not how things work on a day-to-day basis.

Human beings solve this by observation after the fact of many of our acts, and secondarily, by reviewing what's happening in the environment and the world around us as we make change, and then determining if we should scale back or roll back what we're doing.

The same will be true of AI. If the same is not true of AI, AI will cause global warming merely by trying to stop it.

The sheer computational power required for an AI, and all the AIs that are making decisions, to optimally choose what is best for 1M years in the future? It would eat enormous amounts of energy, thus making global warming worse whilst trying to make it better!

Whether or not we should be putting more energy into considering these things doesn't mean that it's possible for the average thinking entity to do so.

Given the timeline, this is more likely a reversion to the mean following the end of zero-interest-rate policy.

Software development is one of the listed industries. Well before AI, we saw that few companies wanted entry-level devs, due to the training burden and such.

Reductions in call centers have been going on for a while, as more people use automated solutions (not necessarily AI) and many of the growing companies make it hard to reach a real person anyway (Amazon, Facebook, etc.). I feel like AI is throwing fuel on the existing fire, but isn't as much of a driver as the headlines suggest.

The jobs are going to India

American workers are truly under attack from all sides. H1B. Outsourcing. What's left? Blue-collar manufacturing is mostly gone. White-collar work is well on its way out. Why is our own government (by the people, for the people) actively assisting in destroying Americans' ability to get jobs (H1B)? Especially in these conditions. I'm no racist or idiot, but it's unacceptable. I didn't expect the government to actively conspire with big corps to make my economic position weaker. An unbelievable breach of trust. We need to demand change from our government.

Your government no longer works for you, it works for a small group of billionaires. Does it make sense then?

This

It's an unpopular opinion in the current environment, but the program that allows international talent to connect with local capital is what creates all the jobs in tech.

Nearly half the unicorns in the country were founded by foreigners living in the country. https://gfmag.com/capital-raising-corporate-finance/us-unico...

The biggest problem right now is that there is no distinction between companies replacing American labor with cheap labor and entrepreneurial talent that creates jobs. Everyone is on the same visa.

Efficiency rules all.

It just doesn't make sense to pay someone $10 when you can pay someone else $2.

And when we're all out of work except for the doctors and nurses, electricians and plumbers, there will be nobody to contribute to consumer spending. And we will suffer, at the hand of the government that assisted in this scam.

If someone's predictions are correct, AI is going to hit everyone, including the doctors, nurses, plumbers, and electricians.

Maybe they will give us subsistence UBI

2 dollars and a 12 hour time difference with a full day between messages in conversations.

They will come back (eventually).

Having to work with ESL contractors from firms like Cognizant or HCL is true pain. Normally it would be 3-4 US employees working on something, and then it's 20-30 ESL outsourced people working on the same thing. The quality is so poor, though, that it's not worth it.

My current org nuked their contract with HCL after 2 years because of how shitty they are, and now everything is back onshore. Millions wasted, lol. Corporations are so silly sometimes.

They also need 5 people to do the work of one US worker, and then another US worker to guide and do some QA on the output they produce. I don't see how it saves money. There are other countries with lower wages than the US where this doesn't happen, such as Poland or Australia.

They are. Companies also do this and then wonder why they get blackmailed for terabytes of leaked proprietary data on the darkweb.

Saving money on wages isn't the only consideration.


> You really do have to account for why this is mainly happening in industries that are adopting AI

Correlation is not causation. The original research paper does not prove a connection.

> I doubt very much that these three Stanford professors would be blindsided by the concept of rates and tarriffs.

They are nonetheless subject to publish or perish pressure and have strong incentives to draw publishable attention-grabbing results even where the data is inconclusive.

> I doubt very much that these three Stanford professors would be blindsided by the concept of rates and tarriffs.

Digital Economy Lab and the Stanford Institute for Human-Centered Artificial Intelligence

I fully expect that these professors would be blindsided by even the most rudimentary real world economics.

Tariffs are just a massive government revenue generating consumption tax on particular industries. We would expect unemployment among the young trying to enter those industries to be hit hardest.

Do you understand that American employers don't have to pay American tariffs?

I'm curious who you think pays American tariffs.

You first

Everyone pays mate

> Do you understand that American employers don't have to pay American tariffs?

Except they do, if their raw materials, tools, etc., are imported.
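A back-of-the-envelope sketch of that point, with a hypothetical 25% rate and $100 input cost (neither figure comes from the thread or the paper):

```python
# Who pays an import tariff: the US importer remits it at the border,
# raising the landed cost of imported inputs. Numbers are hypothetical.
import_price = 100.00    # exporter's price for the inputs
tariff_rate  = 0.25      # 25% tariff, collected from the importer
landed_cost  = import_price * (1 + tariff_rate)
print(landed_cost)  # 125.0 -> the importer pays, and typically passes it on
```

How much of that extra $25 lands on the employer's margins versus the final consumer depends on pass-through, but none of it is paid by the foreign exporter directly.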

More investment -> more return on investment -> "AI is increasing worker efficiency" -> This is good for AI.

Less investment -> more layoffs -> "AI is replacing workers" -> This is good for AI.

A computer does something good -> "That's AI" -> This is good for AI.

A computer does something bad -> "It needs more AI" -> This is good for AI.

It seems more true than the "this is good for bitcoin" meme now that bitcoin seems to track the dollar very closely

Is there some central authority that’s telling people to blame this all on AI, or how is everyone reaching this conclusion and ignoring the other obvious factors you stated?

It is in their interest to find explanations for reductions in labor that don't assign the blame to corporate greed.

For example, a call center might use the excuse of AI to fire a bunch of people. They would have liked to just arbitrarily fire people a few years ago, but if they did that people would notice the reduction in quality and perhaps realize it was done out of self-serving greed (executives get bigger bonuses / look better, etc). The AI excuse means that their service might be worse, perhaps inexcusably so, but no one is going to scrutinize it that closely because there is a palatable justification for why it was done.

This is certainly the type of effect I feel like underlies every story of AI firing I've heard about.

How is firing a bunch of people because you made a machine that you believe can do their jobs not textbook corporate greed? It seems like the worst impulses of Taylorism made manifest?

This is worse: this is just pretending like the machine does their jobs because it benefits them.

The big (biggest?) problem of modernity is that quality is decorrelated from profit. There's a lot more money in having the optics of doing a good job than in actually doing it; the economy is so abstracted and distributed that the mechanism of competition to punish bad behavior, shitty customer service, low standards, crappy work, fraud... is very weak. There is too much information asymmetry, and the timescale of information propagation is too long to have much of an effect. As long as no one notices what you're fucking up very quickly, you can get away with it for a long time.

Seems even worse to me. At least in the 'competition' paradigm there's a mechanism for things getting better for consumers. No such thing here.

> It is in their interest to find explanations for reductions in labor that don't assign the blame to corporate greed.

Exactly.

It doesn't need to be a conspiracy. Incentives align sometimes. A lot of people are invested in AI replacing jobs, and it would be nice for them if the buzz was that it is actually the case.

Blaming AI is better because it helps corporations convince the working class that their jobs are in long-term danger, so they collectively settle for less favorable work terms and compensation, unlike if they were convinced that things will gradually improve with the upcoming monetary easing cycle.

I'm sorry, have you read the paper, or did you just want to recite those here?

Here's the study:

https://digitaleconomy.stanford.edu/publications/canaries-in...

It looks like they're looking at data for the last few years, not just the last few months.

I haven't read it, and maybe you can disagree with their opinions, but there does appear to be a slow down in college graduates recently.

End of ZIRP and the Sec. 174 change for engineering salaries probably explain more of this (plus the increase in outsourcing). I'm sure some decision makers also threw AI into the mix, but the financials of hiring software engineers in the US were already challenging before AI "took everyone's job".

Since this article is about AI, and since this comment seems rather low-effort compared to the Stanford study, I went ahead and used low effort to analyze the report and compare it to this comment. Here's my low-effort AI response:

> Prompt: Attached is a paper. Below is an argument made against it. Is there anything in the paper that addresses the argument?: High interest rates + tariff terror -> less investment -> less jobs

> High rates/firm shocks: They add firm–time fixed effects that absorb broad firm shocks (like interest-rate changes), and the within-firm drop for 22–25-year-olds in AI-exposed roles remains.

> “Less investment” story: They note the 2022 §174 R&D amortization change and show the pattern persists even after excluding computer occupations and information-sector firms.

> Other non-AI explanations: The decline shows up in both teleworkable and non-teleworkable jobs and isn’t explained by pandemic-era education issues.

> Tariffs: Tariffs aren’t analyzed directly; broad tariff impacts would be soaked up by the firm–time controls, but a tariff-specific, task-level channel isn’t separately tested.

Fitting, since it came up with unrelated information (the R&D tax thing) and the 3rd bullet point. Also started talking about tariffs as if it had addressed them, then notes that it doesn't address them.

I generally agree that AI is the scapegoat, but not for those same reasons. Despite the lack of job growth and the tariffs, recent data shows the economy grew about 3%. Even if it's not AI as the primary driver, efficiency seems to have increased.


How does that make sense? Wouldn’t high interest rates and tariffs cause more expensive engineers to have disproportionate opportunity? I remember during 2008 it was much easier for my employer to justify junior engineers than senior ones.

Less investment? You must be trolling. I encourage you to look at the amount of stupid money that has been "invested" into LLMs.

Do you consider things to be that single-faceted, that other factors cannot realistically be a part of the equation?

I have to admit that hearing that something is "single-faceted" would be a nice break from hearing that something is "complex and multifaceted".

(High interest rates + tariff terror -> less investment -> less jobs) + AI

I was here in the 90s dotcom boom and interest rates were higher than today.

> But let's blame AI

The thing whose exact purpose is to replace labor? Must be a conspiracy going on to suggest its linked to reducing labor. Bias! Agenda!

The jobs are going overseas

Well, you do have CEOs out there saying it...

2 things can be true

But usually one is more true. I'm in the camp of high interest rates and high tariffs being the bigger cause.