It's interesting to me that whenever a new breakthrough in AI use comes up, there's always a flood of people who come in to handwave away why this isn't actually a win for LLMs. Like with the novel solutions GPT 5.2 has been able to find for Erdős problems - many users here (even in this very thread!) think they know more about this than Fields medalist Terence Tao, who maintains this list showing that, yes, LLMs have driven these proofs: https://github.com/teorth/erdosproblems/wiki/AI-contribution...

It's easy to fall into a negative mindset when there are legions of pointy-haired bosses and bandwagoning CEOs who (wrongly) point at breakthroughs like this as justification for AI mandates or layoffs.

Yes, all of these stories and frequent model releases are just intended to psyop "decision makers" into validating their longstanding belief that labour shouldn't be as big a line item in a company's expenses, and perhaps can be removed altogether. They can finally go back to the good old days of having slaves (in the form of "agentic" bots); they yearn to own slaves again.

CEOs/decision makers would rather give all their labour budget to tokens if they could, just to validate this belief. They are bitter that anyone from a lower class could hold any bargaining chips, and thus any influence over them. It has nothing to do with saving money; they would gladly pay the exact same engineering budget to Anthropic for tokens (just like the ruling class in times past would gladly pay for slaves) if it could patch the bitterness they have about the working class's influence over them.

The inference companies (who are also from this same class of people) know this and are exploiting this desire. They know that if they create the idea that AI progress is at an unstoppable velocity, decision makers will begin handing them their engineering budgets. These things don't even have to work well; they just need to be perceived as effective, or soon to be, for decision makers to start laying people off.

I suspect this is going to backfire on them in one of two ways.

1. French Revolution V2: they all get their heads cut off in 15 years, or an early retirement on a concrete floor.

2. Many decision makers will make fools of themselves, destroy their businesses, and come begging the working class for our labor, giving the working class more bargaining chips in the process.

Either outcome is going to be painful for everyone; let's hope people wake up before we push this dumb experiment too far.

I’m reminded of Dan Wang’s commentary on US-China relations:

> Competition will be dynamic because people have agency. The country that is ahead at any given moment will commit mistakes driven by overconfidence, while the country that is behind will feel the crack of the whip to reform. … That drive will mean that competition will go on for years and decades.

https://danwang.co/ (2025 Annual letter)

The future is not predetermined by trends today. So it’s entirely possible that the dinosaur companies of today can’t figure out how to automate effectively and get outcompeted by a nimble team of engineers using these tools tomorrow. As a concrete example, a lot of SaaS companies like Salesforce are at risk of this.

I think it will be over-automation that does them in. Most normies I know are not down with all this automation and will totally opt for the human-focused product experience, not the one devoid of it because it was built and run by a soulless NN-powered autocomplete. We certainly aren't going to let a bunch of autocomplete models (sold to us as intelligent agents) replace our labor. We aren't stupid.

Much like there is a premium for handmade clothing and from-scratch food. Automation does nothing but lower the value of your product (unless it's absolutely required, as with electronics perhaps). When there is an alternative, the one made with human input/intention is always worth more.

And the idea that small nimble teams are going to outpace larger corporations is such a psyop. You mostly hear CEOs saying these things on podcasts. This is to appease the working class, to give them hope that they too can one day be a billionaire...

Also, the vast majority of people who occupy computer-I/O-focused jobs, whose jobs will be replaced, need to work to eat, and they don't all want to go form nimble automated SaaS companies lmao. This is such a farce. Bad things to come all around.

The question is to what extent there is a market for more stuff. If the cost of making software drops 10x, we can still make 10x the software. There are projects that wouldn’t have been done before that can now be done.

I know that with respect to personal projects, more projects are getting “funded” with my time. I’m able to get done in a couple of hours with coding agents what would’ve taken me a couple of weekends to finish, if I’d stayed motivated. The upshot is I’m able to get much closer to “done” than before.

Let’s have some compassion; a lot of people are freaking out about their careers right now, and defense mechanisms are kicking in. It’s hard for a lot of people to say “actually, yeah, this thing can do most of my work now, and the barrier to entry has dropped to the ground”.

I am constantly seeing this thing do most of my work (which is good, actually; I don't enjoy typing code), but it requires my constant supervision and frequent intervention, and it's always trying to sneak in subtle bugs or weird architectural decisions that, I feel with every bone in my body, would bite me in the ass later. I see JS developers with little experience and zero CS or SWE education rave about how LLMs are so much better than us in every way, when the hardest thing they've ever written was bubble sort. I'm not even freaking out about my career; I'm freaking out about how much today's "almost good" LLMs can empower incompetence and how much damage that could cause to systems that I either use or work on.

I agree with you on all of it.

But _what if_ they work out all of that in the next 2 years and it stops needing constant supervision and intervention? Then what?

If We Build It We Will All Die

Yeah but you know what, this is a complete psyop.

They just want people to think the barrier to entry has dropped to the ground and that the value of labour is getting squashed, so society writes a permission slip for them to completely depress wages and remove bargaining chips from the working class.

Don't fall for this; they want to destroy any labor that deals with computer I/O, not just SWE. This is the only value "agentic tooling" provides to society: slaves for the ruling class. They yearn for the opportunity to own slaves again.

It can't do most of your work, and you know that if you work on anything serious. But if the C-suite, who haven't dealt with code in two decades, think this is the case because everyone is running around saying it's true, they're going to make sure they replace humans with these bot slaves. They really do just want slaves; they have no intention of innovating with these slaves. People need to work to eat, and unless LLMs are creating new types of machines that need new types of jobs, like previous forms of automation did, I don't see why they should be replacing human input.

If these things are so good for business and are pushing software development velocity, why is everything falling apart? Why does the bulk of low-stakes software suck? Why is Windows 11 so bad? Why aren't top hedge funds and medical device manufacturers (places where software quality is high stakes) replacing all their labor? Where are the new industries? These things don't do anything novel; they only serve to replace inputs previously supplied by humans so the ruling class can finally get back to the good old feeling of having slaves that can't complain.

I don't think it's about trying to handwave away the achievement. The problem is that many AI proponents, and especially the companies producing the LLM tools, constantly overstate the wins while downplaying the issues, and that leads to a (not always rational) counter-reaction from the other side.

It's an obvious tension created by the title.

The reality is: "GPT 5.2 found a more general and scalable form of an equation, after crunching for 12 hours supervised by 4 experts in the field".

Which is equivalent to taking one of the countless niche algorithms out there and having a few experts in that algorithm let an LLM crunch tirelessly until it finds a better formula - after those same experts prompted it in the right direction and with the right feedback.

Interesting? Sure. Speaks highly of AI? Yes.

Does it suggest that AI is revolutionizing theoretical physics on its own like the title does? Nope.

> GPT 5.2 after crunching 12 hours mathematical formulas supervised and prompted by 4 experts in the field

Yet, if some student or child achieved the same – under equal supervision – we would call him the next Einstein.

We would not call him anything of the sort, because he would be one of the many millions who go through projects like this for their thesis as physics or math graduates.

One of my best friends, in her bachelor's thesis, solved a difficult mathematical problem about planetary orbits or something, and it was just yet another random day in academia.

And she didn't solve it because she was a genius, but because there are bazillions of such problems out there and little time to look at and focus on them. Science is huge.


Always moving targets.

They never surrender.

Reminds me of the famous Upton Sinclair quote that it's difficult to get a man to understand something when his salary depends on his not understanding it.

It reminds me of an episode of Star Trek ("The Measure of a Man", I think it's called) where it is argued that Data is just a machine, and Picard tries to prove that, no, he is a life form.

And the challenge is, how do you prove that?

Every time these LLMs get better, the goalposts move again.

It makes me wonder, if they ever did become sentient, how would they be treated?

It seems clear that they would be subject to deep skepticism and hatred, much more pervasive and intense than anything imagined in The Next Generation.

It is not only the peanut gallery that is skeptical:

https://www.math.columbia.edu/~woit/wordpress/?p=15362

Let's wait a couple of days to see whether there has been a similar result in the literature.

For the sake of clarity: Woit's post is not about the same alleged instance of GPT producing new work in theoretical physics, but about an earlier one from November 2025. Different author, different area of theoretical physics.

This thread is about "whenever a new breakthrough in AI use comes up", and the comment you are replying to correctly points out skepticism for the general case; it does not claim any relation to the current case.

You reached your goal though and got that comment downvoted.