It's a very long post with a mix of technical (math) and philosophical sections. Here are, IMHO, the most striking points to reflect upon.

> It seems to me that training beginning PhD students to do research [...] has just got harder, since one obvious way to help somebody get started is to give them a problem that looks as though it might be a relatively gentle one. If LLMs are at the point where they can solve “gentle problems”, then that is no longer an option. The lower bound for contributing to mathematics will now be to prove something that LLMs can’t prove, rather than simply to prove something that nobody has proved up to now and that at least somebody finds interesting.

Training must start from the basics though. Of course, everybody's training in math starts with summing small integers, something calculators have been doing flawlessly for a long time.

The point is perhaps confirmed by another comment further down in the post:

> by solving hard problems you get an insight into the problem-solving process itself, at least in your area of expertise, in a way that you simply don’t if all you do is read other people’s solutions. One consequence of this is that people who have themselves solved difficult problems are likely to be significantly better at solving problems with the help of AI, just as very good coders are better at vibe coding than not such good coders

People pay coders to build stuff that they will use to make money and I can happily use an AI to deliver faster and keep being hired. I'm not sure there is a similar point in math. Again, from the post:

> suppose that a mathematician solved a major problem by having a long exchange with an LLM in which the mathematician played a useful guiding role but the LLM did all the technical work and had the main ideas. Would we regard that as a major achievement of the mathematician? I don’t think we would.

> by solving hard problems you get an insight into the problem-solving process itself, at least in your area of expertise, in a way that you simply don’t if all you do is read other people’s solutions. One consequence of this is that people who have themselves solved difficult problems are likely to be significantly better at solving problems with the help of AI, just as very good coders are better at vibe coding than not such good coders

Yes, but it's not just that if you solved a problem yourself, you're better at solving other problems; it's also that you actually understand the problem you solved, far better than if you had simply read a proof written by somebody (or something) else.

I see this happening in the enterprise. People delegate work to some LLM; the work isn't always bad, sometimes it's even acceptable. But it's not their work, and as a result the author doesn't know or understand it any better than anyone else! They don't own it, they can't explain it. They literally have no value whatsoever; they're a passthrough; they're invisible.

Are you a cutting edge research scientist or something? Everyone I know works in the same domain every day. The problems are the same. People aren't solving brand new problems to humanity every day. We make budgets and look at ticket counts. Roll out patches. Replace hardware. Upgrade software packages. Make a new dashboard to track a project. I guess if every day is a completely novel thing for you, ok. I feel like the goalposts have moved to an absolutely ridiculous place. Oh no, I won't have a bunch of random error log numbers memorized anymore? Who gives a shit. I just want to afford a place to live so I can play my guitar and make something good for dinner. Maybe I'm just old, but I don't see why the average person needs to be a fuckin genius problem solver.

I think that’s fine, but 1) that mentality leaves you extremely vulnerable to being disrupted by LLMs, and 2) IMO, if you are solving the same problems every day, it means you are not making progress on the root causes of those problems. What you are describing is toil, not knowledge work.

I don't think it matters much what kind of problem it is. If it is challenging enough to benefit from assistance and you end up playing a minor role in the solution, you are putting yourself in the worst position possible. You lose your edge for functioning within the problem space, and it raises the question of why you are in the loop at all. If it's job security you want, transforming your role into LLM babysitter seems like the worst way to ensure it.

So how would an LLM being able to do your job help you afford a place to live?

> Would we regard that as a major achievement of the mathematician? I don’t think we would.

1. Does it matter, really? 2. Is it very different from previous computer-aided proofs, philosophically?

It matters because most mathematicians thrive on the recognition of their achievements. If any mediocre mathematician could have done what you did, that takes away motivation and fulfillment.

1. It matters because there are human mathematicians who pride themselves on their mathematical achievements. Mathematics is art to them.

2. Yes, it is. Because pre-LLM-era computer-aided proofs were about using the computer either to check a large number of cases or to verify that each step in a proof mechanically follows from the axioms.
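For what it's worth, the second kind of computer assistance is what proof assistants still do today. A toy Lean sketch (my own illustration, not from the post), where the kernel mechanically verifies that the proof term establishes the stated theorem:

```lean
-- Toy example: the Lean kernel checks that this proof term
-- really has the type of the stated theorem, i.e. that each
-- step follows from the axioms and existing lemmas.
theorem toy_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The computer here verifies the proof; it doesn't find it. That's the contrast with an LLM proposing the ideas itself.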

I feel like you slightly miss both points.

> Training must start from the basics though.

Sure, but the point is that at some point (e.g. when starting a PhD) one needs to do research, not learn the basics. And LLMs make that harder, because they solve the "easy research" part.

Take a young lion "fighting/playing" with another young lion as a way to learn how to fight, and later hunt. And suddenly they get TikTok and are not interested in playing anymore. Their first encounter with hunting will be a lot harder, won't it?

> People pay coders to build stuff that they will use to make money and I can happily use an AI to deliver faster and keep being hired.

Again, that's true but missing the point: if you never get to be a "good coder", you will always be a "bad vibe coder". Maybe you can make money out of it, but the point was about becoming good.

But perhaps we should regard it as a major achievement.

I mean, in the same way that getting Wolfram Alpha to solve a really hard/ugly differential equation is, I suppose.

Insane that we have a system capable of making innovative math proofs, and people dismiss it as unimpressive.

The creation of the system is deeply impressive; so are compilers, but I don't raise a toast to them each time I build my code. As with generated art, people aren't going to appreciate the output on the same level.

Wow you consider this on the same level of impressiveness as a compiler?

I actually consider compilers more impressive, and a compiler was an important part of making this possible.

To each their own. I mean, compilers didn’t produce trillions of dollars of investment, nor produce serious and profound philosophical questions about the nature of consciousness, but you’re right, thank god we have C.

Compilers just made it all possible, but they are not new and shiny. LLMs did not produce the philosophical questions, but they do raise them. It's worth noting that computers were changing the way we think about consciousness long before LLMs, largely thanks to compilers.