This is the critical bit (paraphrasing):

Humans have worked out the amplitudes for integer n up to n = 6 by hand, obtaining very complicated expressions, which correspond to a “Feynman diagram expansion” whose complexity grows superexponentially in n. But no one has been able to greatly reduce the complexity of these expressions, providing much simpler forms. And from these base cases, no one was then able to spot a pattern and posit a formula valid for all n. GPT did that.

Basically, they used GPT to refactor a formula and then generalize it for all n. Then verified it themselves.

I think this was all already figured out in 1986, though: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.56... See also https://en.wikipedia.org/wiki/MHV_amplitudes

  > I think this was all already figured out in 1986 though
They cite that paper in the third paragraph...

  Naively, the n-gluon scattering amplitude involves order n! terms. Famously, for the special case of MHV (maximally helicity violating) tree amplitudes, Parke and Taylor [11] gave a simple and beautiful, closed-form, single-term expression for all n.
It also seems to be a main talking point.

I think this is a prime example of how easy it is to look at something from a high level, decide it's already solved, and draw an erroneous conclusion for lack of domain expertise. Classic "Reviewer 2" move. That said, I'm not a domain expert either, so if there really was no novelty over Parke and Taylor, I'm pretty sure this will get thrashed in review.

You're right. Parke & Taylor showed that the simplest nonzero amplitudes have two minus helicities, while one-minus amplitudes vanish (generically). This paper claims that vanishing theorem has a loophole: a new hidden sector exists, and the one-minus amplitudes are secretly there, but distributional.

> simplest nonzero amplitudes have two minus helicities while one-minus amplitudes vanish

Sorry, but I just have to point out how this field of maths reads like Star Trek technobabble to me.

Where do you think Star Trek got its technobabble from?

Have I got a skill for you!

trekify/SKILL.md: https://github.com/SimHacker/moollm/blob/main/skills/trekify...

So it's a garbage headline, from an AI vendor, trying to increase hype and froth around what they are selling, when in fact the "new result" has been a solved problem for almost 40 years? Am I getting that right?

Be careful, in the strength of your passions, that you don't become a stochastic word generator yourself.

No, you're not, and you might have a slight reading comprehension problem.

I feel for you, because you kinda got baited into this by the language in the first couple of comments. But whatever's going on in your comment is so emotional that it's hard to tell what you're asking for that you haven't already read. tl;dr: a proof stuck at n=4 for years now works for arbitrary n.

It bears repeating that modern LLMs are incredibly capable, and relentless, at solving problems that have a verification test suite. It seems like this problem did (at least for some finite subset of n)!
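
As a toy illustration of what "has a verification test suite" buys you (this is not the paper's actual harness, just a minimal sketch with made-up stand-in functions; the "amplitude" here is simply the sum 1+2+...+n so the example stays self-contained), a conjectured closed form can be kept or rejected purely mechanically by checking it against brute-force base cases:

  # Toy sketch only: a conjectured closed form checked against brute-force
  # base cases. Nothing below comes from the paper.

  def brute_force(n):
      # Reference value computed the slow, trusted way.
      return sum(range(1, n + 1))

  def conjectured(n):
      # Candidate closed form to be verified: n(n+1)/2.
      return n * (n + 1) // 2

  # A proposed formula that fails even one base case is rejected immediately;
  # this tight pass/fail loop is what LLMs exploit so effectively.
  for n in range(1, 7):
      assert conjectured(n) == brute_force(n), f"mismatch at n={n}"
  print("conjecture matches all checked base cases")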

This result, by itself, does not generalize to open-ended problems, though, whether in business or in research in general. Discovering the specification to build is often the majority of the battle. LLMs aren't bad at this, per se, but they're nowhere near as reliably groundbreaking as they are on verifiable problems.

Yes, this is where I just cannot imagine completely AI-driven software development of anything novel and complicated without extensive human input. I'm currently working in a space where none of our data models are particularly complex, but the trick is all in defining the rules for how things should work.

Our actual software implementation is usually pretty simple; often writing up the design spec takes significantly longer than building the software, because the software isn't the hard part - the requirements are. I suspect the same folks who are terrible at describing their problems are going to need help from expert folks who are somewhere between SWE, product manager, and interaction designer.

That paper from the 80s (which is cited in the new one) is about "MHV amplitudes" with two negative-helicity gluons, so "double-minus amplitudes". The main significance of this new paper is to point out that "single-minus amplitudes" which had previously been thought to vanish are actually nontrivial. Moreover, GPT-5.2 Pro computed a simple formula for the single-minus amplitudes that is the analogue of the Parke-Taylor formula for the double-minus "MHV" amplitudes.
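
For reference, and hedging on conventions (references differ in overall signs, couplings, and normalization), the Parke-Taylor expression for the color-ordered double-minus MHV amplitude in spinor-helicity notation is the single term

  % Parke-Taylor MHV formula; couplings and color factors stripped,
  % overall normalization is convention-dependent.
  A_n^{\mathrm{MHV}}\!\left(1^-,2^-,3^+,\dots,n^+\right)
    \;\propto\;
    \frac{\langle 1\,2\rangle^{4}}
         {\langle 1\,2\rangle\,\langle 2\,3\rangle\cdots\langle n\,1\rangle}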

> But no one has been able to greatly reduce the complexity of these expressions, providing much simpler forms.

Slightly OT, but wasn't this supposed to be largely solved with amplituhedrons?

You should probably email the authors if you think that's true. I highly doubt they didn't do a literature search first though...

You should be more skeptical of marketing releases like this. This is an advertisement.

They also reference Parke and Taylor. Several times...

Don't underestimate the willingness of physicists to skimp on literature review.

After last month's handling of the Erdős problems by LLMs, everyone writing papers, even physicists, should by now be aware that literature checks are approximately free.

Still pretty awesome though, if you ask me.

I think even a "non-intelligent" solver like Mathematica is cool - so hell yes, this is cool.

Big difference between “derives new result” and “reproduces something likely in its training dataset”.

I'm not sure whether GPT's ability goes beyond a formal math package's in this regard, or whether it's just way more convenient to ask ChatGPT rather than use that kind of software.