how can you ask this question on a post titled "Amateur armed with ChatGPT solves an Erdős problem"???? are you looking for some randomised control trial? omg
Just look at the comments from AI boosters; it is self-evident that no intelligence is being equalized.
https://news.ycombinator.com/item?id=47911291
Idk, going out on a limb and guessing the folks who hang out on erdosproblems.com aren’t run-of-the-mill dumbasses. The prompt, if you look at it, is actually quite clever. Not as clever as the proof. But far from the equalization OP posits.
Why be such an absolutist?
How about I caveat it the way you want:
AI equalizes intelligence in the sense that it closes the gap. Not perfectly, not infinitely, but directionally. The distribution compresses. The floor rises faster than the ceiling, so people who used to be far apart end up operating much closer together.
You can already see it in the Erdős example. The person who wrote that prompt wasn’t some random idiot. It took real cleverness to even set it up that way. But the fact that they could get that far, with assistance, is exactly the point. The distance between “amateur” and “expert” shrinks when the tool fills in large parts of the path.
Now extend that forward. Today it’s one clever person, one problem, one careful interaction. As the tooling improves, that same pattern scales. Better reasoning, better search, better guidance. The amount of lift the tool provides increases, which means the gap continues to narrow.
All the supposed “counterpoints” people bring up are already implied in the claim. “Equalize” here obviously means moving closer to equality. Is it NOT obvious that LLMs don't actually equalize intelligence to a level of 100%? Do I actually need to spell that out? If there was nothing at stake, I wouldn't need to.
But instead people latch onto the most absurd version possible, knock that down, and act like they’ve said something meaningful. It’s the same mindset as that guy demanding a formal paper or citation for an observation you can see unfolding in real time. Not because it’s unclear, but because engaging with the actual claim is uncomfortable. It’s easier to distort it into something extreme and dismiss it than to admit the gap is closing.
Directionally it is correct - an amateur wouldn't be able to do this without ChatGPT. You can't expect maximal democratisation.
God, do people not read my posts? I wrote this: "It also exposes their ACTUAL intelligence which is to say most of HN is not too smart."
These types of people need citations for the time of day. They don't know how to debate or discuss in abstract terms. Reality freezes over if no scientific papers exist on the topic.