I'm a mathematician relying heavily on AI as an association engine of massive scope, to organize and expand my thoughts. One doesn't get the best results by "testing" AI.
A surfboard is also an amazing tool, but there's more to operating one than telling it which way to go.
Many people want self-driving cars so they can drink in the back seat watching movies. They'll find their jobs replaced by AI, with a poor quality of life because we're a selfish species. In contrast, Niki Lauda trusted fellow Formula 1 race car driver James Hunt to race centimeters apart. Some people want AI to help them drive that well. They'll have great jobs as AI evolves.
Garry Kasparov pioneered "freestyle" chess tournaments after his defeat by Deep Blue, where the best human players were paired with computers, coining the "centaur" model of human-machine cooperation. This is frequently cited in the finance literature, where it is recognized that AI-guided human judgement can outperform either humans or machines alone.
Any math professor knows how to help graduate students confidently complete a PhD thesis, or how to humiliate students in an oral exam. It’s a choice. To accomplish more work than one can complete alone, choose the former. This is the arc of human evolution: we develop tools to enhance our abilities. We meld with an abacus or a slide rule, and it makes us smarter. We learn to anticipate computations, like we’re playing a musical instrument in our heads. Or we pull out a calculator that makes us dumber. The role we see for our tools matters.
Programmers who actually write better code using AI know this. These HN threads are filled with despair over the poor quality of vibe coding. At the same time, Anthropic is successfully coding Claude using Claude.
Centaurs are a transient phenomenon. In chess, the era of centaur supremacy lasted only about a decade before computers alone eclipsed human+computer. The same will be true in every other discipline.
You can surf the wave, but sooner or later, the wave will come crashing down.
Last I heard, which was last year, human + computer still beat either by themselves. You got a link about what's changed?
Why do you nitpick his illustrative example and entirely ignore his substantive one about finance?
They are transient only in those rare domains that can be fully formalized/specified. Like chess. Anything that depends on the messy world of human-world interactions will require humans in the loop for translation and verification purposes.
From a human, to a centaur, to a pegasus, as it were.
How is chess not fully specified?
I'm guessing that they were referring to the depth of the decision tree that can be computed in a given amount of time?
In essence, it used to be (I have not stayed current) that the "AI" was limited in how many moves into the future it could search when deciding which move was best.
That limit means it is impossible to enumerate all possible continuations and determine which one is guaranteed to lead to a win. (The "best" that can be done is to have a machine-learning algorithm choose the set of moves a human would most likely play from the current state, and estimate which of that set is most likely to lead to a win.)
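Roughly the shape of it, as a toy sketch (hypothetical code, not any real engine's; the game is a stand-in one-pile Nim):

```python
# Toy sketch of depth-limited minimax (hypothetical, not any real
# engine's code). The horizon is explicit: past the depth cutoff the
# search falls back to a heuristic guess, so a forced win or loss that
# lies beyond the cutoff is invisible.

def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state, maximizing)  # heuristic stands in for the unsearched subtree
    values = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      evaluate, legal_moves, apply_move)
              for m in moves]
    return max(values) if maximizing else min(values)

# One-pile Nim: remove 1 or 2 stones; taking the last stone wins.
def legal_moves(n):   return [1, 2] if n > 0 else []
def apply_move(n, m): return n - m
def evaluate(n, maximizing):
    if n == 0:                  # the player to move has already lost
        return -1 if maximizing else 1
    return 0                    # horizon reached: position "looks even"

print(minimax(3, 1, True, evaluate, legal_moves, apply_move))  # 0: loss hidden beyond horizon
print(minimax(3, 3, True, evaluate, legal_moves, apply_move))  # -1: deep enough to see it
```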
How transient depends on the problem space. In chess, centaurs were transient. In architecture or CAD, they have been the norm for decades.
I'm highly worried that you are right. But what gives me hope is that people still play chess, I'd argue more than ever. People still buy paper books and vinyl records. People still appreciate handwritten greeting cards over printed ones, and pay extra to hear live music when recordings are free and will likely sound much better. People are willing to pay an order of magnitude more for a seat at a live play in a theater, or pay a premium for handmade products over knockoffs that are almost impossible to distinguish from them.
Agreed. But I don't think the time scale will be similar.
Chess is relatively simple in comparison, as complex as it is.
On the other hand, chess is not very financially rewarding. IBM put some money into it for marketing briefly, but that’s probably equal to about five minutes of spend from the current crop of LLM companies.
This is beautifully written, thank you for writing it.
Typing out solutions to problems was only part of the job description because there was no other way to code. Now we have a far better way.
> At the same time, Anthropic is successfully coding Claude using Claude.
Is that why everyone keeps complaining about the quality getting worse?
> I'm a mathematician relying heavily on AI as an association engine of massive scope, to organize and expand my thoughts.
Can you share more about your architecture & process? I'm also a researcher involved in math research (though not strictly speaking a mathematician, but I digress). I've often thought about using AI on my notes, but they're messy, and even then I can't quite figure out what to ask: prioritization, connecting ideas, lit search, etc.
I'd love to hear what you do.
What a beautifully articulated take!
That centaurs can outperform humans or AI systems alone is a weaker claim than "these particular AI systems have the required properties to be useful for that". Chess engines consistently produce strong lines, and can play entire games without human assistance: using one does not feel like gambling, even if occasionally you can spot a line it can't. LLMs catastrophically fail at iterated tasks unless they're closely supervised, and using LLMs does feel like gambling. I think you're overgeneralising.
There is definitely a gap in academic tooling, where an "association engine" would be very useful for a variety of fields (and for encouraging cross-pollination of ideas between fields), but I don't think LLMs are anywhere near the frontier of what can be accomplished with a given amount of computing power. I would expect simpler algorithms operating over more explicit ontologies to be much more useful. (The main issue is that people haven't made those yet, whereas people have made LLMs.) That said, there's still a lot of credit due to the unreasonable effectiveness of literature searches: it usually takes me only 10 minutes a day for a couple of days to find the appropriate jargon, at which point I gain access to more papers than I know what to do with. LLM sessions that substitute for literature review tend to take more than 20 minutes: the main advantage is that people actually engage with (addictive, gambling-like) LLMs in a way that they don't with (boring, database-like) literature searches.
I think developing the habit of "I'm at a loose end, so I'll idly type queries into my literature search engine" would produce much better outcomes than developing the habit of "I'm at a loose end, so I'll idly type queries into ChatGPT", and that's despite the state-of-the-art of literature search engines being extremely naïve, compared to what we can accomplish with modern technology.
We're in agreement. I understand how much harder it is to "think with AI"; the last year of my life has been a brutal struggle to figure this out.
I also agree that neural net LLMs are not the inevitable way to implement AI. I'm most intrigued by the theoretical underpinnings of mathematical proof assistants such as Lean 4. Computer scientists understand the word problem for strings as undecidable. The word problem for typed trees with an intrinsic notion of induction is harder, but constructing proofs is finding paths in this tree space. Just as mechanical computers failed in base ten while at the same time Boole had already developed base two logic, I see these efforts merging. Neural nets struggle to simulate recursion; for proof assistants recursion is baked in. Stare at these tree paths and one sees thought at the atomic level, begging to be incorporated into AI. For now the river runs the other way, using AI to find proofs. That river will reverse flow.
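Concretely, here is a toy Lean 4 sketch of what I mean (my own illustration, not anything from Lean's library): the data type, the recursive functions, and the inductive proof all walk the same tree.

```lean
-- Toy Lean 4 sketch (illustrative): recursion over an inductive type
-- is primitive, and a proof by induction follows the same tree shape
-- as the definitions.

inductive Tree where
  | leaf : Tree
  | node : Tree → Tree → Tree

-- Structural recursion is accepted directly.
def size : Tree → Nat
  | .leaf     => 1
  | .node l r => size l + size r

def mirror : Tree → Tree
  | .leaf     => .leaf
  | .node l r => .node (mirror r) (mirror l)

-- The proof branches exactly where the tree does.
theorem size_mirror (t : Tree) : size (mirror t) = size t := by
  induction t with
  | leaf => rfl
  | node l r ihl ihr => simp [mirror, size, ihl, ihr, Nat.add_comm]
```

The `induction` step literally forks where the tree does; that is the path-finding through tree space I'm describing.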
Lean 4 is not a theoretically interesting proof assistant. If you're interested in such things, look into Rocq (which uses CoIC, like Lean, but is more rigorous about it), the HOL logic, Isabelle/HOL's automation suite (though Isabelle proper is fairly mediocre, apart from being the thing everyone's standardised around), Lean-auto (https://arxiv.org/abs/2505.14929), and whatever SAT solvers are state-of-the-art this week. Like the tools for symbolic integration and frequentist statistics, there isn't any magic: the power comes from handling enough uninteresting special cases that we get broad coverage. (Personally, I think there's still a lot of power being left on the table by using overly-general algorithms: sledgehammer is used to crack a lot of nuts, even when that takes quadratic time or longer.)
While CoIC has recursion "baked in", HOL does not. It turns out that we can treat structural recursion as a derived property, even over coinductively-defined types. We don't even need a notion of ordinals for this! (See https://www.tcs.ifi.lmu.de/staff/jasmin-blanchette/card.pdf and https://matryoshka-project.github.io/pubs/bindings.pdf.)
Lean 2 used HoTT, which was theoretically interesting, but not enough was known about HoTT at the time (in particular, whether it was a constructive logic – I think we have all the pieces for an explicit construction via cubical type theory now, but I don't know that anyone's put the pieces together), so that direction has been mostly abandoned. I think there's useful work to be done in that direction, but with the current state of HoTT pedagogy, I doubt I'd ever be able to keep on top of it enough to contribute; and with Lean 4 taking so much of the funding, I don't think we'll see much work in this direction until HoTT is easier to learn.
I still think you're overgeneralising. What actual thing does your poetic tree / thought / river analogy correspond to?
We made those in the '80s. Much was learned about why probabilistic stochastic parrots are a far better model.
Those were "let's get experts to manually code every single document according to a schema defined in advance". Nowadays, we have techniques for automatically generating explicit pseudo-semantic ontology representations from large datasets (see, for example, https://openaccess.thecvf.com/content_CVPR_2019/papers/Zhang... for image classification tasks). Getting a machine learning model to identify field-specific heuristics, map conventions from one field to another, and then construct an index that allows us to quickly produce a search / proximity metric from an arbitrary specification, was not really possible in the 80s.
"Throw a massive neural network at it" is an extremely inefficient way to get results, and doesn't generalise well – for instance, there's no easy way to get online learning for a transformer model, whereas that capability just falls out of most search engine database systems. (The underlying relational database engines had a lot of work put in to make online CRUD work reliably, but that work has been done now, and we can all build on top of it without a second thought.)
Fair enough.