If this article was written a year ago, I would have agreed. But knowing what I know today, I highly doubt that the outcomes of LLM/non-LLM users will be anywhere close to similar.

LLMs are exceptionally good at building prototypes. If the professor needs a month, Bob will be done with the basic prototype of that paper by lunch on the same day, and will try out dozens of hypotheses by the end of the day. He will not be chasing some error for two weeks; the LLM will very likely figure it out in a matter of minutes, or never introduce it in the first place. Instructing it to validate intermediate results and to profile along the way can work magic.

The article is correct that Bob will not have understood anything, but if he wants to, he can spend the rest of the year trying to understand what the LLM has built for him, having already verified in the first couple of weeks that the approach actually works. Even better, he can ask the LLM to train him to do the same if he wishes: learn why things work the way they do, why something doesn't converge, etc.

Assuming that Bob is willing to do all that, he will progress way faster than Alice. LLMs won't take anything away if you are still willing to take the time to understand what it's actually building and why things are done that way.

5 years from now, Alice will be using LLMs just like Bob, or she'll be without a job if she refuses to, because the place will be full of Bobs, with or without understanding.

The problem is that in most environments Bob won't spend the rest of the year figuring out what the LLM did, because Bob will be busy prompting the LLM for the next deliverable. And if all Bob has time for is to prompt LLMs, and not to understand, there will be a ceiling to Bob's potential.

This won't affect everyone equally. Some Bobs will nerd out and spend their free time learning, but other Bobs won't.

Why would Bob only have time to prompt LLMs? Strange strawman. Many uni courses have always had an element of "you get out what you put in"; it's the same with LLMs.

Why would the university look at the amount of work a student gets done, conclude the student can get 12x done because they can do a year's work in a month, and not make the student do 12x more work?

And it’s not strictly speaking university we’re talking about. The way we understand work is going to fundamentally change. And we’re not going to value the people who use LLMs to get 1x done.

But yes, university was always about how much work you put into it, and LLMs are going to make that 10x more obvious.

The point is that the Bob and Alice comparison is already a strawman, but I do squarely believe it's the people with the best mental models, not the people who "get AI", who will win in the new world. If you're curious and good at developing mental models, you can learn "AI" in a week. But if you're curious and good at developing mental models, you've probably already lapped both Bob and Alice.

Honestly I’m not going to review the thread to see if we got our wires crossed at some point, but I agree with your last comment!


Bob will never figure out that there is an error in his paper. If someone tells him, the LLM will have trouble figuring it out as well; remember, the LLM inserted the error to make the result "look right".

Your perspective is incomplete. In the real world Bob is supposed to produce outcomes that work. If he moves on into industry and keeps producing hallucinated, skewed, manipulated nonsense, then he will fall flat instantly. If he manages to survive unnoticed, he will become CEO. The latter is rather unlikely.

That's an odd opinion to hold. That's not what real-world usage shows is happening.

Why do you even take the time to write?

One issue with this is that you remember and learn when you put in actual effort. You can read what the LLM teaches you for years and not reach the same level of understanding as someone who actually struggled trying to design, implement, and debug by hand.

I don't think we have good answers, unfortunately. I'm very happy to be able to get the exact tools I want for my specific niche and to run experiments in no time. But I also see that intellectually I do not engage at the same level. I can justify my reasoning for the high-level design decisions I tried to get the agent to follow, but if there is an issue, or I need to justify an implementation decision, that's way messier. I've had that experience a few times over the past year, and every time I either have to reverse engineer what the agent might have been doing before I can answer, or I realize I completely misunderstood a specific protocol because I didn't have to actively engage with it.

> LLMs won't take anything away if you are still willing to take the time to understand what it's actually building and why things are done that way.

Isn't this like learning to swim by watching others explain swimming? Bob would think he knows how to swim, until he has to get into the water.

"LLMs won't take anything away if you are still willing to take the time to understand what it's actually building"

But do you actually understand it? The article argues exactly against this point: that you cannot understand the problems in the same way when letting agents do the initial work as you would when doing it without agents.

from the article: "you cannot learn physics by watching someone else do it. You have to pick up the pencil. You have to attempt the problem. You have to get it wrong, sit with the wrongness, and figure out where your reasoning broke. Reading the solution manual and nodding along feels like understanding. It is not understanding. Every student who has tried to coast through a problem set by reading the solutions and then bombed the exam knows this in their bones. We have centuries of accumulated pedagogical wisdom telling us that the attempt, including the failed attempt, is where the learning lives. And yet, somehow, when it comes to AI agents, we've collectively decided that maybe this time it's different. That maybe nodding at Claude's output is a substitute for doing the calculation yourself. It isn't. We knew that before LLMs existed. We seem to have forgotten it the moment they became convenient."

This is the first fundamental flaw of the article

> Bob's weekly updates to his supervisor were indistinguishable from Alice's. The questions were similar. The progress was similar. The trajectory, from the outside, was identical.

No they won't be. They might be worse. They might be better. But they'll be very different.

And, like you said...

> Alice and Bob had the same year. One paper each.

No they won't. Alice would've taken a year. Bob would've taken a few days.

You've already covered why that might actually be OK, so I'll talk about the author's other error:

> This sounds idealistic until you think about what astrophysics actually is. Nobody's life depends on the precise value of the Hubble constant. No policy changes if the age of the Universe turns out to be 13.77 billion years instead of 13.79. Unlike medicine, where a cure for Alzheimer's would be invaluable regardless of whether a human or an AI discovered it, astrophysics has no clinical output. The results, in a strict practical sense, don't matter. What matters is the process of getting them: the development and application of methods, the training of minds, the creation of people who know how to think about hard problems. If you hand that process to a machine, you haven't accelerated science. You've removed the only part of it that anyone actually needed.

Keep asking why. Why does the development and application of methods, the training of minds, matter?

The goal isn't abstract. The goal is still ultimately the benefit of humanity, just like the cure for Alzheimer's.

Humanity learned physics, so we made rockets and now we have satellites, and the entire planet is connected with communication and information.

Humanity must continue to invest in astrophysics so that we do not get wiped out by a single rogue asteroid barreling through the cosmos, like the dinosaurs were.

Now I'm not saying that there aren't other benefits to producing generally intelligent humans who know how to think. But at the end of the day, the purpose of astrophysics is no less existential than the purpose of developing medicine.

I want to know the age of the universe so that we can understand what created it, whether we can reverse entropy, and whether there is anything beyond the universe. That is a quest for humanity that will take hundreds if not millions of years.