> So if Bob can do things with agents, he can do things.
I think the key issue is whether Bob develops the ability to choose valuable things to do with agents and to judge whether the output is actually right.
That’s the open question to me: how people develop the judgment needed to direct and evaluate that output.
There's a long, detailed, often-repeated answer to your open question in the article.
Namely: if you can't do it without the AI, you can't tell when it's given you plausible-sounding bullshit.
So Bob just wasted everyone's time and money.
You can verify by running the code and seeing if it works.
Seriously? The article is about scientists learning to do science, not programming.
I know we're not supposed to say RTFA, but your comment really takes the cake.
My mistake, then. There's so much AI discussion here on HN from the developer perspective that it's easy to get lost.
But why wouldn't my comment apply to all science? You have to run experiments and gather evidence; otherwise it's not science, it's just academia.
If you read the article, it covers much of this. It's not just about gathering evidence, but also about evaluating the data you've collected.