> I read a paper yesterday where someone had used an LLM to read other papers and was claiming that this was doing science.

I'm not trying to be facetious or eye-poking here, I promise... But I have to ask: What was the result? Did the LLM generate useful new knowledge at some quality bar?

At the same time, I do believe something like "Science is more than published papers; it also includes the process behind it, sometimes dryly described as merely 'the scientific method'. People sometimes forget other key ingredients, such as a willingness to doubt even highly regarded fellow scientists, who might even be giants in their fields. Don't forget how it all starts with a creative spark of sorts, an inductive leap, followed by a commitment to design some workable experiment given the current technological and economic constraints. In some ways, the ability to find patterns in the noise is the easiest part."

Still, I believe this claim: there is NO physics-based reason that says AI systems cannot someday cover every aspect of the quote above: doubting, creativity, induction, confidence, design, commitment, follow-through, pattern matching, iteration, and so on. I think the question is probably "when", not "if" this will happen, but hopefully, before we get there, we ask "What happens when we reach AGI? ASI?" and "Do we really want that?"

There's no "physics-based" reason a rat couldn't cover all those aspects either. That would truly make Jordan Peterson, the big rat, the world's greatest visionary. I wouldn't count on it, though.