> I’ve now ostensibly understood what a derivative does and what it’s used for, yet I have zero idea how to mathematically do it. Does that make any results I gain from this intuitive understanding any less valuable?

From a science standpoint, I'd say whatever "results" you got are completely worthless.

> I’ll generally have some sort of hypothesis of what kind of result I’m expecting, given that my understanding is correct

And how do you know whether your understanding is correct, if you are only taking what the LLM gives you and are not able to verify it independently?

> Science is what happens when you expect something, test something, and get a result.

Right, but has any LLM come up with a hypothesis on its own? Has any AI said, "given all this literature I've read, I'd expect <insert something completely outside the training data space>"?

Asking all of these questions after (allegedly) reading my entire comment means either that you didn't pay attention, in which case I'm not going to spend any more effort responding; or that you've completely missed the point, in which case I can probably save myself the effort anyway. Either way, if you're genuinely interested in answers to your questions rather than merely posturing, I suggest you re-read carefully and then make a better-faith attempt at engaging with what I wrote.

I'll leave these direct quotes from the comment as a hint:

> But that only matters if I take its output at face value. […] If I know that a lot of unknown unknowns exist around a thing I’m working with, then I also know that unexpected results, as well as expected ones, require more thorough verification.

The problem I have with your logic is that you are hedging your arguments so much that the whole point becomes meaningless. If you are trying to argue that young aspiring scientists will be able to use LLMs to learn new concepts instead of doing the hard work themselves, then you also need to explain how they will develop the skills to analyze results and "run more thorough verification" INDEPENDENTLY of LLMs.

> then you also need to explain how they will be able to develop the skills to analyze and “run more thorough verification” INDEPENDENTLY of LLMs

I’m sure the students will manage. This is the exact same discussion we’ve all been through before, during the rise of Wikipedia, just wearing a new hat. The answer is “vet your sources, don’t trust unsourced claims.” The way they’ll develop the skills is the same way aspiring scientists and students have developed them throughout human history: by having good teachers teach them.

Here’s a very simple program I came up with off the top of my head in a minute or two. I’m sure people whose job it is to create educational content will be able to come up with something far better:

Design a small research project with as many LLM-tailored pitfalls as possible. It involves real measurements and real data, and the students may use their LLM to whatever extent they wish. Then, we compare results against the reference data, discover the myriad ways in which LLMs can taint the data and the conclusions drawn from it, and explore ways to mitigate them.

Probably not perfect and nitpickable to oblivion, but also not the hardest mental exercise I’ve ever subjected myself to.

Science did fine in a world where information took years or decades to travel the globe, where people thought diseases were spread by evil mojo and that we had a grand total of four liquids circulating inside our bodies, and where scientists saying the wrong things were actively hunted down and silenced. It got there. It’ll do fine in a world where you can semantically search every single written source model trainers could get their hands on _and_ ground the results with references to tangible sources using the same natural language query.

> The answer is “vet your sources, don’t trust unsourced claims.”

This was already a problem for Wikipedia (articles being written which, upon further investigation, were based on nothing but Wikipedia itself). With LLMs themselves facilitating AI slop and plagiarism, the problem grows to a scale at which it becomes impossible to control.

> I’m sure the students will manage.

The problem with your hubris is that you will not be the only one facing the fallout when this blows up.