> then you also need to explain how they will be able to develop the skills to analyze and “run more thorough verification” INDEPENDENTLY of LLMs

I’m sure the students will manage. This is the exact same discussion we’ve all been through before, during the rise of Wikipedia, just wearing a new hat. The answer is “vet your sources, don’t trust unsourced claims.” The way they’ll develop the skills is the same way aspiring scientists and students have developed them throughout human history: by having good teachers teach them.

Here’s a very simple program I thought of off the top of my head in a minute or two. I’m sure people whose job it is to create educational content will be able to come up with something far better:

Design a small research project with as many LLM-tailored pitfalls as possible. It involves real measurements and real data, and the students may use their LLM to whatever extent they wish. Then we compare results against the reference data, uncover the myriad ways in which LLMs can taint the data and the conclusions drawn from it, and explore ways to mitigate them.

Probably not perfect and nitpickable to oblivion, but also not the hardest mental exercise I’ve ever subjected myself to.

Science did fine in a world where information took years or decades to travel the globe, where people thought diseases were spread by evil mojo and that a grand total of four liquids circulated inside our bodies, and where scientists who said the wrong things were actively hunted down and silenced. It got there. It’ll do fine in a world where you can semantically search every single written source model trainers could get their hands on _and_ ground the results with references to tangible sources using the same natural language query.

> The answer is “vet your sources, don’t trust unsourced claims.”

This was already a problem for Wikipedia (articles being written which, upon further investigation, were based on nothing but Wikipedia itself). With LLMs themselves facilitating AI slop and plagiarism, this problem reaches a scale at which it becomes impossible to control.

> I’m sure the students will manage.

The problem with your hubris is that you are not going to be the only one facing the fallout when this blows up.