The idea that AI can discover anything is ridiculous. It can propose algorithms the way it generates any other piece of text, but only the human researcher is capable of analyzing the algorithm, proving that it works, and understanding what it is doing, i.e., pretty much everything that we call a new "discovery". I would have zero confidence in an algorithm "discovered" by an AI in isolation.
Theoretically, if I were to type into an LLM "Write a novel compression algorithm for images that produces output at least 25% smaller at the same speed and quality as ___" and it did, and I ran the code (which I didn't understand) and it worked, wouldn't that count?
The odds of that working, though, are of course pretty near 0. But theoretically, it could happen.
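To make "I ran the code and it worked" concrete, the check would look roughly like this. This is only a minimal sketch: zlib stands in for both the baseline codec and the generated one, the quality comparison (PSNR/SSIM) is left out, and `candidate_compress` is where the LLM's code would go.

```python
import time
import zlib

def baseline_compress(data: bytes) -> bytes:
    return zlib.compress(data, level=6)

def candidate_compress(data: bytes) -> bytes:
    # Placeholder: paste the LLM-generated compressor here.
    return zlib.compress(data, level=9)

def measure(fn, samples):
    start = time.perf_counter()
    total = sum(len(fn(s)) for s in samples)
    return total, time.perf_counter() - start

def claim_holds(samples) -> bool:
    base_size, base_time = measure(baseline_compress, samples)
    cand_size, cand_time = measure(candidate_compress, samples)
    # "25% smaller at the same speed"; quality (e.g. PSNR/SSIM) is not checked here.
    return cand_size <= 0.75 * base_size and cand_time <= base_time

if __name__ == "__main__":
    samples = [bytes(range(256)) * 1000]  # toy data; a real test needs real images
    print(claim_holds(samples))
```

Even that harness only tells you the claim holds on the inputs you fed it, which is part of why the odds of the whole scenario panning out are so low.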
You might find that if it did produce something, it might not be _novel_.
I might, but by the same token, I might also find that it was novel?
It's a remote possibility, but it is a possibility, isn't it?
That is a problem human researchers face too.
As you say, the odds of this happening are very close to zero. But suppose for a minute that it were possible. Did you learn anything? Do you really have a discovery? Was this done with a novel method, or by applying something that already exists? If you give this to somebody else, should they believe it works? Is the result even understandable by human beings? You'd need to answer so many questions that, in the end, even this would NOT be a discovery by the machine but by you.
Scientists discover things that I don't understand every day.
A sufficiently advanced discovery in, say, mathematics can only be understood by other mathematicians. Does that make it less of a discovery? So what's wrong if a machine discovers something that can only be analysed and proved by other machines?
The mathematicians understand the discovery, so what's the problem? With "AI", nobody understands it. If you ask, the AI may say it understands the results, but it is probably lying.
It can propose algorithms which it can then _itself test and iterate on_.
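Roughly along these lines (a sketch only; the LLM call and the benchmark are hypothetical stand-ins, not real APIs):

```python
def ask_llm_for_candidate(feedback: str) -> str:
    # Hypothetical stand-in: send the prompt plus prior test feedback to a model
    # and return generated source code. Not a real API.
    raise NotImplementedError

def run_benchmark(source: str) -> tuple[bool, str]:
    # Hypothetical stand-in: run the candidate against a test suite and return
    # (passed, diagnostic feedback for the next round).
    raise NotImplementedError

def iterate(max_rounds: int = 10) -> str | None:
    feedback = ""
    for _ in range(max_rounds):
        candidate = ask_llm_for_candidate(feedback)
        passed, feedback = run_benchmark(candidate)
        if passed:
            # Passing the tests still says nothing about novelty, or about
            # correctness beyond the cases the benchmark covers.
            return candidate
    return None
```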