> What do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven't been able to make a single new connection that has led to a discovery?
If that's what you're experiencing, then you're not asking them the right questions.
If you're at the edge of your field, so you can judge whether something is novel, and you have a direction you'd like the LLM to explore, just ask it. Prompt it to come up with ideas for how to solve X, or categorize Y, or analyze Z. Encourage it to borrow ideas from, or find parallels in, closely related or distantly related fields.
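To make that concrete, here's a minimal sketch of the kind of brainstorming prompt I mean. The helper function, topic, and field names are just made-up examples, not any particular API:

```python
# Hypothetical helper that builds a brainstorming prompt of the kind
# described above; the topic and fields below are invented examples.
def brainstorm_prompt(problem: str, near_field: str, far_field: str,
                      n_ideas: int = 10) -> str:
    """Ask for candidate ideas, explicitly nudging the model toward
    analogies in a closely related and a distantly related field."""
    return (
        f"Propose {n_ideas} distinct approaches to the following problem:\n"
        f"{problem}\n\n"
        f"For each approach, note any parallels in {near_field} "
        f"(closely related) or {far_field} (distantly related), "
        "and say how speculative it is."
    )

prompt = brainstorm_prompt(
    "categorizing failure modes in distributed consensus protocols",
    near_field="database replication",
    far_field="epidemiology",
)
print(prompt)
```

You'd paste the result into a chat (or send it via whatever API you use) and then iterate on the promising answers, which is where the actual guiding happens.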
You will probably quickly find yourself with a ton of new ideas, of varying quality, in the same way as if you were brainstorming with a colleague.
But they don't work "solo". They need you to guide the conversation. When you do, they're chock-full of new ideas, connections, and discoveries. Just as with people, though, the quality varies. If you're looking for a good startup idea, you need to sift through hundreds; likewise, if you're looking for an idea for a paper you could publish, there are a lot of hypotheses to sift through. And you're supplying your own expert "good taste" to determine what's worth pursuing and developing further.
LLMs don't just magically come up with new proven discoveries unprompted. But they turn out to be fantastic research and idea-generation partners. They excel at combining existing related-but-distant facts and models and interpretations in novel ways.