> Can a current-style LLM ever be superintelligent? I suppose obviously yes - you'd simply need to train it on a large corpus of data from another superintelligent species
This is right, but we can already do a little of that in domains with verification. AlphaZero is an example of alien-level performance arising from non-human, self-generated training data.
Code and math are kind of in the middle. You can verify that the code compiles and that it solves the task against some criteria. So creative, alien strategies for doing the task can and will emerge from these synthetic data pipelines.
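Concretely, here is a minimal sketch of what that "middle ground" verification can look like inside a synthetic data pipeline: the candidate is rewarded only for compiling and passing checks, regardless of how alien its strategy looks. The `solve` entry point and the `(input, expected)` test cases are assumptions for illustration, not any particular pipeline's API.

```python
# A minimal sketch of the "free verification" that code offers, assuming a
# hypothetical setup where the model emits a Python function named `solve`
# and we hold a list of (input, expected_output) test cases for the task.

def verify_candidate(source: str, test_cases: list[tuple[object, object]]) -> float:
    """Return a reward in [0, 1]: does the code compile, and how many tests pass?"""
    try:
        compiled = compile(source, "<candidate>", "exec")  # check 1: it compiles
    except SyntaxError:
        return 0.0

    namespace: dict = {}
    try:
        exec(compiled, namespace)        # load the candidate's definitions
        solve = namespace["solve"]       # assumed entry point
    except Exception:
        return 0.0

    passed = 0
    for arg, expected in test_cases:     # check 2: it solves the task
        try:
            if solve(arg) == expected:
                passed += 1
        except Exception:
            pass
    return passed / len(test_cases)


# Example usage: an alien-looking candidate still earns full reward as long as
# the checks pass -- the verifier doesn't care how human the code looks.
candidate = "def solve(n):\n    return sum(range(n + 1))"
print(verify_candidate(candidate, [(3, 6), (10, 55)]))  # -> 1.0
```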
But it's not fully like Go either, because some of it is harder to verify (the world model the code is situated in, meta-level questions like what question to even ask in the first place). That's the frontier challenge: how do we create proxies, in domains without free verification, from which alien performance can emerge? If this "GPT-Zero" moment arrives, all bets are off.
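As a hedged sketch of what a proxy might look like when verification isn't free: score a candidate with several imperfect judges and abstain when they disagree. The `judge_fns` (a learned reward model, an LLM judge, a heuristic check) and the agreement threshold are purely illustrative assumptions, not a description of how any real system does this.

```python
# A sketch of a proxy reward for domains without free verification: combine
# several imperfect judges and only trust the signal when they agree.
from statistics import mean, pstdev

def proxy_reward(candidate: str, judge_fns, agreement_threshold: float = 0.1) -> float | None:
    """Return a consensus score in [0, 1], or None if the judges disagree too much.

    Abstaining is the crux: a noisy proxy that is always trusted gets
    reward-hacked, which is part of why this is the frontier challenge.
    """
    scores = [judge(candidate) for judge in judge_fns]
    if pstdev(scores) > agreement_threshold:
        return None  # no free verification here, so abstain rather than guess
    return mean(scores)


# Toy usage with stand-in judges.
judges = [lambda c: 0.80, lambda c: 0.75, lambda c: 0.82]
print(proxy_reward("some candidate plan or proof", judges))  # -> ~0.79
```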