So I studied machine learning too. One of the main things I learned is that for any given problem there is an ideally sized model that, when trained, will produce the lowest error rate. With multi-task learning (training one model on multiple problems), that ideal model is larger, but there is still an optimum size. It seems to me the same should hold for AGI: there will be an ideally sized model, and I wouldn't be surprised if its complexity turned out to be very similar to that of the human brain. If that is the case, then some sort of superintelligence isn't possible in any meaningful way.

This seems to track with what we're seeing in today's LLMs. When labs build bigger models, the result often doesn't perform as well as the previous one, which may have been at or near that maximum/ideal complexity. I suspect we will keep running into this barrier over and over again.
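That "ideal size" is the classical bias-variance picture. Here's a minimal sketch of it on a synthetic task, with polynomial degree standing in for model size; all the names and constants are illustrative, not from any particular paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression task: noisy samples of a smooth function.
def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)
    return x, y

x_train, y_train = make_data(40)
x_val, y_val = make_data(200)

# Sweep "model size" (polynomial degree) and track validation error.
for degree in range(1, 16):
    coeffs = np.polyfit(x_train, y_train, degree)
    val_mse = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    print(f"degree={degree:2d}  val MSE={val_mse:.3f}")
# Classically, validation MSE falls as underfitting shrinks, bottoms out
# at some intermediate degree, then rises again as the model overfits
# the noise: a U-shaped curve with one "ideal" size at the bottom.
```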
> for any given problem there is an ideally sized model that, when trained, will produce the lowest error rate.
You studied ML before the discovery of "double descent"?
https://youtu.be/z64a7USuGX0
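In case the reference is unfamiliar: "double descent" is the observation that test error can peak around the interpolation threshold (roughly, parameters ≈ training samples) and then fall again as the model keeps growing, instead of rising forever. A minimal sketch of that shape, using ridgeless (minimum-norm) linear regression on synthetic data; every name and constant here is illustrative, not from the linked talk:

```python
import numpy as np

rng = np.random.default_rng(0)

n_train, n_test, d_total = 100, 1000, 600
# Ground-truth signal spread evenly across all d_total features.
beta = rng.normal(size=d_total)
beta /= np.linalg.norm(beta)

def make_data(n):
    X = rng.normal(size=(n, d_total))
    y = X @ beta + rng.normal(0, 0.5, n)
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

# Fit least squares using only the first p features, sweeping p across
# the interpolation threshold at p == n_train.
for p in [10, 25, 50, 75, 90, 100, 110, 150, 200, 400, 600]:
    beta_hat = np.linalg.pinv(X_tr[:, :p]) @ y_tr  # min-norm solution
    test_mse = np.mean((X_te[:, :p] @ beta_hat - y_te) ** 2)
    print(f"p={p:3d}  test MSE={test_mse:.2f}")
# Expected shape: test error blows up approaching p == n_train (the
# classical overfitting peak), then improves again as p grows well past
# it -- the second descent.
```

The `pinv` call matters: past the threshold it picks the minimum-norm interpolator, which is what produces the second descent; an explicitly regularized fit would smooth the peak away.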
I did; however, I have also observed ideally sized models since then, even in algorithms designed with knowledge of double descent.