> Creating something smarter than you was always going to be a sketchy prospect.
Sure, but it's not clear that this has any relevance to the topic at hand. You seem to be taking for granted the assumption that LLMs can ever reach that level.
Maybe all it takes is scaling up, and at some point a threshold gets crossed past which intelligence emerges. Maybe.
Personally, I'm more on board with the idea that since LLMs display approximately zero intelligence right now, no amount of scaling will help, and we need a fundamentally different approach if we want to create AGI.