> that’s just moving the goalposts on what AI can do - it will get there eventually
This is the nutshell of your argument. I’m not convinced. Technologies often hit a ceiling of utility.
Imagine a “progress curve” for every technology, x-axis time and y-axis utility. Not every progress curve is limitlessly exponential, or even linear - in fact, very few are. I would venture to guess that most technological progress actually mimics population growth curves, where a ceiling is hit based on fundamental restrictions like resource availability, and then either stabilizes or crashes.
I don’t think LLMs are the AI endgame. They definitely have utility, but I think your argument boils down to a bold prediction of limitless progress of a specific technology (LLMs), even though that’s quite rare historically.
I agree that the LLM architecture might hit a ceiling (although the trajectory is still upward at present), but I meant deep learning in general.
I do think there is a great deal of VC-baiting hype in statements by Dario and Altman about AI coding, but at the same time the progress has indeed been positive.
We've finally unlocked the secret to learning in machines; the only question is how fast that progress curve rises. Yes, it might stall for a few years, but I think we've really reached an inflection point with these technologies.