They weren't called LLMs, but they had neural networks and hardware optimizations for AI and huge teams of people tirelessly labeling stuff to make it look smarter than it is :)
There is some surprise factor at the GPT-3 -> gpt-4-1106 jump for people who know the history of AI generally and who were around a lab during the ImageNet days, but not as much as everyone is acting like.
The last two years are a notable but by no means unprecedented rush towards the next wall. There's even a term for it: AI Summer is short, AI winter is long, and it's about as predictable as the seasons in Game of Thrones / ASOIAF.