I love how this line of thinking completely avoids the issue re. improvements in local models.
I suppose if you are desperate to justify a large investment, this is what you would do - frame the story in a particular way.
Local models are always going to be useless unless compute gets significantly cheaper, and it isn't. TSMC might literally run out of capacity to build any consumer compute product.
Once compute constraints ease up, you will see much larger models. The reason LLMs seem to have stalled a bit is that there just isn't enough compute.

More people are using AI, which requires more compute; you also want to build larger models, which requires more compute; and compute is limited. What do you do?
Right.. and computers were once the size of a large room vs now fit into a pocket.
> The reason LLMs seem to have stalled a bit is that there just isn't enough compute.
lol okay mate.
> Right.. and computers were once the size of a large room vs now fit into a pocket.
and yet now we have far bigger rooms with far bigger computers anyway
Hardware may improve exponentially, but demand for compute grows even faster. We'll always need more, bigger computers.