Interesting; I’ve always thought neural network progress was primarily bottlenecked by compute.
If it turns out that LLM-like models can produce genuinely useful outputs on something as constrained as a Commodore 64—or even more convincingly, if someone manages to train a capable model within the limits of hardware from that era—it would suggest we may have left a lot of progress on the table. Not just in terms of efficiency, but in how we framed the problem space for decades.
Very, very cool project though!
Not useful in a disaster scenario:
YOU> HELP I'M DROWNING
C64> YOU' HERE!
YOU> OH NO I'M ON FIRE
C64> IGLAY!
YOU> I'M BEING SWALLOWED BY A SNAKE
C64>
YOU> BIRDS ARE NIPPING ON ME
C64> YOU
Reminds me of Terry Davis' random word generator :')
Maybe there is deeper wisdom in there that we have yet to unearth
Next-word prediction features have existed since flip phones...
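For what it's worth, the flip-phone style of prediction needs almost nothing in the way of compute; a minimal sketch (my own toy example, not anything from the project being discussed) is just a bigram frequency table:

```python
# Toy bigram next-word predictor, the basic idea behind
# flip-phone text prediction: suggest the word seen most
# often after the current one in a small training corpus.
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    words = corpus.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts: dict, word: str):
    nxt = counts.get(word.lower())
    return nxt.most_common(1)[0][0] if nxt else None

model = train("the cat sat on the mat and the cat slept")
print(predict(model, "the"))  # "cat" follows "the" most often here
```

A table like this fits comfortably in a few kilobytes for a small vocabulary, which is exactly why it shipped on hardware far weaker than a modern LLM needs.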