Does having 1 billion tokens in the context window mean the extra tokens are actually good quality, or do we just get more dumb tokens?
the article is almost entirely about this, yes.
Current approaches require fancy tricks to fit all the tokens into memory, and they spread attention thinner as the token count grows. The new approach tries to keep everything in a single shared memory space and process the tokens in parallel across multiple GPUs.
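To make the "one shared memory, many GPUs" idea concrete, here's a minimal NumPy sketch (my own illustration, not the paper's implementation) of sequence-parallel attention: each (k, v) shard stands in for the slice of the sequence held by one hypothetical GPU, and the partial results are merged with an online softmax, so the output is exact full attention even though no single device ever holds all the keys.

```python
import numpy as np

def attention_over_shards(q, kv_shards):
    """Exact softmax attention when keys/values are split across devices.

    Each (k, v) pair in kv_shards plays the role of one GPU's slice of the
    sequence. Partial results are merged with a running max and running
    denominator (online softmax), so the result equals attention over the
    full concatenated sequence.
    """
    d = q.shape[-1]
    out = np.zeros_like(q, dtype=np.float64)   # running weighted sum of values
    m = np.full(q.shape[0], -np.inf)           # running max score per query
    z = np.zeros(q.shape[0])                   # running softmax denominator
    for k, v in kv_shards:
        s = (q @ k.T) / np.sqrt(d)             # local scores: (n_q, n_shard)
        m_new = np.maximum(m, s.max(axis=1))
        scale = np.exp(m - m_new)              # rescale earlier accumulators
        p = np.exp(s - m_new[:, None])         # local unnormalized weights
        out = out * scale[:, None] + p @ v
        z = z * scale + p.sum(axis=1)
        m = m_new
    return out / z[:, None]

# Sanity check: sharded result matches ordinary full attention.
rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
shards = [(rng.standard_normal((16, 8)), rng.standard_normal((16, 8)))
          for _ in range(3)]
out = attention_over_shards(q, shards)

k_all = np.vstack([k for k, _ in shards])
v_all = np.vstack([v for _, v in shards])
s = (q @ k_all.T) / np.sqrt(8)
w = np.exp(s - s.max(axis=1, keepdims=True))
ref = (w / w.sum(axis=1, keepdims=True)) @ v_all
assert np.allclose(out, ref)
```

The point of the merge step is that each shard only ever materializes its own slice of the score matrix, which is what lets the sequence scale with the number of devices instead of the memory of one.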