Per my initial reading, this is not only faster when working with long context, but also much more memory-efficient at storing it!

Super excited for a ~30B version.