Can someone ELI5 why the safetensors file is 23.8 GB, given that it's a 12B parameter model? Does the model use closer to 24 GB of VRAM or 12 GB of VRAM? I've always assumed 1 billion parameters ≈ 1 GB of VRAM. Is that estimate inaccurate?
Quick napkin math assuming bfloat16 format: 1B params * 16 bits = 16B bits = 2 GB. Since it's a 12B parameter model, you get ~24 GB. Downcasting from float32 to bfloat16 comes with pretty minimal performance degradation, so we uploaded the weights in bfloat16 format.
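If it helps, here's that napkin math as a tiny Python sketch (the bytes-per-parameter table and the 12B figure are just illustrative; actual VRAM use at runtime is higher once activations and other buffers are added):

```python
# Rough file/weight size: parameter count x bytes per parameter.
# Ignores runtime overhead (activations, caches), which adds to VRAM use.
BYTES_PER_PARAM = {"fp32": 4, "bf16": 2, "fp16": 2, "fp8": 1}

def weights_size_gb(num_params: float, dtype: str) -> float:
    return num_params * BYTES_PER_PARAM[dtype] / 1e9

for dtype in ("fp32", "bf16", "fp8"):
    print(f"12B params in {dtype}: ~{weights_size_gb(12e9, dtype):.0f} GB")
# 12B params in fp32: ~48 GB
# 12B params in bf16: ~24 GB
# 12B params in fp8: ~12 GB
```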
A parameter can be any size float. Lots of downloadable models are FP8 (8 bits per parameter), but it appears this model is FP16 (16 bits per parameter).
Often, training is done in FP16 and then the weights are quantized down to FP8 or FP4 for distribution.
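As a rough sketch of that "train high, ship lower" idea (assuming PyTorch; real FP8/FP4 release pipelines involve scaling/calibration and aren't a straight cast like this):

```python
import torch

# Pretend these are trained weights held in full precision.
w_fp32 = torch.randn(4, 4)

# Straight downcast to bfloat16: 2 bytes/param instead of 4.
w_bf16 = w_fp32.to(torch.bfloat16)

print(w_fp32.element_size(), w_bf16.element_size())  # 4 2
print((w_fp32 - w_bf16.float()).abs().max())         # small rounding error
```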
I think they are bfloat16, not FP16, but they are both 16bpw formats, so it doesn't make a size difference.
Wiki article on bfloat16 for reference, since it was new to me: https://en.wikipedia.org/wiki/Bfloat16_floating-point_format
Pardon the ignorance, but this is the first time I've heard of bfloat16.
I asked chat for an explanation and it said bfloat16 has a higher dynamic range (like FP32) but less precision.
What does that mean for image generation, and why was bfloat16 chosen over FP16?
My fuzzy understanding (and I'm not at all an expert on this) is that the main benefit is that bf16 is less prone to overflow/underflow during calculation, which is a bigger source of problems in both training and inference than the simple loss of precision. So once it became widely supported, it became a commonly preferred format for models (whether image gen or otherwise) over FP16.
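One way to see the trade-off concretely, assuming PyTorch is installed (this is just a generic illustration, nothing specific to this model):

```python
import torch

# Dynamic range vs. precision of the 16-bit formats (fp32 for reference).
for dt in (torch.float16, torch.bfloat16, torch.float32):
    info = torch.finfo(dt)
    print(dt, "max:", info.max, "eps:", info.eps)
# float16:  max ~6.55e4,  eps ~9.8e-4  (small range, more precision)
# bfloat16: max ~3.39e38, eps ~7.8e-3  (fp32-like range, less precision)
# float32:  max ~3.40e38, eps ~1.2e-7

# Practical upshot: a value that overflows fp16 is still representable in bf16.
x = torch.tensor(70000.0)
print(x.to(torch.float16))   # tensor(inf, dtype=torch.float16)  -- overflow
print(x.to(torch.bfloat16))  # tensor(70144., dtype=torch.bfloat16) -- rounded, but finite
```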
That's a good ballpark for something quantized to 8 bits per parameter, but you can 2x/4x that for 16-bit and 32-bit.
I've never seen a 32-bit model. There are bound to be a few of them, but it's hardly a normal precision.
Some of the most famous models were distributed as F32, e.g. GPT-2. As things have shifted more towards mass consumption of model weights it's become less and less common to see.
> As things have shifted more towards mass consumption of model weights it's become less and less common to see.
Not the real reason. The real reason is that training has moved to FP16/BF16 over the years as NVIDIA made those formats more efficient in its hardware, which is the same reason you're starting to see some models released in 8-bit formats (e.g. DeepSeek).
Of course people can always quantize the weights to smaller sizes, but the master versions of the weights are usually 16-bit.
And on the topic of image generation models, I think all the Stable Diffusion 1.x models were distributed in f32.