FP32 is best, although I wonder if there's something better out there that I don't know about. Q8 is for the most part equal to FP16 in practical terms, because it's smart about what gets quantized (the most sensitive tensors can stay at higher precision), but iirc it's always slower than FP16 and FP8.
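To make the quality comparison concrete, here's a minimal sketch (my own illustration, not from any specific library) of symmetric per-tensor int8 quantization, the basic idea behind Q8-style formats, next to a plain fp16 cast. It measures the reconstruction error each introduces on a random weight tensor:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float32)  # stand-in for a weight tensor

# Symmetric per-tensor int8 ("Q8"-style) quantization:
# one float scale maps the int8 range [-127, 127] onto the tensor's range.
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_q8 = q.astype(np.float32) * scale  # dequantized values

# fp16 is just a precision cast; no scale factor involved.
w_fp16 = w.astype(np.float16).astype(np.float32)

err_q8 = float(np.abs(w - w_q8).mean())
err_fp16 = float(np.abs(w - w_fp16).mean())
print(f"mean abs error  q8: {err_q8:.6f}  fp16: {err_fp16:.6f}")
```

Per tensor, raw int8 error is larger than fp16's, which is why practical Q8 schemes add finer-grained scales (per block or per channel) and keep the touchiest tensors in higher precision; that selective treatment is what closes most of the quality gap.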