Look for something in the 500M-3B parameter range. 3B might push it...
SmolVLM is pretty useful. https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct
It is feasible to run 7B and 8B models at Q6_K in 8GB of VRAM, or drop to Q5_K_M/Q4_K_M if you need to free up some VRAM for other things. With Q4_K_M you can run 10B and even 12B models.
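To sanity-check whether a model fits, you can estimate weight size from parameter count and bits per weight. A rough sketch, assuming typical average bits-per-weight figures commonly reported for llama.cpp GGUF quants (the exact values vary slightly by model architecture):

```python
# Rough VRAM estimate for quantized GGUF model weights.
# Bits-per-weight values are approximate averages (assumptions,
# based on commonly reported llama.cpp quant sizes).
BPW = {
    "Q4_K_M": 4.85,
    "Q5_K_M": 5.69,
    "Q6_K": 6.56,
    "Q8_0": 8.50,
}

def model_gib(params_billions: float, quant: str) -> float:
    """Approximate weight size in GiB; excludes KV cache and runtime overhead."""
    bytes_total = params_billions * 1e9 * BPW[quant] / 8
    return bytes_total / 2**30

for n, q in [(7, "Q6_K"), (8, "Q6_K"), (10, "Q4_K_M"), (12, "Q4_K_M")]:
    print(f"{n}B @ {q}: ~{model_gib(n, q):.1f} GiB weights")
```

A 7B model at Q6_K lands around 5.3 GiB of weights, leaving headroom in 8GB for KV cache and context; a 12B at Q4_K_M is about 6.8 GiB, which is why it's near the ceiling.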