DwarfStar4 is a small LLM inference runtime that can run DeepSeek 4. The blog post implies that it currently requires 96GB of VRAM.

For others who are lacking context :-)

Thanks. Outside of LLM circles, DS4 is usually a video game controller.

Well, I was sitting here expecting the Redis creator to have an opinion on the still-unannounced Dark Souls 4.

Haha, same here!!

Or a car from Citroën

Technically DS is an independent sibling of Citroën within Stellantis, a sprawling car conglomerate that owns a dog’s dinner of car brands across Europe and the US.

It's still the Lexus to Citroën's Toyota.

If we want to get really technical, “DS4” is a model from Citroën, and they later spun the DS lineup out into its own brand, with the “Citroën DS4” becoming the “DS 4”: “DS” being the make and “4” the model.

And even more pedantically, DS has recently adopted a new naming scheme where the former DS 4 is now written as DS N°4, pronounced "number 4"...

Their stated inspiration for this SEO bomb is Chanel perfumes.

Pavlov's dog's dinner?

Trekkies are experiencing a major regression from Deep Space Nine.

They never should have trusted Quark.

I am actually kind of disappointed it wasn't a deep dive on the DualShock 4.

That's the Flash version, not the full model, and only at ~Q2-3 quantization, so while impressive it's still quite different from the full model.

Not really. I'm now building another fast C compiler with DeepSeek 4 Flash, and I rarely have to step outside it to use Pro, Sonnet, GPT, or Kimi 2.6. Flash is very capable at almost everything.

> The blog post implies that it currently requires 96GB of VRAM.

Has anyone tested what happens if you try to run this on lower-RAM Macs? It might work and just be a bit slower, as it falls back to fetching model layers from storage.

It'd be way slower, since you'd be doing that work for every token.

True (with 64GB of RAM it'd already have to fetch ~20% of its active experts from disk, about 650MB/token at 2-bit quantization, and that percentage rises quickly as you lower RAM further). My question is more practical: does it run at all, how bad is the slowdown, and to what extent could you win some of that decode throughput back by running multiple (slower) agent sessions in parallel under a single DwarfStar4 server?
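For anyone who wants to poke at the numbers, here's the back-of-the-envelope model behind those figures as a small Python sketch. Every constant is inferred from this comment's own figures (the 20% miss rate at 64GB, the ~650MB/token), not taken from the DwarfStar4 repo, and it assumes experts are hit roughly uniformly:

```python
# Back-of-the-envelope sketch of the disk-fetch math above. All
# constants are inferred from the figures in this comment, not from
# the DwarfStar4 repo; expert usage is assumed roughly uniform.

TOTAL_EXPERT_GB = 64 / (1 - 0.20)   # ~80 GB of expert weights at 2-bit
ACTIVE_GB_PER_TOK = 0.650 / 0.20    # ~3.25 GB of experts touched per token

def disk_gb_per_token(ram_gb: float) -> float:
    """Miss rate = fraction of expert weights that don't fit in RAM."""
    miss = max(0.0, 1.0 - ram_gb / TOTAL_EXPERT_GB)
    return miss * ACTIVE_GB_PER_TOK

for ram in (96, 80, 64, 48, 32):
    print(f"{ram:>3} GB RAM -> ~{disk_gb_per_token(ram) * 1000:.0f} MB/token from disk")
```

At a few GB/s of NVMe read bandwidth, that per-token traffic is what caps single-session decode speed, which is why batching several slower sessions against one server might recover some aggregate throughput.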

Thanks. How is DwarfStar4 different from llama.cpp?

I knew Death Stranding 3 wasn't out yet!

>The blog post implies that it currently requires 96GB of VRAM.

From the GitHub page it seems it only supports Apple and DGX Spark. I have 128 GB of RAM and a 3090, but it probably won't work.

FYI, llama.cpp (which antirez/ds4 is inspired by) supports system RAM. E.g. [1] is a good guide to running a similarly sized model with 128GB of RAM and a 3090-class GPU.

[1] https://unsloth.ai/docs/models/tutorials/minimax-m27

(Unsloth's deepseek-v4 support is still WIP)
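For a concrete starting point, here's a minimal sketch of that GPU+CPU split using the llama-cpp-python bindings. The model path and layer count are illustrative, not from the guide; the CLI tools expose the same knob as -ngl/--n-gpu-layers:

```python
# Minimal GPU+CPU split sketch with llama-cpp-python. The model path
# and layer count are illustrative; tune n_gpu_layers down until the
# offloaded layers fit in the 3090's 24 GB, and the rest stays in RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="minimax-m2.7-Q4_K_M.gguf",  # hypothetical GGUF quant
    n_gpu_layers=30,                        # layers offloaded to the GPU
    n_ctx=8192,                             # context window
)

out = llm("Summarize MoE offloading in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```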

Thanks, I can run Qwen 3.6 27B with vLLM, but I was curious about antirez's tool.

Have you had it get stuck in endless loops, maybe ~10-20% of invocations? It seems to happen with both the Responses and Chat Completions APIs, and no matter what inference parameters I try, it happens for at least 1 in 10 requests. I've tried every compatible vLLM version and am currently using it from git (#main), yet the issue persists.

It seems to happen with various quantizations too, even the NVFP4 versions, so it looks like a deeper issue to me, or perhaps a hardware incompatibility.
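For reference, these are the kinds of knobs I've been varying, via vLLM's offline API. The model id and values are illustrative, and none of them reliably stopped the loops:

```python
# Anti-loop sampling knobs I've been varying, for reference. Model id
# and values are illustrative; none reliably stopped the loops.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen3.6-27B")  # placeholder model id
params = SamplingParams(
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.15,  # >1.0 penalizes already-generated tokens
    frequency_penalty=0.3,    # extra penalty scaled by repeat count
    max_tokens=2048,          # hard cap so a runaway loop still terminates
)
print(llm.generate(["ping"], params)[0].outputs[0].text)
```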

It wouldn’t be useful with your setup, probably 3-4 token per second.
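Roughly where an estimate like that comes from: decode on a bandwidth-bound MoE is about efficiency × memory bandwidth ÷ active bytes per token, and every constant in this sketch is a guess for illustration, not a measured figure:

```python
# Rough decode-speed estimate for CPU+RAM inference. Every constant
# here is a guess for illustration, not a measured figure.
ram_bandwidth_gb_s = 50.0  # assumed dual-channel desktop DDR4/DDR5
efficiency = 0.4           # assumed fraction of peak bandwidth achieved
active_gb_per_tok = 6.5    # assumed: ~3.25 GB at 2-bit, doubled at 4-bit

print(f"~{efficiency * ram_bandwidth_gb_s / active_gb_per_tok:.1f} tok/s")
# prints ~3.1 tok/s, in the ballpark of the estimate above
```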

Yep, maybe I can open a feature request if it makes sense technically.

Arguably it makes more sense technically to get the model support into llama.cpp, which provides many options for GPU+CPU split inference already.