> We believe that all agents will look more like this in the future - long-running, asynchronous, more autonomous. Specifically, we think that they will:
> Run asynchronously in the cloud
> cloud
Reality check:
https://huggingface.co/Menlo/Jan-nano-128k-gguf
That model will run, with decent conversation quality, at roughly the same memory footprint as a few Chrome tabs. It's only a matter of time until we get coding models that can do that, and then only a further matter of time until we see agentic capabilities at that memory footprint. I mean, I can already get agentic coding with one of the new Qwen3 models - super slowly, but it does work. And the quality matches or even beats some of the cloud models and vibe coding apps.
And that model is just one example. Researchers all over the world are making new models almost daily that can run on an off-the-shelf gaming computer. If you have a modern Nvidia graphics card, you can run AI on your own computer totally offline. That's the reality.
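For anyone who wants to try that themselves, here's a minimal sketch of what running a GGUF like that locally looks like with llama-cpp-python (the file name, context size, and offload settings are placeholders, adjust them for your hardware):

    # Minimal local-inference sketch using llama-cpp-python.
    # The model path and settings below are placeholders, not recommendations.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./jan-nano-128k-Q4_K_M.gguf",  # hypothetical local GGUF file
        n_ctx=32768,       # context window; raise it if you have the memory
        n_gpu_layers=-1,   # offload every layer to the GPU if you have one
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "What is a KV cache?"}]
    )
    print(out["choices"][0]["message"]["content"])

There is no cloud round-trip anywhere in that loop.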
I'm also excited for local LLMs to be capable of assisting with nontrivial coding tasks, but we're far from reaching that point. VRAM remains a huge bottleneck; even a top-of-the-line gaming PC struggles to run them. The best options these days for agentic coding that get close to the vibe check of frontier models seem to be Qwen3-Coder-480B-A35B-Instruct, DeepSeek-Coder-V2-236B, GLM 4.5, and GPT-OSS-120B. The latter is the only one capable of fitting on a 64 to 96GB VRAM machine with quantization.
Of course, the line will always be pushed back as frontier models incrementally improve, but the quality is night and day between these open models consumers can feasibly run versus even the cheaper frontier models.
That said, I too have no interest in this if local models aren't supported, and I hope that's in the pipeline just so I can try tinkering with it. It does look like it utilizes multiple models for various tasks (planner, programmer, reviewer, router, and summarizer), though, which only adds to the VRAM bottleneck if you'd like to load a different model per task. So I think it makes sense for them to focus on just Claude for now to prove the concept.
edit: I personally use Qwen3 Coder 30B 4bit for both autocomplete and talking to an agent, and switch to a frontier model for the agent when Qwen3 starts running in circles.
> and GPT-OSS-120B. The latter is the only one capable of fitting on a 64 to 96GB VRAM machine with quantization.
Tiny correction: Even without quantization, you can run GPT-OSS-120B (with full context) on around 60GB of VRAM :)
Hm I don't think so. You might be thinking about the file size, which is ~64GB.
> Native MXFP4 quantization: The models are trained with native MXFP4 precision for the MoE layer, making gpt-oss-120b run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the gpt-oss-20b model run within 16GB of memory.
Even if you _could_ fit it within ~60GB of VRAM, the extra VRAM required as context lengths and prompt sizes grow would make it OOM pretty quickly.
edit: Ah, and MXFP4 is itself a quantization, just supposedly closer to the original FP16 than the other quants while having a smaller VRAM requirement.
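edit 2: rough arithmetic for why context length matters so much here (the layer/head numbers are made-up illustrative values, not gpt-oss-120b's actual architecture):

    # Back-of-the-envelope KV-cache growth with context length.
    # Architecture numbers are illustrative only, not gpt-oss-120b's real config.
    def kv_cache_gb(n_layers, n_kv_heads, head_dim, context_len, bytes_per_elem=2):
        per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem  # K + V
        return per_token * context_len / 1e9

    for ctx in (8_192, 32_768, 131_072):
        print(ctx, round(kv_cache_gb(36, 8, 64, ctx), 2), "GB")

That cache sits on top of whatever the weights themselves need, which is why the file size alone doesn't tell you the VRAM requirement.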
> Hm I don't think so. You might be thinking about the file size, which is ~64GB.
No, the number I put above is literally the VRAM usage I see when I load 120B with llama.cpp; it's a real-life number, not theoretical :)
Data storage has gotten cheaper and more efficient/manageable every year for decades, yet people seem content with having less storage than a mid-range desktop from a decade and a half ago, split between their phone and laptop, and leaving everything else to the "> cloud" - I wouldn't be so sure we're going to see people reach for technological independence this time either.
One factor here is people preferring portable devices. Note that portable SSDs are also popular.
Also, usage patterns can be different; with storage, if I use 90% of my local content only occasionally, I can archive that to the cloud and continue using the remaining local 10%.
Do you know what "MCP-based methodology" is? I am skeptical of a 4B model scoring twice as high as Gemini 2.5 Pro
From the paper:
> Most language models face a fundamental tradeoff where powerful capabilities require substantial computational resources. We shatter this constraint with Jan-nano, a 4B parameter language model that redefines efficiency through radical specialization: instead of trying to know everything, it masters the art of finding anything instantly. Fine-tuned from Qwen3-4B using our novel multi-stage Reinforcement Learning with Verifiable Rewards (RLVR) system that completely eliminates reliance on next token prediction training (SFT), Jan-nano achieves 83.2% on SimpleQA benchmark with MCP integration while running on consumer hardware. With 128K context length, Jan-nano proves that intelligence isn't about scale, it's about strategy.
> For our MCP evaluation, we used mcp-server-serper which provides google search and scrape tools
https://arxiv.org/abs/2506.22760
Yeah, I know about the Model Context Protocol. But it's still only a small part of the AI puzzle. I'm saying that we're at a point now where a whole AI stack can run, in some form, 100% on-device with okayish accuracy. When you think about that, and where we're headed, it makes the whole idea of cloud AI look like a dinosaur.
I mean, I am asking what "MCP-based methodology" is, because it doesn't make sense for a 4B model to outperform Gemini 2.5 Pro et al by that much.
I'm not too sure what "MCP-based methodology" is, but Jan-nano-128k is a small model specifically designed to be able to answer in-depth questions accurately via tool-use (researching in a provided document or searching the web).
It outperforms those other models, which are not using tools, thanks to that tool use and its specialization.
Because it is only 4B parameters, it is naturally terrible at other things, I believe; it's not designed for them and doesn't have enough parameters.
In hindsight, "MCP-based methodology" likely refers to its tool-use.
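The gist of that kind of tool-use evaluation, stripped down to a schematic loop (local_model and web_search here are hypothetical stand-ins, not the actual Jan-nano or mcp-server-serper interfaces):

    # Schematic tool-use loop: the small model decides what to search for,
    # and the retrieved results do most of the factual heavy lifting.
    # `local_model` and `web_search` are hypothetical callables, not real APIs.
    def answer_with_tools(question, local_model, web_search, max_rounds=3):
        notes = []
        for _ in range(max_rounds):
            step = local_model(
                f"Question: {question}\nNotes so far: {notes}\n"
                "Reply with a search query, or ANSWER if you have enough."
            )
            if step.strip() == "ANSWER":
                break
            notes.append(web_search(step))  # the tool call, e.g. exposed via MCP
        return local_model(f"Question: {question}\nNotes: {notes}\nFinal answer:")

A 4B model that can drive that loop well can beat a much larger model answering purely from its weights, at least on lookup-style benchmarks like SimpleQA.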