The idea of having local LLMs accessible in the browser for privacy reasons is nice, I guess, but if each browser attaches a different model to this API, testing becomes even more of a nightmare than it is now. I wonder if this will drive more users toward Chrome, since most usages of this API might end up tailored to fit the Gemini Nano model?

@tom1337 The testing fragmentation is the real problem here. Prompts are not model-agnostic in practice: a carefully tuned prompt for Gemini Nano 3 v2025 will silently degrade on whatever Gecko ships, and the API gives you no capability introspection to branch on. This is actually worse than the WebGL situation, where you could at least query extension support. Shipping a feature whose quality depends on prompting an unnamed model, versioned behind the browser, is closer to shipping a feature that depends on the user's installed dictionary.
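To make the introspection gap concrete, here's a sketch of the most defensive probe you can write today, assuming the shape of Chrome's proposed Prompt API (a `LanguageModel` global with an `availability()` method; other browsers may expose nothing at all, and the exact surface could change before standardization):

```javascript
// Defensive feature detection for the proposed Prompt API.
// "LanguageModel" is the Chrome-proposed global; everything is
// guarded because other browsers may expose nothing.
async function promptApiStatus() {
  const LM = globalThis.LanguageModel;
  if (!LM || typeof LM.availability !== "function") {
    return "unsupported"; // non-Chrome, or flag off
  }
  // Proposal returns one of:
  // "available" | "downloadable" | "downloading" | "unavailable"
  return await LM.availability();
}

// The key point: there is no LM.modelName / LM.modelVersion,
// so you can detect *whether* a model exists, but not *which*
// model your prompt will hit -- you cannot branch prompts per
// model the way you branch shaders per WebGL extension.
```

So the best you can do is "is there some model here?", which tells you nothing about whether your tuned prompt will behave. That's the fragmentation trap: the only remaining branch point is UA sniffing.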