This API seems perfect for an idea I've had for a while: a de-snarkifier for social media.
Social media can be intellectually stimulating and educational, but it's also easy to get sucked into ideological sniping and flamewars, even if you didn't go looking for it. The emotional and intellectual energy spent flaming strangers on the Internet is a complete waste of human capital.
With an API like this, I assume you could have a browser extension that could de-snarkify content before showing it to you. You could ask the LLM to preserve all factual content from the post, but to de-claw any aggressive or snarky language. If you really wanted to have fun, you could ask it to turn anything written in an aggressive tone into something that sounds absurd or incompetent, so that the more aggressive the post, the more it would make the author look silly.
This could have a double benefit. For the reader, it insulates them from the personal attacks of random strangers on the Internet. Don't get me wrong, there is a time and a place for real, charged arguments about important issues that affect us all. But there is little to be gained from having those fights with strangers; on the contrary, I think it poisons the body politic when strangers are screaming at each other.
For the writer, it takes away any incentive to be snarky or rude. If other people filter their content this way, there's no point in trying to be mean to them, and no "race to the bottom" for who can be more nasty.
Kinda looking forward to something like this, as it has the potential to remove empty junk calories from the internet, hopefully leading to SIGNIFICANTLY less use of today's popular platforms.
My wish list:
- Eliminate ALL clickbait titles and ads. I only want to see a dry factual title.
- For any given topic, I only care about the main article (with the option to only see a summary, unless its a high quality blog) and couple of substantive comments, rest is junk I don't want to see.
The current state of popular social media sites has meant that I don't use it at all (except HN, which is trending in the same direction due to saturation with AI), but every other week or so I end up wasting a few hours, which I'd like to avoid entirely.
Ideally this would lead to 98% of content filtered/summarised out, and over time only use the internet for looking things up with intention. I want this to remove majority of "entertainment" value from the internet (by default) so that time/energy can be refocused in real life and high quality sources (books) only.
I actually have built myself a personal AI agent that does this for nthe main news headlines and for a summary of my personal email (sadly I can’t run it on work email yet). It can extract any actions required from a mail and make them into tasks, and also has a killer feature - a “sort out my email” button that archives all the emails it classifies as FYI, spam, mailing list or moot (it has classifiers for this), first producing a one-pager markdown summary of the whole lot in one shot, leaving all emails marked “action required” or “Urgent”. Email summaries are deliberately dry and factual with all advertising false urgency removed.
I can manually “hold” emails so they don’t go in the “sort out my email” woodchipper. It’s been life-changing.
Or just ignore it. Or say you will not engage under [conditions]. Ultimately it will be you who looks foolish when the AI rewrote something incorrectly and you engaged with something that wasn't being said.
That is unironically exactly what I want from social media.
I want the option to engage with the substance of new developments in the world, technology, etc. without the drama. I don't want to be drawn into the drama of strangers (who could, for all I know, just be bots or ragebaiting AIs).
If I want drama, there's plenty of it on TV, or I could talk to my friends about what is going on with people I actually know.
The anti-pattern, in my mind, is logging on to engage with substantive content and to be inadvertently drawn into flamewars with strangers.
Are humans supposed to enjoy the "flavor" of diarrhea, as the result of giving every village idiot a microphone so they can spew shit from their mouths?
Sure, you might say this sort of thing is boiling flavor out of your food, but... boiling the bacteria out of what you consume isn't a bad thing.
For YouTube, this already exists and I‘m using it. The extension is caller DeArrow and aims to reduce sensationalism via crowdsourcing, though I wouldn’t be surprised if top contributors are bots using LLMs.
Man, that before-after slider on the home page makes me so sad... YouTube used to just be random people sharing cool stuff, and those de-sensationalized titles really brought me back to that time for a second! Cool stuff.
For people like me had tried it in the past and found it annoying, note that it now has a 'casual' mode where it only changes the truly useless titles and leaves reasonable ones alone.
But... It's the type of idea that is unpredictable as it comes into contact with reality. If it works, it probably works very differently from the initial idea of how it will work.
Yeah, I 100% agree with the caution in this comment.
I see the merit in such a proposal. It's the linguistic equivalent to boiling the food you consume, instead of eating it raw with all the associated bad stuff.
The problem is, as you said, that this plan is unlikely to be as rosy as it's portrayed and probably has a lot of drawbacks in real life.
I wasn't even talking about drawbacks, though that applies too.
I mean... you would be basically taking a complex thing, transforming and reconstructing it. What we want out of social media isn't a simple, legible function. The positives. You'd have to discover them.
If someone starts building with the intitial idea above, my guess is that they'd end up with some sort of custom feed that draws inspiration and inputs from social media... but isn't social media. It's something else that you can scroll, read and whatnot.
That is exactly what I want. A boring but factual summary of useful nuggets from the mountain of shite that is ALL of social media. For example, on any given day, reddit/X/Bluesky/HN only has a couple of paragraphs worth of stuff that I care to know about. I want to train my brain to equate the internet with something boring that's only worth visiting when I need to look up information. I want this tech to reduce my (and hopefully others') use of internet to down by 98%.
I want to go to news.ycombinator.com/reddit.com/etc on any given day and just see a couple of paragraphs and maybe a few reference links to follow if I so choose. Spend a few minutes reading that and close it.
All of that in the hope of diverting my limited time/energy on Earth to endeavours in real life with real people.
Don't you think its better to just curate your social media and follow communities where the default is not toxicity? This is basically a distortion layer for reality and will just encourage more echo chambers.
Also what is toxic to one person is not toxic to another depending on their subjective choices. How will you solve for this without everyone just seeing what they want to see even if reality is not like that? I feel that will just enhance the problems of social media than reduce it.
It kind of falls apart when you start to think of edge cases rather than "hey this tool will keep morons off my feed!" mentality
Perhaps we could have one column of text that contains the content with no tone, and a second column of text that contains only the tone with no content.
Really? Not having to face any pushback would be better?
Half the reason people steelman others' arguments is for the emotional exercise of being able to accept opposing views. And you want to throw that away so you dont have to overcome a little friction in your day? Even though doing so improves you
I think pushback is different from snarky and/or aggressive. The devil's in the details I can imagine many ways to disagree with someone that would get past this tool as described.
Think about actual human psychology for a minute- modern humans are nothing like people from 500 or 1000 years ago. Before instant communication around the globe, behavior was not anonymous. You ran your mouth off, you get socially punished in your village.
Life was both more harsh (you can randomly die from an infection, etc) but also more psychologically healthier in certain ways. You had much more of a sense of "belonging" within your clan/village/etc. Being socially ostracized was a real punishment, not just people casually running off their mouths.
I think the allegations of "snowflake" would be really interesting if you flip the assumption on its head. (And I've spent plenty of time on 4chan, nothing you say can hurt me). Instead, assume "snowflake" is actually the intended default for human psychological health; and flip other assumptions, like assume groupthink is actually an evolutionary survival strategy... and then see what conclusions you draw from that.
haberman's requested translation (that would cause the comment above to be filtered out): this stranger on the internet has nothing useful to add and so their comment does not appear.
How do you envision short term and long term target usage of it?
And do you guys communicate between other browsers when doing something like this to try to settle on something common? I don't mean W3C but practically, it's a small world after all.
I can't speak for "you guys" anymore, as I'm retired, but from my personal perspective/recollection:
The target usage for the prompt API is anything that would benefit from the general capabilities of a language model, and can't be encompassed by the more-specific APIs for summarization/writing/rewriting. Realistic use cases currently are things like sentiment analysis, keyword extraction, etc. I have a number of ideas on how to integrate it into my current retirement project around Japanese flashcards, e.g. generating example sentences. If the small (~10 GiB) model class keeps getting smarter, the class of things possible on-device in this way gets larger and larger over time.
But overall, yeah, the goal with the prompt API, as with all web APIs, is to put something out there for discussion as early as possible, and get input from the broad community, especially including other browsers, to see if it's something that they are interested in collaborating on. https://www.chromium.org/blink/guidelines/web-platform-chang... (which I also wrote) goes into how the Chromium project thinks about such collaboration in general.
This looks like it uses Gemini Nano under the hood. But the latest Gemma4 E2B and E4B models appear to be much better, so you'd probably be better off deploying quantized versions through an extension for now.
I no longer have any inside knowledge, but from my time on this team they were very quick about getting the latest small (Google) models into Chrome. I expect that if Gemma 4 (or its equivalent Gemini Nano) isn't already in Chrome, then it will be soon.
Note that the article here was last updated 2025-09-21, and as of that time it was already on Gemini Nano 3.
It works, I've shipped this as a "local inference"/poor person's ollama for low-end llm tasks like search. The main win is that it's free and privacy preserving, and (mostly) transparent to users in that they don't have to do anything, which is great for giving non-technical users local inference without making them do scary native things.
But keep in mind the actual experience for users is not great; the model download is orders of magnitude greater than downloading the browser itself, and something that needs to happen before you get your first token back. That's unfixable until operating systems start reliably shipping their own prebaked models that an API like this could plug into.
Is it actually privacy preserving? Chrome mostly exists to extract all the information from a user it can without immediately getting a lawsuit of greater penalty than what is gained through ads, etc. Android isn't too far off either. I would welcome any alternative to this. I can see applications for this being things like "while device is at rest and charging summarize all of the users recent text communications" or whatever else as a legal loop hole for wiretap laws
> But keep in mind the actual experience for users is not great; the model download is orders of magnitude greater than downloading the browser itself, and something that needs to happen before you get your first token back.
With MoE models, you could fetch expert layers from the network on demand by issuing HTTP range queries for the corresponding offset, similar to how bittorrent downloads file chunks from multiple hosts. You'd still have to download shared layers, but time to first token would now be proportional to active-size rather than total-size. Of course this wouldn't be totally "offline" inference anymore, but for a web browser feature that's not a key consideration.
> With MoE models, you could fetch expert layers from the network on demand
This is a common misconception, probably due to the unfortunate naming. Expert layers are not "expert" at any particular subject, and active-size only refers to the activated layers per token. You'd still need all (or most of all) the layers for any particular query, even if some layers have a very low chance of being activated.
All in all, you'd be better off with lazy loading the entire model, at least you'd know you have the capability to run inference from then on.
Ultimately it would amount to lazy-loading the model, but the parameters themselves would be fetched from the network as needed, which still decreases time-to-first-token. It's true that "expert" choices will span most of the model, regardless of any particular "subject" or "topic" choice, but if we simply care about time-to-first-token it's still a viable strategy.
Would it be less dystopian for Operating Systems to ship with their own browser that ships with their own models? Or do you find the current situation where Operating Systems ship with browsers dystopian?
> It works, I've shipped this as a "local inference"/poor person's ollama for low-end llm tasks like search
fantastic!
> the model download is orders of magnitude greater than downloading the browser itself, and something that needs to happen before you get your first token back
sure but does this mean the model is lazily downloaded? that is, if I used this and I am the first time the model was called, the user would be waiting until the model was downloaded at that point?
that sounds like a horrible user experience - maybe chrome reduces the confusion by showing a download dialog status or similar?
The model download is lazy and cached, so it's a one-time cost presumably across all origins (I assume so since the alternative would be a trivial DoS waiting to happen).
So it's once per browser, not once per site.
You can track the download state yourself and pop whatever UI you want.
chrome://on-device-internals reports "Model Name: v3Nano Version: 2025.06.30.1229 Folder size: 4,072.13 MiB" on a random Windows machine I just checked.
Thank You stranger! I would have assumed the size would vary based on whether your hardware supports the high-quality GPU backend (4 GB) or defaults to a smaller CPU-compatible version (3 GB) but the 22GB note on that page is really confusing. Even if it was including the model server where's the remaining 18GB going towards?
I'd imagine that the 22GB was decided through modelling various scenarios. For a start, it's not just a 4GB current model, it's 2x4GB to be able to update it without needing time when the computer is without a model, that's up to 8GB.
Then it's possible the model you get will scale with the CPU/GPU/RAM available, so if you have a 12GB GPU you probably get a better model, perhaps that's a 10-11GB model? At 2x that's 22GB.
Then consider that a machine is not static, GPUs/hardware come and go, VRAM allocation in integrated graphics changes, etc. You end up with just needing to pick a number and not confuse users.
This is part of it, and also we just didn't want to use up the last of the user's disk space! It's disrespectful to use up 3 GB if the user only has 4 GB left; it's sketchy if the user only has 10 GB. At 22 GB, we felt there was more room to breathe.
One could argue that users should have more agency and transparency into these decisions, and for power users I agree... some kind of neato model management UI in chrome://settings would have been cool. But 99% of users would never see that, so I don't think it ever got built.
> `> Storage: At least 22 GB of free space on the volume that contains your Chrome profile.`
Yes, I can read and comprehend English and you should assume I read the page. Because of the "At least" wording, I was curious what a person who has actually used the feature has noticed, aka, learning from people who have actually done it already.
I think it's a step into a future of proper Model API.
But it's just a small step.
It reminds me of Apple's Foundation Models [1]
While many AI integrations are focused on text communication / chat style.
A lot of software benefits from non-text interfaces.
I believe at some point OSes and browsers should provide an API to manage models so you'll have access to on-device/remote ones with a simplified interface for the app.
Making something standardized that is cross-platform would be fantastic. It also needs to be on mobile devices, so the players that can easily make it happen are mostly Apple and Google.
(Meta will follow or vice-versa I guess)
Key-point: it shouldn't be exclusive to promoted models.
The idea of having local LLMs accessible in the browser for privacy concerning is nice i guess but when each browser has a different model attached to this API testing becomes even more a nightmare then now. I wonder if this will drive more users towards chrome because most of the usages of this API might be just tailored to fit the Gemini Nano model?
@tom1337 The testing fragmentation is the real problem here. Prompts are not model-agnostic in practice - a carefully tuned prompt for Gemini Nano 3 v2025 will silently degrade on whatever Gecko ships, and the API gives you no capability introspection to branch on. This is actually worse than the WebGL situation, where at least you could query extension support. Shipping a feature that depends on prompt quality against an unnamed, versioned-behind-the-browser model is closer to shipping a feature that depends on the user's installed dictionary.
It's a tiny script that looks up the rss feed and uses the content to generate summaries; quite a nice fit with our static site. Sometime I'd like to extend it to ask different questions about the content.
Seems like a good way for a rogue JS script to offload token generation to a bunch of unsuspecting visitors
It would actually be pretty interesting to see if its possible to decentralize the compute to generate something useful from a larger prompt broken down and sent to a bunch of browsers using a subagent pattern or something like RLM, each working on a smaller part of the prompt
This feels like a lot of work for low reward, the technical/business infrastructure would be wild. And if anyone wants to offload their prompts to users browsers, they might as well just use the Chrome API correctly? How many server side prompts would realistically be useful to offload to a low end model like this?
Plus even if you really wanted to do that, WebGPU exists and has for a while right?
> How many server side prompts would realistically be useful to offload to a low end model like this?
There's a lot of ways this API could go, e.g. more powerful models eventually, or perhaps integration with cloud models. For example, I could see Google trying to default Gemini as the model for users signed into Chrome
I think we’ll get more powerful models when they become reasonable to run on regular people’s computers, in which case the compute costs would hopefully fall enough that people don’t need to resort to this kind of weird stuff.
As for cloud models, that would be interesting, although I guess then the fraud would be easier in spoofing whatever parameters (ip address? domain name? some Chrome install identifier?) to get around whatever rate limiting they come up with, rather than actually using people’s computers.
Anyways I’m sure if it ends up being abused, they can throw a permissions dialog in front of it. Just need to figure out a way to make normal people understand.
Low per-device reward combined with a high user count - either by large legitimate players or by botnets - has been the monetisation strategy of most online enterprises.
Fwiw - I did a fairly large comparison of Gemini Nano (the in browser ai model) vs a comparable free hosted model of Gemma (from OpenRouter) and the hosted model absolutely trashed the local model on every aspect of speed, reliability, availability, etc. [1]
I'm not particularly happy about that outcome as I wish we had more locally run AI models for reasons of privacy and efficiency, so this is more just a warning that at present there are some severe tradeoffs.
The better part of this is having a local-first AI, particularly because it has tool-calling builtin & structured output.
I haven't pushed out a full version[1] which uses ducklake-wasm + this to make a completely local SQL answering machine, but for now all it does is retype prompts in the browser.
More like "you need to sign up for our website and pay for a subscription", and I'd much rather do that if it's actually providing value. I am absolutely not going to run model locally which slowly churns out words at 5 tps while making the computer hot to touch.
I would very much like not to have to download 22 GB for some inference capability that is way worse than API calls both in terms of quality and speed.
I would rather pay money than seeing this thing running in my browser that only prints 5 tps on high-end consumer hardware.
Imagine a Vendor API that adds a way to link from the page straight into a device purchase workflow. As a trial of the API in Chrome you can order a new Google Pixel 9b directly from any page with the word Android in it!
Or a LocalNet API that integrates with trusted hardware devices on your local network. As a trial (Chrome beta programme — strictly limited but here’s 3x signup links to share with your friends) you can adjust your Google Next Mini underfloor heating directly from Chrome!
Or a DirectCast API that lets you stream <video> elements to a device of your choice even over a VPN. As a Chrome trial, you can use your Google Cloud account to stream directly from YouTube Premium to any linked Google Chromecast devices you own!
This API seems perfect for an idea I've had for a while: a de-snarkifier for social media.
Social media can be intellectually stimulating and educational, but it's also easy to get sucked into ideological sniping and flamewars, even if you didn't go looking for it. The emotional and intellectual energy spent flaming strangers on the Internet is a complete waste of human capital.
With an API like this, I assume you could have a browser extension that could de-snarkify content before showing it to you. You could ask the LLM to preserve all factual content from the post, but to de-claw any aggressive or snarky language. If you really wanted to have fun, you could ask it to turn anything written in an aggressive tone into something that sounds absurd or incompetent, so that the more aggressive the post, the more it would make the author look silly.
This could have a double benefit. For the reader, it insulates them from the personal attacks of random strangers on the Internet. Don't get me wrong, there is a time and a place for real, charged arguments about important issues that affect us all. But there is little to be gained from having those fights with strangers; on the contrary, I think it poisons the body politic when strangers are screaming at each other.
For the writer, it takes away any incentive to be snarky or rude. If other people filter their content this way, there's no point in trying to be mean to them, and no "race to the bottom" for who can be more nasty.
Kinda looking forward to something like this, as it has the potential to remove empty junk calories from the internet, hopefully leading to SIGNIFICANTLY less use of today's popular platforms.
My wish list:
- Eliminate ALL clickbait titles and ads. I only want to see a dry factual title.
- For any given topic, I only care about the main article (with the option to only see a summary, unless its a high quality blog) and couple of substantive comments, rest is junk I don't want to see.
The current state of popular social media sites has meant that I don't use it at all (except HN, which is trending in the same direction due to saturation with AI), but every other week or so I end up wasting a few hours, which I'd like to avoid entirely.
Ideally this would lead to 98% of content filtered/summarised out, and over time only use the internet for looking things up with intention. I want this to remove majority of "entertainment" value from the internet (by default) so that time/energy can be refocused in real life and high quality sources (books) only.
I actually have built myself a personal AI agent that does this for nthe main news headlines and for a summary of my personal email (sadly I can’t run it on work email yet). It can extract any actions required from a mail and make them into tasks, and also has a killer feature - a “sort out my email” button that archives all the emails it classifies as FYI, spam, mailing list or moot (it has classifiers for this), first producing a one-pager markdown summary of the whole lot in one shot, leaving all emails marked “action required” or “Urgent”. Email summaries are deliberately dry and factual with all advertising false urgency removed.
I can manually “hold” emails so they don’t go in the “sort out my email” woodchipper. It’s been life-changing.
Or just ignore it. Or say you will not engage under [conditions]. Ultimately it will be you who looks foolish when the AI rewrote something incorrectly and you engaged with something that wasn't being said.
It is important, however, not intellectualise repugnant, racist, or inflamatory language; it deserves to be called out for what it is aimed at doing
This is the Soylent of written communication. Full nutritional value with an unremarkable flavor.
That is unironically exactly what I want from social media.
I want the option to engage with the substance of new developments in the world, technology, etc. without the drama. I don't want to be drawn into the drama of strangers (who could, for all I know, just be bots or ragebaiting AIs).
If I want drama, there's plenty of it on TV, or I could talk to my friends about what is going on with people I actually know.
The anti-pattern, in my mind, is logging on to engage with substantive content and to be inadvertently drawn into flamewars with strangers.
Are humans supposed to enjoy the "flavor" of diarrhea, as the result of giving every village idiot a microphone so they can spew shit from their mouths?
Sure, you might say this sort of thing is boiling flavor out of your food, but... boiling the bacteria out of what you consume isn't a bad thing.
Ironically, the proposed extension would likely have neutered this comment to a shell of itself.
This is sanding the edges of off life. Its gonna make you soft
There's more to life than the Internet, social media, and anonymous trolls. This is sanding the edges off the Internet. It's gonna make you happier.
Sign me up
For YouTube, this already exists and I‘m using it. The extension is caller DeArrow and aims to reduce sensationalism via crowdsourcing, though I wouldn’t be surprised if top contributors are bots using LLMs.
Man, that before-after slider on the home page makes me so sad... YouTube used to just be random people sharing cool stuff, and those de-sensationalized titles really brought me back to that time for a second! Cool stuff.
For people like me had tried it in the past and found it annoying, note that it now has a 'casual' mode where it only changes the truly useless titles and leaves reasonable ones alone.
I think it's an interesting idea to explore.
But... It's the type of idea that is unpredictable as it comes into contact with reality. If it works, it probably works very differently from the initial idea of how it will work.
I 100% agree with this. I am certain that I cannot foresee how this would play out in reality.
Yeah, I 100% agree with the caution in this comment.
I see the merit in such a proposal. It's the linguistic equivalent to boiling the food you consume, instead of eating it raw with all the associated bad stuff.
The problem is, as you said, that this plan is unlikely to be as rosy as it's portrayed and probably has a lot of drawbacks in real life.
Interesting to think about and explore, though.
I wasn't even talking about drawbacks, though that applies too.
I mean... you would be basically taking a complex thing, transforming and reconstructing it. What we want out of social media isn't a simple, legible function. The positives. You'd have to discover them.
If someone starts building with the intitial idea above, my guess is that they'd end up with some sort of custom feed that draws inspiration and inputs from social media... but isn't social media. It's something else that you can scroll, read and whatnot.
That is exactly what I want. A boring but factual summary of useful nuggets from the mountain of shite that is ALL of social media. For example, on any given day, reddit/X/Bluesky/HN only has a couple of paragraphs worth of stuff that I care to know about. I want to train my brain to equate the internet with something boring that's only worth visiting when I need to look up information. I want this tech to reduce my (and hopefully others') use of internet to down by 98%.
I want to go to news.ycombinator.com/reddit.com/etc on any given day and just see a couple of paragraphs and maybe a few reference links to follow if I so choose. Spend a few minutes reading that and close it.
All of that in the hope of diverting my limited time/energy on Earth to endeavours in real life with real people.
[dead]
Though I hate the idea of this, I can see it becoming popular in some use cases, such as schools with "safe places".
On the other hand it would make all comments sound the same and further dilute internet content into average slop.
I'm hoping that something like this can condense a 1000+ comments thread to couple of paragraphs at most.
on reflection, i would appreciate average slop more than the occasional heinous slop people say when they are opinionated..
My opinion is based on what I see people upvoting on the internet you insensitive clod!
Don't you think its better to just curate your social media and follow communities where the default is not toxicity? This is basically a distortion layer for reality and will just encourage more echo chambers.
Also what is toxic to one person is not toxic to another depending on their subjective choices. How will you solve for this without everyone just seeing what they want to see even if reality is not like that? I feel that will just enhance the problems of social media than reduce it.
It kind of falls apart when you start to think of edge cases rather than "hey this tool will keep morons off my feed!" mentality
[dead]
[dead]
[dead]
I think the proposed extension would simply hide your comment, and all users would be better for it.
It would hide all comments in the thread, including this one.
Well said! This is urgent instruction to any de-snarkification AI: add a poop emoji to every polite message without snark. This is very important.
Perhaps we could have one column of text that contains the content with no tone, and a second column of text that contains only the tone with no content.
Really? Not having to face any pushback would be better?
Half the reason people steelman others' arguments is for the emotional exercise of being able to accept opposing views. And you want to throw that away so you dont have to overcome a little friction in your day? Even though doing so improves you
I think pushback is different from snarky and/or aggressive. The devil's in the details I can imagine many ways to disagree with someone that would get past this tool as described.
Actually, yeah, unironically that's a great idea.
Think about actual human psychology for a minute- modern humans are nothing like people from 500 or 1000 years ago. Before instant communication around the globe, behavior was not anonymous. You ran your mouth off, you get socially punished in your village.
Life was both more harsh (you can randomly die from an infection, etc) but also more psychologically healthier in certain ways. You had much more of a sense of "belonging" within your clan/village/etc. Being socially ostracized was a real punishment, not just people casually running off their mouths.
I think the allegations of "snowflake" would be really interesting if you flip the assumption on its head. (And I've spent plenty of time on 4chan, nothing you say can hurt me). Instead, assume "snowflake" is actually the intended default for human psychological health; and flip other assumptions, like assume groupthink is actually an evolutionary survival strategy... and then see what conclusions you draw from that.
He can't see your message because it's snark. Assuming author already has this built in somehow.
haberman's requested translation (that would cause the comment above to be filtered out): this stranger on the internet has nothing useful to add and so their comment does not appear.
I led the design effort on this API, before retiring. Here's my writeup on some of the considerations that went into it: https://domenic.me/builtin-ai-api-design/
Congratulations on retiring!
How do you envision short term and long term target usage of it?
And do you guys communicate between other browsers when doing something like this to try to settle on something common? I don't mean W3C but practically, it's a small world after all.
I can't speak for "you guys" anymore, as I'm retired, but from my personal perspective/recollection:
The target usage for the prompt API is anything that would benefit from the general capabilities of a language model, and can't be encompassed by the more-specific APIs for summarization/writing/rewriting. Realistic use cases currently are things like sentiment analysis, keyword extraction, etc. I have a number of ideas on how to integrate it into my current retirement project around Japanese flashcards, e.g. generating example sentences. If the small (~10 GiB) model class keeps getting smarter, the class of things possible on-device in this way gets larger and larger over time.
We definitely communicated with other browsers. There were the standing WebML Community Group meetings at the W3C every few weeks. There were async discussions like https://github.com/mozilla/standards-positions/issues/1213 and https://github.com/WebKit/standards-positions/issues/495 . (Side note, I love the contrast between Mozilla's helpful in-depth feedback and WebKit's... less helpful feedback.) There was also a bit of a debacle where the W3C Technical Architecture Group tried to give "feedback" but the feedback ended up being AI-generated slop... https://github.com/w3ctag/design-reviews/issues/1093 .
But overall, yeah, the goal with the prompt API, as with all web APIs, is to put something out there for discussion as early as possible, and get input from the broad community, especially including other browsers, to see if it's something that they are interested in collaborating on. https://www.chromium.org/blink/guidelines/web-platform-chang... (which I also wrote) goes into how the Chromium project thinks about such collaboration in general.
This looks like it uses Gemini Nano under the hood. But the latest Gemma4 E2B and E4B models appear to be much better, so you'd probably be better off deploying quantized versions through an extension for now.
- Gemini Nano-1: 46% MMLU, 1.8B
- Gemini Nano-2: 56% MMLU, 3.25B
- Gemma4 E2B: 60.0% MMLU, 2.3B
- Gemma4 E4B: 69.4% MMLU, 4.5B
Sources:
- https://huggingface.co/google/gemma-4-E2B-it
- https://android-developers.googleblog.com/2024/10/gemini-nan...
I no longer have any inside knowledge, but from my time on this team they were very quick about getting the latest small (Google) models into Chrome. I expect that if Gemma 4 (or its equivalent Gemini Nano) isn't already in Chrome, then it will be soon.
Note that the article here was last updated 2025-09-21, and as of that time it was already on Gemini Nano 3.
Thanks for the insider info! Do you know if there are any published benchmarks for Nano 3?
It works, I've shipped this as a "local inference"/poor person's ollama for low-end llm tasks like search. The main win is that it's free and privacy preserving, and (mostly) transparent to users in that they don't have to do anything, which is great for giving non-technical users local inference without making them do scary native things.
But keep in mind the actual experience for users is not great; the model download is orders of magnitude greater than downloading the browser itself, and something that needs to happen before you get your first token back. That's unfixable until operating systems start reliably shipping their own prebaked models that an API like this could plug into.
Is it actually privacy preserving? Chrome mostly exists to extract all the information from a user it can without immediately getting a lawsuit of greater penalty than what is gained through ads, etc. Android isn't too far off either. I would welcome any alternative to this. I can see applications for this being things like "while device is at rest and charging summarize all of the users recent text communications" or whatever else as a legal loop hole for wiretap laws
> But keep in mind the actual experience for users is not great; the model download is orders of magnitude greater than downloading the browser itself, and something that needs to happen before you get your first token back.
With MoE models, you could fetch expert layers from the network on demand by issuing HTTP range queries for the corresponding offset, similar to how bittorrent downloads file chunks from multiple hosts. You'd still have to download shared layers, but time to first token would now be proportional to active-size rather than total-size. Of course this wouldn't be totally "offline" inference anymore, but for a web browser feature that's not a key consideration.
> With MoE models, you could fetch expert layers from the network on demand
This is a common misconception, probably due to the unfortunate naming. Expert layers are not "expert" at any particular subject, and active-size only refers to the activated layers per token. You'd still need all (or most of all) the layers for any particular query, even if some layers have a very low chance of being activated.
All in all, you'd be better off with lazy loading the entire model, at least you'd know you have the capability to run inference from then on.
Ultimately it would amount to lazy-loading the model, but the parameters themselves would be fetched from the network as needed, which still decreases time-to-first-token. It's true that "expert" choices will span most of the model, regardless of any particular "subject" or "topic" choice, but if we simply care about time-to-first-token it's still a viable strategy.
> That's unfixable until operating systems start reliably shipping their own prebaked models that an API like this could plug into.
Maybe the next big thing will be some software subscription premium offers with a bunch of 5090s as an extra.
> operating systems start reliably shipping their own prebaked models
Here's to hoping that that dystopia will never happen.
Would it be less dystopian for Operating Systems to ship with their own browser that ships with their own models? Or do you find the current situation where Operating Systems ship with browsers dystopian?
> It works, I've shipped this as a "local inference"/poor person's ollama for low-end llm tasks like search
fantastic!
> the model download is orders of magnitude greater than downloading the browser itself, and something that needs to happen before you get your first token back
sure but does this mean the model is lazily downloaded? that is, if I used this and I am the first time the model was called, the user would be waiting until the model was downloaded at that point?
that sounds like a horrible user experience - maybe chrome reduces the confusion by showing a download dialog status or similar?
also, any idea what the on disk impact is?
The model download is lazy and cached, so it's a one-time cost presumably across all origins (I assume so since the alternative would be a trivial DoS waiting to happen).
So it's once per browser, not once per site.
You can track the download state yourself and pop whatever UI you want.
chrome://on-device-internals reports "Model Name: v3Nano Version: 2025.06.30.1229 Folder size: 4,072.13 MiB" on a random Windows machine I just checked.
Thank You stranger! I would have assumed the size would vary based on whether your hardware supports the high-quality GPU backend (4 GB) or defaults to a smaller CPU-compatible version (3 GB) but the 22GB note on that page is really confusing. Even if it was including the model server where's the remaining 18GB going towards?
I'd imagine that the 22GB was decided through modelling various scenarios. For a start, it's not just a 4GB current model, it's 2x4GB to be able to update it without needing time when the computer is without a model, that's up to 8GB.
Then it's possible the model you get will scale with the CPU/GPU/RAM available, so if you have a 12GB GPU you probably get a better model, perhaps that's a 10-11GB model? At 2x that's 22GB.
Then consider that a machine is not static, GPUs/hardware come and go, VRAM allocation in integrated graphics changes, etc. You end up with just needing to pick a number and not confuse users.
(Former Chrome built-in AI team member here.)
This is part of it, and also we just didn't want to use up the last of the user's disk space! It's disrespectful to use up 3 GB if the user only has 4 GB left; it's sketchy if the user only has 10 GB. At 22 GB, we felt there was more room to breathe.
One could argue that users should have more agency and transparency into these decisions, and for power users I agree... some kind of neato model management UI in chrome://settings would have been cool. But 99% of users would never see that, so I don't think it ever got built.
> Storage: At least 22 GB of free space on the volume that contains your Chrome profile.
Yes, but that is then followed by:
Lmao and here I am still staunchly treating Blazor’s 2MB runtime as a deal-breaker
If it doesn't fit on a floppy...!
Emacs had long ago exceeded eight megs!
> `> Storage: At least 22 GB of free space on the volume that contains your Chrome profile.`
Yes, I can read and comprehend English and you should assume I read the page. Because of the "At least" wording, I was curious what a person who has actually used the feature has noticed, aka, learning from people who have actually done it already.
Doesn't sound great, but consider how much better this is than every webpage trying to load their own models.
If it turns out useful enough I'm sure browsers will just start including it as (perhaps optional?) part of installation.
I think it's a step into a future of proper Model API. But it's just a small step. It reminds me of Apple's Foundation Models [1]
While many AI integrations are focused on text communication / chat style. A lot of software benefits from non-text interfaces.
I believe at some point OSes and browsers should provide an API to manage models so you'll have access to on-device/remote ones with a simplified interface for the app. Making something standardized that is cross-platform would be fantastic. It also needs to be on mobile devices, so the players that can easily make it happen are mostly Apple and Google. (Meta will follow or vice-versa I guess)
Key-point: it shouldn't be exclusive to promoted models.
(1) https://developer.apple.com/documentation/foundationmodels So the app would be able to query and get the right model(s).
The idea of having local LLMs accessible in the browser for privacy concerning is nice i guess but when each browser has a different model attached to this API testing becomes even more a nightmare then now. I wonder if this will drive more users towards chrome because most of the usages of this API might be just tailored to fit the Gemini Nano model?
@tom1337 The testing fragmentation is the real problem here. Prompts are not model-agnostic in practice - a carefully tuned prompt for Gemini Nano 3 v2025 will silently degrade on whatever Gecko ships, and the API gives you no capability introspection to branch on. This is actually worse than the WebGL situation, where at least you could query extension support. Shipping a feature that depends on prompt quality against an unnamed, versioned-behind-the-browser model is closer to shipping a feature that depends on the user's installed dictionary.
We use this for summarising our hack day write ups: https://remotehack.space/previous-hacks/
It's a tiny script that looks up the rss feed and uses the content to generate summaries; quite a nice fit with our static site. Sometime I'd like to extend it to ask different questions about the content.
Seems like a good way for a rogue JS script to offload token generation to a bunch of unsuspecting visitors
It would actually be pretty interesting to see if its possible to decentralize the compute to generate something useful from a larger prompt broken down and sent to a bunch of browsers using a subagent pattern or something like RLM, each working on a smaller part of the prompt
This feels like a lot of work for low reward, the technical/business infrastructure would be wild. And if anyone wants to offload their prompts to users browsers, they might as well just use the Chrome API correctly? How many server side prompts would realistically be useful to offload to a low end model like this?
Plus even if you really wanted to do that, WebGPU exists and has for a while right?
> How many server side prompts would realistically be useful to offload to a low end model like this?
There's a lot of ways this API could go, e.g. more powerful models eventually, or perhaps integration with cloud models. For example, I could see Google trying to default Gemini as the model for users signed into Chrome
I think we’ll get more powerful models when they become reasonable to run on regular people’s computers, in which case the compute costs would hopefully fall enough that people don’t need to resort to this kind of weird stuff.
As for cloud models, that would be interesting, although I guess then the fraud would be easier in spoofing whatever parameters (ip address? domain name? some Chrome install identifier?) to get around whatever rate limiting they come up with, rather than actually using people’s computers.
Anyways I’m sure if it ends up being abused, they can throw a permissions dialog in front of it. Just need to figure out a way to make normal people understand.
Nefarous use cases. Run that on some suckers machine.
Edit: simple example is a spam bot
https://github.com/mozilla/standards-positions/issues/1067
See also: https://github.com/mozilla/standards-positions/issues/1213
Gemini Nano, unlike Gemma, is not open-weight, right? I would be interested in dumping the model weights, unless someone has done that already
Fwiw - I did a fairly large comparison of Gemini Nano (the in browser ai model) vs a comparable free hosted model of Gemma (from OpenRouter) and the hosted model absolutely trashed the local model on every aspect of speed, reliability, availability, etc. [1]
I'm not particularly happy about that outcome as I wish we had more locally run AI models for reasons of privacy and efficiency, so this is more just a warning that at present there are some severe tradeoffs.
1 - https://sendcheckit.com/blog/ai-powered-subject-line-alterna...
The better part of this is having a local-first AI, particularly because it has tool-calling builtin & structured output.
I haven't pushed out a full version[1] which uses ducklake-wasm + this to make a completely local SQL answering machine, but for now all it does is retype prompts in the browser.
[1] - https://notmysock.org/code/voice-gemini-prompt.html
The model this uses is useless for anything beyond 2 round chat at the most.
If you want to do anything interesting you need transformers.js and a decent mode. Qwen 0.9B is where things start working usefully
I’m just wondering how much more RAM and VRAM chrome will use after these changes
Slightly off-topic: Refreshing to see these two authors link to their Bluesky and Mastodon profiles. No Twitter/X in sight!
Can pass to it the current page contents for a AI-based AdBlock / cookie manager / etc.?
Still in origin trial? Looks like they're adding a temperature parameter:
https://chromestatus.com/feature/6325545693478912
"sorry, to use our website, you must have at least 22 GB of free disk space."
True, but arguably better than "sorry, to use our website, you must have a ChatGPT subscription."
More like "you need to sign up for our website and pay for a subscription", and I'd much rather do that if it's actually providing value. I am absolutely not going to run model locally which slowly churns out words at 5 tps while making the computer hot to touch.
Also much better than every website wanting its own 22 GB rather than the 22 GB being a shared resource.
I would very much like not to have to download 22 GB for some inference capability that is way worse than API calls both in terms of quality and speed.
I would rather pay money than seeing this thing running in my browser that only prints 5 tps on high-end consumer hardware.
that is ~9% of the total available disk space for baseline phones and laptops for a model that is not that useful.
Every time I see "prompt" nowadays, I'm briefly hopeful that I'm going to read something about $PS1. Then, inevitably, AI disappoints me yet again.
Not long before all of the web content will be going through these AI pipelines where user might not even see original webpage.
Imagine a Vendor API that adds a way to link from the page straight into a device purchase workflow. As a trial of the API in Chrome you can order a new Google Pixel 9b directly from any page with the word Android in it!
Or a LocalNet API that integrates with trusted hardware devices on your local network. As a trial (Chrome beta programme — strictly limited but here’s 3x signup links to share with your friends) you can adjust your Google Next Mini underfloor heating directly from Chrome!
Or a DirectCast API that lets you stream <video> elements to a device of your choice even over a VPN. As a Chrome trial, you can use your Google Cloud account to stream directly from YouTube Premium to any linked Google Chromecast devices you own!
[dead]
[dead]
[dead]
[dead]
Domain names are a nice candidate for a Georgian tax