I'm going to chime in here. I think 1. this is great and Mozilla is listening to its core fans, and 2. I want Firefox to be a competitive browser. Without AI-enabled features and agent mode as first-class citizens, it will be a non-starter in two years.
I want my non-tech family members/friends to install Firefox not because I come over at Christmas, but because they want to. Because it's a browser that "just works." We can't have this if Firefox stays in the pre-AI era.
I know Mozilla doesn't have much goodwill right now, but hopefully with the exec shakeup they will right the ship on making FF a great browser, while still staying the best foil to Chrome (in browser engine, browser chrome, and extension ecosystem).
Fully disagree. I use zero so-called "AI" features in my day to day life. None. Not one. Why do I need them in my browser, and why does my browser need to focus on something that, several years into the hype wave, I still *do not use at all*? And it's not for a lack of trying, the results are just not what I need or want, and traditional browsing (and search engines, etc.) does do what I want.
I'd be elated if Firefox solely focused on "the pre-AI era", as you put it, and many other power users would be, too. And I somehow doubt my non-techie family cares; if anything, they're tired of seeing the stupid sparkle icons crammed into every single corner of the world now.
There are many features in your software that you never use. Their mere presence shouldn't be a problem. You should evaluate software by what it gives you and what harm it brings, not by what it gives others you don't care about.
And so far, we can assume that AI in Firefox will be like all the other stuff people don't care about: optional, a button here, a menu entry there, waiting for interaction but not harmful.
I agree. Why support pushing the masses into another big tech machinery that just rips off their data and collectively makes it worse for all of us again? We are already way too cool with people frying their brains on X, TikTok, Instagram and whatnot. If anything, as devs, we should help people get back to focusing on their own lives over the monetization of attention spans. But this industry has no backbone and is constantly letting people down for a quick buck.
You never translate any websites?
AI tools are here to stay. They will creep into everything, everywhere, all the time. Either you recognize the moment at which it becomes a significant disadvantage not to use them (I agree that moment is not now), or you get left behind.
The metaverse is here to stay! Blockchain is the future!
Without integrating metaverse and blockchain features into Firefox, Mozilla is at a significant disadvantage compared to other browsers. Don't get left behind!
They did actually jump on the metaverse with Firefox Reality and Mozilla Hubs. Neither was a bad product at all. Both are now cancelled, and they did basically nothing for Mozilla's market position.
Edit: so I mean I agree here in case that wasn't clear
Many things are "here to stay"; should Mozilla also build "share with TikTok" functionality into their browser?
> or get left behind.
Last time I heard this phrase it was about VR, and before that it was NFTs. I wish the tech community weren't so susceptible to FOMO sentiments.
Indeed. I never understood, let alone bought into, the NFT hype, but I think VR is a good reference point for AI:
There was a real, genuine product in the Oculus Rift. It did something that was an incremental improvement over the previous state of the art which enabled new consumer experiences for low cost.
The Metaverse was laughable, and VR got glued to a lot of things where it added zero value, or worse, negative value. For example, my attempts to watch pre-recorded 3D video gave me nausea, because the camera can only rotate with my head movements, not translate.
Compare and contrast with AI:
LLMs and diffusion models are also real, genuine products that are incremental improvements over the previous state of the art and enable new consumer experiences at low cost.
A lot of the attempts to integrate this AI have been laughable, and have added zero-to-negative value.
Non corporate VR is actually doing some interesting things - but yeah, what Meta did with it was pure garbage.
I didn't mean it as VR being useless - I'm sure it can be useful for some applications or fun for gaming - my point was that you shouldn't fear getting left behind just for not having an Apple Vision Pro app or a land in the Metaverse :)
Another way to see this: Hammers can be useful, the Internet can be useful, but this doesn't mean that as a hammer manufacturer you should make your next hammer an IoT product ASAP or you will be left behind.
Well stated, agreed. :)
Just wanted to note that even after the bad publicity that companies like Meta (ugly avatars, unusable bland virtual spaces) or Apple (overpriced device with no software or content) have given VR, some people regard it as dead, even though there is quite a vibrant user and creator community doing some incredible things (even just what people do in VRChat is amazing!). And there are even companies that seem to get it (Valve).
I don't think it's quite that simple. A great deal of work has nothing to do with computers, and even more human activity has nothing to do with economic advantage. The scope of your statement is a bit too broad in that regard, but for computer-based work I think you are a) more or less right, but b) if you are right, it's not clear how much economic benefit LLMs will actually provide on balance, long term.
Does it make the world a better place, and more prosperous? Does it just move economic activity around a bit in regards to who is doing what? We'll find out in ten years when the retrospective economic studies are done.
People wonder why there's a backlash when the pro-AI side sounds like the Borg.
I disagree and I think the moment is now. Gemini 2.5 and now 3.0 is incredible. People who don't recognize that and don't use AI tools now are as silly as a craftsman who uses a hammer as a screwdriver when he has a screwdriver in his toolbox. A good craftsman uses the right tool for the job to save time and do a better job, and knows the limitations of each tool.
I can spend hours learning Photoshop and then trying out color schemes for my new intricately detailed historic house, or removing a car from the driveway, or I can use Nano Banana and be done in a prompt. There is dignity in learning all that minutiae, but I don't care; I'm not a Photoshop artist, I just want the result so I can move on with my life and get the house painted.
AI crap has already been crammed into everything for months now, and nobody likes it or wants it. There is no proof that AI will continue to improve and no certainty that it will become a disadvantage not to use it. In fact, we are seeing the improvements slow down, and it looks like the models will plateau sooner rather than later.
> There is no proof that AI will continue to improve and no certainty that it will become a disadvantage not to use it. In fact, we are seeing the improvements slow down, and it looks like the models will plateau sooner rather than later.
While I expect the improvements to slow down and stop, due to the money running out, there's definitely evidence that the models can keep improving until that point.
"Sooner or later", given the trend lines, is sill enough to do to SWEng what Wikipedia did to Encyclopædia Britannica: Still exists, but utterly changed.
> While I expect the improvements to slow down and stop, due to the money running out
This will certainly happen with the models that use weird proprietary licenses, which people only contribute to if they're being paid, but open ones can continue beyond that point.
The hyperscalers are buying enough compute and energy to distort the market, with enough money thrown at them to distort the US economy.
Open models, even if 100% of internet users joined an AI@HOME-style project, don't have enough resources to match that.
You are right: machine learning models usually improve with more data and more parameters, and open models will never have enough resources to reach comparable quality.
However, this technology is not magic; it is still just statistics, and during inference it looks very much like your run-of-the-mill curve fitting (just in a billion-parameter space). An inherent problem in regression analysis is that at some point you have too many parameters and you start fitting the random errors in your sample (called overfitting). I think this puts an upper limit on the capabilities of LLMs, just as it does for the rest of our known statistical tools.
There is a way to prevent that, though: reduce the number of parameters and train a specialized model. I would actually argue that this is the future of AI algorithms, and that LLMs are kind of a dead end whose usefulness is limited to entertainment (as a very expensive toy). Current applications of LLMs will be replaced by specialized models with fewer parameters, which are much, much cheaper to train and run. Specialized models predate LLMs, so we have a good idea of how an open-source model fares in that market.
And it turns out, open-source specialized models have proven themselves quite nicely. In Go we have KataGo, one of the strongest engines available; in chess we have Stockfish.
> I use zero so-called "AI" features in my day to day life. None. Not one.
I know so many people who made that same argument, if you can call it that, about smartphones.
I recently listened to a podcast (probably The Verge) about an author who was suddenly getting more purchases from his personal website. He attributed it to AI chatbots recommending his personal website as the best place to buy, rather than Amazon, etc. An AI browser might be a way to take power away from all the big players.
> And it's not for a lack of trying, the results are just not what I need or want, and traditional browsing (and search engines, etc.) does do what I want.
I suspect I only Google for about 1/4 of the things I used to (maybe less). Why search, wade through dubious results, etc., when you can just instantly get the result you want in the format you want it?
While I am a techie and I do use Firefox -- that's not a growing niche. I think AI will become spectacularly better for non-techies because it can simply give them what they ask for. LLMs have solved the natural language query issue.
> I know so many people who made that same argument, if you can call it that, about smartphones.
Sure, but people also told me I'd be using crypto for everything now and (at least for me) it has faded into total obscurity.
The biggest difference for me is that nobody (the companies making things, the companies I worked for...) had to jam smartphones down my throat. It made my life better so I went out of my way to use it. If you took it away, I would be sad.
I haven't had that moment yet for any AI product / feature.
Any AI product I pay for is great. Any AI product I don't pay for is terrible.
> Any AI product I pay for is great. Any AI product I don't pay for is terrible.
This doesn't sound like the "free sample" model is working then? If I try the free version of product X and it's terrible, that will discourage me from ever trying the paid version.
I think half the people who think AI is incredibly dumb, and can't understand why anyone uses it, feel that way because they're using the free samples. This whole thing is so horribly expensive that they lose money even on people who pay, so the free samples are necessarily as minimal as they can get away with.
The free samples famously worked to get people to try it initially, though.
But whenever that free Gemini text pops up in my search, I know why people think it's stupid. But that's not the experience I have with paid options.
> Why ... wade through dubious results, etc when you can just instantly get the result you want in the format you want it?
Funnily enough, this is exactly how I justify Googling stuff instead of asking Gemini. Different strokes I guess!
> > I use zero so-called "AI" features in my day to day life. None. Not one.
> I know so many people who made that same argument, if you can call it that, about smartphones.
I had to use a ledger database at work for audit trails because they were the hot new thing. I think we were one of the few who actually used AWS QLDB.
The experience I've had with people submitting AI-generated code has been poor: poorly performing code, poor-quality code using deprecated methods and overly complex functionality, and then a poor understanding of why the various models chose to do it that way.
I've not actually seen a selling point for me, and "because Google is enshittifying its searches" is pretty weak.
I've been posting recently about how I refactored a few different code bases with the help of AI: faster code, higher-quality code, overall smaller. AI is not a hammer, it's a lathe: incredibly powerful, but only if you understand exactly what you're doing; otherwise it will happily make a big mess.
if you have to understand exactly what you're doing, why not just... do it?
That question completely misunderstands what AI is for. Why would I just do it when the AI did it for me in less time than I could myself, and mechanically, in a way that is arguably harder for a human to do? AI is surprisingly good at identifying all the edge cases.
i probably don't understand. the main thought i have re: llm coding is, why would i want to talk to an insipid, pandering chatbot instead of having fun writing code?
but, as an engineer, i have to say if it works for you and you're getting quality output, then go for it. it's just not for me.
It seems to me you're coming in with negative preconceptions (e.g. "insipid, pandering chatbot"). What part of coding is fun for you? What part is boring? Keep the fun bits, and take the boring bits and have the LLM do those.
> Faster code, higher quality code, overall smaller.
I'll have to take your word for it, I have yet to see a PR that used AI that wasn't slop.
> AI is not a hammer, it's a Lathe
I would liken it more to dynamite.
> I'll have to take your word for it, I have yet to see a PR that used AI that wasn't slop.
How would you know a non-slop PR didn't use AI?
Why would I accept slop out of the AI? I don't. So I don't have any.
I don't understand the disconnect here. Some people really want to be extremely negative about this pretty amazing technology while the rest of us have just incorporated it into our workflow.
> How would you know a non-slop PR didn't use AI?
I don't, hence why I have to take your word for it.
The PRs people have submitted where they either told me up front that they used AI, or admitted it after the review probed why their library usage was inconsistent, were not good and required substantial rework.
Yes, some people may have submitted PRs that used AI and were good. But if so, they haven't told me. I would have hoped that people advocating for it would have either told me, or had me review it, waited for me to say it was good, and then revealed it was a test and the AI passed. So far that hasn't happened, so I'm not convinced it's a regular occurrence.
Maybe the problem with understanding the benefits of AI is that you are relying on other people to use AI properly. As the direct user myself, I don't have that problem.
I'm using it to make things better rather than just producing. Even just putting it in agent mode and saying "look at all my code and tell me where I can make it better" is an interesting exercise. Some suggestions I take, some I don't.
> Why search, wade through dubious results, etc when you can just instantly get the result you want in the format you want it?
For one, that way you can see that the source is dubious. Gemini gives it to you cleaned up, and then you still have to dig through the sources to confirm that what it gave you is correct and not hallucinated.
> Because it's a browser that "just works." We can't have this if Firefox stays in the pre-ai era.
Strongly disagree.
There's no expectation of AI as a core browsing experience. There isn't even really an expectation of AI as part of an extended browsing experience. We can't even reliably predict what AI's relationship to browsing will be, if there is to be one at all. Mozilla could safely wait 24 months and follow if features turn out to be in demand and actually used.
Firefox can absolutely maintain "It just works" by being a good platform with well tested in demand features.
What they are talking about here are opt-out-only experiments intruding on the core browsing experience. That's the opposite of "It Just Works".
>I know Mozilla doesn't have much good will right now, but hopefully with the exec shakeup, they will right the ship on making FF a great browser.
It's already a great browser. It doesn't need a built-in opt-out AI experience to become great.
There was also no expectation of process isolation in Mozilla Firefox when Google Chrome first came onto the scene. Electrolysis was painful for Mozilla, and yet it was necessary.
So instead of being flexible enough to adapt to new requirements as users demand them, they are blindly implementing things before they are requested just in case?
Believe it or not, well-intentioned developers, product managers, etc. can read the writing on the wall and see where user expectations are heading based on the apps and products they already use.
Exactly why I am baffled. You would think they could read the writing on the wall.
I don't like it, but ChatGPT is a product that nearly a billion people are using. It's broken into popular culture. My mom, who has trouble sending an email, uses it. She found it on her own.
More importantly, generative AI is incredibly popular with younger cohorts. They will grow up to be your customer base if they aren't already. Their expectations are being set now.
Again, I don't like it, but that's the reality.
Quoting myself from another thread.
> I love it. I love going to the AI place and knowingly consulting the AI for tasks I want the AI to perform. That relationship is healthy and responsible. It doesn't need to be in everything else. It's like those old jokes about how inventions are just <existing invention> + <digital clock>.
> I don't need AI on the desktop, in Microsoft Office, replying to me on Facebook, responding to my Google searches, AND doing shit in my browser. One of these would be too much, because I can just access the AI I want to speak to whenever I want it. Any two of these is substantial overkill. Why do we have all of them? Justify it. Is there a user story where a user was trying to complete a task but lacked 97%-accurate information from 5 different sources to complete the task?
Being against the random inclusion of AI in the browser isn't the same as being against AI completely. It needs to justify its presence.
Video games are incredibly popular and my mom plays them, does that mean Firefox should have video games baked in at the base layer?
Firefox needs to immediately build Candy Crush into the browser. Users expect to be able to access Candy Crush and only at the layer of web browser can such a thing be implemented.
In a world where people expect games in their browsers, sure.
A co-worker was talking about how he tried to make an invitation card with ChatGPT, just a picture of his house plus some text, and the AI failed to do it. It said he didn't have copyright to the picture and used another random pic, the layout was wrong, etc. Then a younger co-worker gave tips on how to do it and what tools to use, and offered to make it with his better AI program.
What could be done in a few minutes with a free program now takes multiple hours with billion-dollar AI tools, and you have less control over the end result.
Obviously your co-worker was not able to do it in a few minutes with a free program, or he would just have done it this way.
+ Children are growing up with ChatGPT and Gemini. It has already become the de facto standard for learning. AI in browsers is inevitable.
"Children are growing up with ChatGPT and Gemini"
Yes.
"It has already become the de facto standard for learning."
Maybe.
"AI in browsers is inevitable."
Why. How does that follow. It seems like ChatGPT and Gemini are already working fine, what does the integration add?
And assuming people want deeper integration, is the browser even the right level of abstraction? Arguably it would be better to have something operating at the OS level, Siri/Gemini-assistant style.
When Microsoft completely integrates its LLM into Windows, would you rather give that access to your browser, or would you rather plug in your own local model / turn it off entirely while browsing?
If a global LLM becomes standard, I'd want to plug in my own local model or disable it entirely, but I don't think Microsoft or Apple are going to open up their operating systems and make that easy any time soon. The option to granularly use your own models is a plus to me in that situation.
Every app has to open itself up for integration, especially if it's not a native app, like Firefox. Where they get the AI from in the end doesn't really matter; they will support them all anyway.
Precisely. The winner could be in any of 100 spaces, but more likely it's going to be something global.
Filling out forms, booking tickets, summarizing content ...
Even at work, I have seen a few junior developers use AI browsers to attend mandatory compliance courses and complete the quizzes. Not necessarily a good thing, but AI browsers may win in the end, and it might be too late for Firefox.
?????
Why does the existence of an AI chatbox website mean a browser must do more than take you to that website?
The forceful inclusion of LLMs in places where they add no value is simultaneously ubiquitous and obnoxious.
Because the chatbox can't access other websites to do its work there. That's what integration is all about: connecting parts.
"why do I have to go and fill with copy paste that form or navigate through that page to do $something if that AI browser can do it for me?"
And in that scenario, there is a GIGANTIC need for a user-first, privacy-respecting browser using ideally local models (in a few years, when HW is ready)
Again: ???????
You people need to be forced to use your product in the exact form your product is presented to end users. With the exact frequency it's presented to end users. In all the wrong places as it is presented to end users.
Maybe then you'll understand why shoving AI in every conceivable crevice is incredibly obnoxious and distracting and, most importantly, not useful.
Shoving an AI agent in every website is distracting and not that useful. Shoving an AI agent in every app is distracting as well.
Having one global AI agent per operating system or browser (where most digital life happens, in the case of desktop browsers) is probably going to be useful for the people who want an AI agent, if well implemented.
OS might make sense, but the browser level is a weird middle space for it.
I know, but at the end of the day most people nowadays do the vast majority of their job in a browser, and there is already a well-defined API to manage its content. Also, browsers are getting there faster, and at some point it will become what people expect, rather than what's most optimal.
Last I checked Firefox was sitting at 4% browser market share. If you include Brave you just get to 5%[0].
So the truth is that privacy isn't enough to get people to switch, and 5% share isn't enough to stay alive and protect privacy.
This is the job of every engineer. Your job is to understand the product, where it's going, and where it can help the users. The users don't know the technical side. They barely know what they want. Yes, you should listen to users, but you also have to read between the lines to figure out what they actually want. Frankly, the truth of the matter is that what people say they want is very different from what they actually want. Work with customers and you'll experience this first hand... I thought it was a big enough meme that everyone knew this.
Speaking about reading between the lines, the privacy community is not very good at advocating for privacy. Look at Signal, it has similar backlash to Mozilla. The community shoots itself in the foot because the products are not perfect. But here's the thing, both Signal and Firefox are not products intended to maximize privacy at all costs. They are products to maximize privacy while being appealing to the masses. Are there more secure and private solutions out there? Hell yeah. But are those tools practical for the masses? Don't fool yourselves.
So stop with this bullshit, you're shooting yourself in the foot. You don't have to use Firefox to root for them. Go use a fork like the Mullvad browser or Waterfox. If you're a power user then just be a fucking power user. I use Arch but that doesn't mean I'm going to piss on Ubuntu every chance I get. I fucking hate Ubuntu but I'm going to root for them because every new Ubuntu user is one less for Microsoft and Apple and every new Ubuntu user is a potential new <literally any other distro is better> user. So why get angry because someone is making a step in the right direction? So what if their legs aren't long enough to get all the way to where you are (which didn't take one step either!).
So let's be very clear about this. I'm not mad at you because I want AI in Firefox (I don't); I'm mad at you because you're attacking our literal last line of defense for a secure and private internet. I'm mad at you for purity testing. Stop with this "no true Scotsman" bullshit. We can have those arguments at a later date, when Firefox isn't on its last leg and/or when we have a diverse choice in browsers. But at this point *all you are doing is advocating for Chrome*, whether you realize it or not. We've been playing this fucking game for a decade now, and you can either look at the results or continue to ignore them. It didn't work, so we need to try something else.
[0] https://radar.cloudflare.com/reports/browser-market-share-20...
>This is the job of every engineer.
No, it's not. Engineering is about right-sizing the product, and this is not that. There's no user story; there's no pressing demand. Every CTO in the world might be racing to force AI into their products regardless of utility, but there's no reason to pretend this is being done for good engineering reasons.
>Work with customers and you'll experience this first hand.
There's no customer benefit to shoving AI into every application at every layer. This is not about the customer. This is about a race to cram the feature into every conceivable space and see where it sticks. This is corporate, with no sense of good engineering. They also don't want it. What a combo: no utility and no demand. If anything, it's a bit like the story of fish fingers, where the pressing need was a big warehouse full of unwanted fish bits that they wanted to move, and the innovation was productising it in such a way that people would actually purchase and consume it. In this case we have datacenters full of AI cards that desperately want a market. It might be uncharitable, but I do wonder if the Mozilla Foundation has been promised some financial reward if they solve this issue.
There has not been any demonstrable requirements gathering for this change. An executive directed this, and to pretend otherwise is insane.
>Speaking about reading between the lines, the privacy community is not very good at advocating for privacy.
No they aren't very good at it at all, but that's a massive non sequitur.
>So stop with this bullshit, you're shooting yourself in the foot.
No, defending Firefox from valid criticism is the self-inflicted injury.
>I use Arch but that doesn't mean I'm going to piss on Ubuntu every chance I get.
Ok, but I would think it fair and reasonable to criticise Ubuntu if they decided to randomly cram an opt-out LLM into the distro, and I think your criticism of them would also deserve to be heard. You don't need to be the Ubuntu or Firefox internet defense force.
>So why get angry because someone is making a step in the right direction?
I haven't been angry at a single Firefox user here, and I would ask you to stop making things up just to be angry about. There's not an ounce of "boycott" or anything in my posts. I am writing this from Firefox. I am permitted to be critical of the browser I am using.
>I'm mad at you because you're attacking our literal last line of defense for a secure and private internet.
"Attacking". Its clearly necessary criticism. The devaluing of the product is coming from inside the house. Their chief rival is, critically, releasing a separate browser to test their AI features in. For Chrome itself Gemini is in the extension store. It is OPT IN, not OPT OUT. https://chromewebstore.google.com/detail/gemini-for-chrome/a... Their chief rival is respecting end user consent better. If you want them to be a more popular browser, why don't you hold them to a better standard than Chrome, instead of policing the critics?
(Also boo for making me open chrome to check)
>Stop with this "no true Scotsman" bullshit.
I literally cannot identify a no true Scotsman argument in my comments. There's a difference between saying "no TRUE web browser would..." and pointing out, validly, that there's no interest or demand in the feature being rolled out. If anything, the closest thing in this thread to a no true Scotsman, while still technically failing to be one, is the idea that you can't be a true supporter of privacy while being critical of Mozilla.
>We can have those arguments at a later date when Firefox isn't on its last leg and/or when we have a diverse choice in browsers.
No, now is a great time.
>But at this point all you are doing is advocating for Chrome.
No, I am asking them to be competitive with Chrome, and to treat users that well or better.
>But it didn't work, so we need to try something else.
Enshittification isn't a plan.
Translate requires you to download the model for language pairs. That's opt-in.
The chatbots aren't chatbots, they're just a fucking shortcut to the 5th most popular website on the internet.
I hate to break it to you, but there's also a shortcut to the #1, #2, #4, #6, #7, #9, #10, and #13 most popular websites. It's the literal URL bar... You can type "!w hacker news" to search Wikipedia for hacker news.
Sorry, it is just as laughable to say Firefox is shoving Wikipedia down your throat as it is to say they're shoving AI.
Do you realize how big an LLM is? Clearly you don't. The browser isn't going to fit on a lot of people's computers if they shove an LLM in. And hey, if you feel I'm wrong here, go jump on a fork that isn't going to add those things, like Mullvad or Waterfox. That's still supporting Firefox by standing against Google, while also sending a clear signal that you don't want those features. Have your cake and eat it too. But I'm saying "shut up with the talk that makes people switch to Chrome". We have to be honest with ourselves here: all this outrage at Mozilla for not being pure enough is just driving people to Chrome. That's why I'm calling all this fucking idiotic. It's a literal footgun. But don't listen to me; look at what's happened in the past. Look at the comments here. Look at the comments in the past. FFS, people were equating Mozilla accepting crypto donations with shipping a miner in the browser. It literally takes place in the Mastodon thread we're all talking about. Those things are wildly different, and it is a wildly disingenuous interpretation.
So yeah, I'm going to keep calling this complaining idiotic and counterproductive. We've been grabbing our pitchforks for years every time Mozilla even slightly steps out of line, or even when we just think they might! And for years their browser share has been siphoned off to Chrome or some painted-up variant. So forgive me if I don't believe your actions align with the goals you claim. And forgive me if I cannot distinguish complaining from criticism, because, as I've stated above, your evidence doesn't appear to be what you claim it is. Saying they're shipping LLMs is just as disingenuous as saying they shipped a crypto miner. It is such a grotesque mischaracterization that it is laughable.
>"Shut up with the talk that makes people switch to Chrome".
No, this is silly and you should dispense with it. There's no cow so sacred it can't be criticised.
>Do you realize how big an LLM is? Clearly you don't.
Yes I do, I have run Qwen 4B locally.
>The browser isn't going to fit on a lot of people's computers if they shove an LLM in.
That was the point I was making. Anything down this track is going to cause resource headaches, and browsers are already a resource headache.
>We've been grabbing our pitchforks for years every time Mozilla even slightly steps out of line
I have been cool with literally everything else they have done, and even spent time pointing out that the T&C changes were relatively normal practice. Lumping me in with every other criticism you don't like is a genetic fallacy.
>It's a literal footgun.
It's community feedback. They ignore it at their peril.
> But don't listen to me look at what's happened in the past. Look at the comments here. Look at the comments in the past. FFS people were equating Mozilla accepting crypto donations with shipping a miner in the browser. It literally takes place in the Mastadon thread we're all talking about. Those things are wildly different and it is wildly a disingenuous interpretation.
I don't see how you are equating these. My post isn't in the Mastodon thread.
>Saying they're shipping LLMs
You literally quoted my "if", not reading and understanding how that word modifies the language around it is on you.
There are really two states until there's an OS-level LLM layer.
1. Shipping a low-parameter or otherwise compressed LLM.
2. LLM features are non-local, with all the headaches that entails.
> And hey, if you feel I'm wrong here go jump on a fork that isn't going to add those things like Mullvad or Waterfox
I am required as a matter of employment to use an approved web browser 9-5. We have strict rules against AI use, only permitting copilot with enterprise data protection. (Using copilot without that stupid green tick, and letting customer data into it, is an instant RGE, same with any other LLM)
Another leg of this, is that I have 1000 profiles on 1000 computers for quite a few different customers. The nature of my work does not permit a single profile that roams to every computer I use. I cannot cross the streams between multiple customers.
I am not going to be asked to go to every PC and server, and opt out of AI features. I am likely to be asked to completely remove Firefox as an approved browser, pull it off every machine, and push only Microsoft Edge going forward. Because perversely, it auto signs in to Copilot with EDP via 365, if EDP has been purchased. So it can (sadly) do whatever the fuck it wants.
We are super sensitive to supply chain stuff, so are unlikely to be permitted to use any fork. We recently dismissed an addon for a product that was developed, by one of the core developers of the product, on his own time as an extra feature. Why? The guy didn't spend a lot of time maintaining it and seemed overwhelmed with pull requests. I haven't looked at the Firefox forks, but I know our security posture and it's probably a non-starter.
Even the idea that an unapproved LLM could scoop up what I am browsing is going to poison the well.
And like I have repeatedly stated, I like firefox. I want to use firefox. Part of the reason I want to use firefox is that it has no LLM instead of the approved LLM that I detest.
Even if Firefox goes "aha, we will implement our service to connect to Copilot with EDP", it would have to do so in a fashion where it is completely seamless and zero-touch, with no fallback to regular Copilot or another service. I can't conceive of Microsoft being that friendly.
We aren't alone in this. Lots of organisations are making similar decisions. I speak to other service providers going through the same headaches. Our contracts make it our responsibility to actively prevent customer data exfiltration, which is a basic feature and requirement of a lot of LLM implementations. As you stated, the browser LLM will want the browser window context. Microsoft (sadly) understands this, knows that users will desire a path to an LLM if one isn't available, and understands what words we need written into the service agreement to make EDP attractive. Copilot is terrible; it's also perfect for enterprise.
Firefox has over the last few years become a starkly friendly safe haven for privacy focused work. I feel, or at least felt, completely safe performing sensitive work with their product. Work being the keyword in enterprise. Things that become a toy or a risk go away. Chrome use in similar orgs to mine is decreasing for various similar reasons.
I am not sure if you think losing a decent chunk of enterprise users with sensitive requirements is worth it, but at least you spent your time defending Firefox from criticism that might have retained those users? And when Firefox is just doing, feature for feature, exactly what Chrome is doing, you might see new users show up for some reason? I don't see that as a winning strategy.
I really don't think they should play the ice cream vendor game at all. Let Edge and Chrome play in that space. Give firefox room to not suck. Let firefox be an actual alternative with significant points of difference not just an also ran.
I don't believe in sacred cows, but unlike you I do not believe there is a cow so profane that it should be vilified at every turn.
All you are doing is teaching Mozilla to ignore you. If they do bad, you get angry. If they do good, you get angry because it isn't good enough.
They cannot win! So tell me why they should listen. Maybe they can do well enough by you, but by doing so they won't do well enough by millions of others. And why should they optimize for one person?
Do not conflate my disbelief in demons with a belief in gods. You are confused about who is the religious one here.
This is how Firefox fell behind Chrome and bled their entire market share. The strategy of letting Chrome out innovate them and then copy what they think is good is not a strategy that works.
I'm quite sure only "we" care about esoteric browser features.
"Does it have tabs? OK. Fine."
Firefox losing market share was probably due more to Google nagging desktop users than to features.
What did Chrome have that Firefox didn't?
You and your argument are reductionist.
Not crashing frequently, due to having proper tab isolation and a sane extension system.
Firefox fell behind Chrome because of aggressive marketing from Google in a way that probably violates some antitrust laws if they were actually being enforced, combined with a couple of own-goals from Mozilla.
Basically Google exploits their market dominance in Search and Mail to get people to use Chrome (and probably their other services too). When you search in a non-Chrome browser, you'll be constantly informed by Google about how much better their search is with Chrome through pop-ups and in-page notifications (not browser notifs). If you click a link in the Gmail app on iOS, rather than opening the browser, you get a Chrome advertisement if they detect it isn't your default browser.
This goes hand-in-hand with Chrome being the default Android browser (don't underestimate the power of being the default) and Mozilla alienating their core audience of power users by forcibly inserting features that those power users despise.
Chrome never won on features, it won on marketing and abuse of a different monopoly.
It works pretty well for Apple
Not on desktop. They are losing market share to chrome year over year.
> Mozilla could reliably wait 24 months and follow if features are actually in demand and being used.
I'm also wondering how much of what they come up with could be implemented as an addon instead of a core part of the browser.
>Without AI enabled features + agent mode being first class citizens, this will be a non-starter in 2 years.
I want an application to serve me webpages and manage said webpages. It wasn't a "non-starter" for me 2 years ago when I switched off Chrome who chose to be too user hostile to ignore. It won't be a non-starter here.
>I want my non-tech family members/friends to install Firefox not because I come over at Christmas, but because they want to. Because it's a browser that "just works." We can't have this if Firefox stays in the pre-ai era.
If "it just works" is all my non-tech family needs, I'm not really gonna intervene and evangelize for Mozilla. I don't work for them (if you do, that's fair). Most browsers "just work" so mission accomplished. These are parents who were fine paying Hulu $15/month to still see ads, so we simply have different views. I'm sure they felt the same way about my pots falling apart and insisting "well, they still work".
Meanwhile, my professional and personal career revolves around the internet, and I don't want to be fighting my screwdriver because it wants to pretend to be a drill. At some point I will throw the drill out and buy a screwdriver that screws.
If you want to be a power user, then be a power user. Switch to a fork like Mullvad or Waterfox.
Seriously, I don't get the problem here. I don't want AI in my browser either, but it is pretty simple for us who care to switch away. It's even easy for those who aren't technically skilled!
If this is what stops Firefox from drowning then I'm all for it. They are our last line of defense from a Google controlled internet. What Firefox puts in their browser isn't that critical to me (i.e. doesn't affect me) as long as it stays open source and there are forks. But Firefox dying does! So yeah, I'm gonna root for Firefox even when it does things I don't want because what I care about far more than any specific browser feature is the internet not being controlled by any single entity.
So can we make sure we're fighting the right battles?
>If this is what stops Firefox from drowning
By having the most engaged users leave and splinter the community further, in hopes that Mozilla out-competes the trillion dollar monopoly in a war of data centers? We really will do anything except message your policy makers, huh?
I migrated several times before and am browsing around now. I'd even be so bold as to say that Mozilla dying will not kill the Quantum web engine scene, so I have no allegiance to rooting for yet another billion dollar corporation that has continually proven that their customers are not their interest. But I don't think "no one cares about you, just leave" is the healthiest option for your stated goals.
>Can we make sure we're fighting the right battles?
This isn't an argument, this is my statement. I don't want my tools to be bloated.
If the entire idea of a web browser collapses overnight, I'll make do. But the last thing I'll be accused of is remaining silent and having this industry bewildered by how this apocalypse came upon them. They got feedback and ignored it, stopped competing on merits over trends, and lost sight of what people actually want. That's their choice, and they will reap what they sow.
Seriously, I don't get the problem. As far as I'm aware the only things that are opt-out are the smart tabs, which are 22.6MB and 57MB vector embedding models that can be easily removed.
So what bloat?
They said everything will be opt-in and that's cool with me. If they go back on that then yeah, grab the fucking pitchforks and I'll be right there with you, because fuck those lying bastards. But so far they haven't done that, so put the pitchfork down. All you're doing is being overzealous and attacking the greater enemy's (Chrome) enemy. You don't have to call Firefox your ally, but it's pretty clear that Chrome is significantly more misaligned with your wants than Firefox is. So stop attacking the best thing we've got right now.
I'm pretty confident this is happening because all the privacy maximalists are not actually privacy maximalists and would rather lick the fucking boot than take a knight whose armor isn't pure. Doesn't matter if that boot is orange and shaped like a lion, it's still the boot. I'm sorry the choices are slim. I really do wish it was better. But we're never going to get there if we kill the last thing standing against the monopoly. So yeah, pick the right fucking battle.
>I bet you're grateful for that person too even if you don't know it.
I'm grateful for actions, not words. I don't see much action here.
>Seriously, I don't get the problem.
Easy to be blind when you choose not to see. I don't have much to add to either the LLM or the Mozilla debate. We have plenty of literature on the issues with both.
You're not paying attention
Or
The lack of perfection, and existence of opposition, blinds you to such action.
It isn't words I expect you to be grateful for, it's actions.
While it's far from perfect and far from where I'd personally like things to be, there is progress being made on the privacy and digital rights fronts. We've won the right to repair. We've struck down strong efforts to kill net neutrality. Several states have stronger data rights for their citizens[0]! And we've done much more! It's an uphill battle but that doesn't mean it's one we're losing!
You're right! But I'm not the one making everything black and white. It's good to strive for perfection, but if you're waiting till it's achieved then you'll be waiting for eternity.
Worse, if you're hypercritical of progress towards the world we want, then we'll never get there. If you complain when a company does wrong and when a company makes a step in the right direction that's not big enough, then why should they ever listen to you? All you've told them is that there's nothing they can do that will make you[1] happy. So why should they listen? Why should they even try?
You play a dangerous game. One we already know how it plays out...
[0] https://iapp.org/resources/article/us-state-privacy-legislat...
[1] and everyone else like you that is not 100% aligned in your wants. No company can make all of you perfectly happy at the same time because you are not identical people.
I think the core of the issue is that Mozilla is thinking big. They're not happy to service a niche well (which the majority of the comments on Mozilla related posts is generally asking them to), they want to get back to their glory days, capture the mainstream.
And that is tough. Chrome won because it was an, at the time, superior product, AND because it had an insane marketing push. I remember how it was just everywhere. Every other installer pushed Chrome on you, as well as all the Google properties, it was all over the (tech) news, shaping new standards aggressively etc. Not something Mozilla can match.
But they just won't give up. I don't know if I should applaud that or not, but I think it's probably the core of the disconnect between Mozilla and the tech community. They desperately want to break into the mainstream again, their most vocal supporters want them to get a reality check on their ambitions.
If I was running Mozilla, I'd probably go for the niche. It's less glamorous, but servicing a niche is relatively easy, all you have to do is listen to users and focus on stuff they want and/or need. You generally get enough loyalty to be able to move a bit behind the curve, see what works for others first, then do your own version once it's clear how it'll be valuable to the user base. I'd give this strategy the highest chance of long term survival and impact.
Mainstream is way tougher. You kinda need to make all kinds of people with different needs happy enough, and get ahead of where those wants and needs are going.
One could argue they could do both: Serve a niche well with Firefox and try to reach the mainstream with other products. I think to some degree they've tried it, with mixed results.
The mainstream didn't get mainstream by striving to go with mainstream. They got there by serving a niche well and then expanding the niche. Trying to go mainstream without having a niche moat will make you lag behind the establishment endlessly.
I'm not an Apple fan (rather an Apple hater if you would), but they are a perfect example of this. First, have a top quality niche product, then go into the big waters with the vision you got from the niche - and then people will actually be willing to give up bells and whistles for a product that is good enough.
Mozilla have a well-established niche with a vision, but they can't monetize without giving up the vision they have (and apparently consider opening for small direct donations or maybe even direct bug/feature crowdsourcing not worth it). So they keep jumping on every sidetrack. And keep losing even the niche they have.
> 1. This is great and Mozilla is listening to it's core fans
It's not great. Great would be "we'll stop wasting money on extraneous features and we'll concentrate on making Firefox the best browser".
This is damage control.
> this will be a non-starter in 2 years.
Why though? Seriously.
Yeah, most of the browsers "with AI" are not existing because they're so incredibly useful. They're there because it's a hype, because their parent companies have invested billions and they need to show their shareholders it's actually being used by people. So they ram it in our faces, left right and center. They're not doing this to help us, they're helping themselves.
Mozilla doesn't need to play that game because they're not selling any AI.
We are still in the exploratory phase of what features are useful or not.
I could see describing images useful for blind or vision impaired people. Publishers often have a large back catalogue of documents where it is both impractical and too costly/time consuming to get all the images in those described with alt tags. This is one area where the publishers would be considering using AI.
Text-to-speech and speech recognition also fall under the category of AI and these have proven useful for blind/visually impaired people and for people with injuries that make it difficult to use a mouse and keyboard.
On the search side it would be interesting to see if running the user's query through an encoder and using that to help find documents would improve search results. This would work similarly to how current TF-IDF (Term Frequency, Inverse Document Frequency) techniques work.
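As a rough illustration of the idea (a toy sketch, not anything Mozilla has shipped; the documents and query below are invented), here is classic TF-IDF retrieval with cosine similarity. An encoder-based approach would keep the same scoring loop but swap the sparse TF-IDF vectors for dense embeddings from the model:

```python
import math
from collections import Counter

def build_index(docs):
    """Document frequency of each term over a tokenized corpus."""
    df = Counter(term for doc in docs for term in set(doc))
    return df, len(docs)

def tfidf(tokens, df, n):
    """Sparse TF-IDF vector as a dict; terms unseen in the corpus get no weight."""
    tf = Counter(tokens)
    return {t: (c / len(tokens)) * math.log(n / df[t])
            for t, c in tf.items() if df.get(t)}

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = ["how to translate a web page".split(),
        "fuzzy search inside a long document".split(),
        "recipes for sourdough bread".split()]
df, n = build_index(docs)
query = tfidf("search a long document".split(), df, n)
scores = [cosine(query, tfidf(d, df, n)) for d in docs]
best = scores.index(max(scores))
print(best)  # -> 1: the fuzzy-search document scores highest
```

The semantic-search version would replace `tfidf(...)` with an embedding lookup, which is what lets a query match documents that share meaning but no exact terms.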
Exploratory yes but when you're still exploring it's not a great idea to bet a whole company on something that might not pan out. Or to dump stuff on users with lots of promotion that may or not be actually useful.
And yeah, for accessibility a lot of this is hugely useful, but that's a very niche use case.
And yes AI-assisted search (or deep research, even better) is also something I like. But it's also because search engines themselves have become so enshittified and promote bad content first.
Do you ever need a website you're visiting translated?
Have you ever not understood a term or phrase on a website and had to go to wikipedia/urbandictionary/google to explain it?
Have you ever wanted to do a 'fuzzy search' of a 300 page document (where you don't know the exact string of text to ctrl-f, but want to see where they talk about a particular topic)?
>Do you ever need a website you're visiting translated?
Yes, I have an extension for that.
>Have you ever not understood a term or phrase on a website and had to go to wikipedia/urbandictionary/google to explain it?
I have an extension that double clicks and brings up a quick definition. If I need more, I will go to the dictionary.
>Have you ever wanted to do a 'fuzzy search' of a 300 page document (where you don't know the exact string of text to ctrl-f, but want to see where they talk about a particular topic)?
No, not really. Ctrl + F search for a dozen substrings, use table of contents if available, and I can narrow it down. This takes a few minutes.
And if I did, I'd find an extension. You see the pattern here? We solved this issue decades ago.
The pattern is that I seek out useful tools. I don't wait for them to be rained down and force fed to me. Just because meat is useful to me doesn't mean I want to be subscribed to a grocery store who will deliver meat every month. That's an overkill of resources for most consumers.
You literally have to download the language models if you want to enable translation. That's "opt-in" not "opt-out"...
> Do you ever...
There probably is a big difference between 'do you ever' and 'how often do you'.
I very rarely visit websites that I want translated; so rarely that I can tolerate google translate, or copying and pasting a section of a page into a tab with gemini; so on its own, this feature wouldn't sell me a browser. Besides, as a sibling comment says, even the current non-AI-enhanced browsers offer, sometimes too intrusively, to translate a page in a non-matching language. At least Chrome does this to me.
Your second scenario happens much more frequently; but again, it is so frictionless to type the term or a phrase in a search box in another tab that I never find myself wishing for a dedicated panel in the browser that could do this for me.
Your third scenario, for me, is finding something in api docs. Like, what's that command again to git cherry-pick a range of commits? So far, just googling this stuff or asking copilot / gemini in a separate tab has always worked. I am not sure I would be upset at a browser that didn't have an inbuilt tool for doing this.
What I do want from a web browser is evergreenness, the quickest and fullest adoption of all the web specs, and great web developer tools.
> Do you ever need a website you're visiting translated?
Yes. Firefox and Chrome already offer this.
> Have you ever not understood a term or phrase on a website and had to go to wikipedia/urbandictionary/google to explain it?
Yeah. And?
> Have you ever wanted to do a 'fuzzy search' of a 300 page document (where you don't know the exact string of text to ctrl-f, but want to see where they talk about a particular topic)?
No because I ctrl-f for that topic/key words and find the text.
These are incredibly poor AI sells...
>Yes. Firefox and Chrome already offer this.
Yes, both use machine learning methods to translate pages. You're already using AI and don't realize it.
Even if they didn't realize it, I don't believe they were arguing that firefox and chrome didn't/wouldn't use machine learning already, rather that they just thought the use cases you provided don't really sell the cost of having a full LLM integrated into every browser install.
This is exactly it.
There's no fucking way there's going to be a "full LLM integrated into every browser." You really think they're going to drop in a 20GB-200GB model with every browser? Mind you, Llama-8B is over 15GB.
Nah. So far they are doing about 50MB per language translation that you ask for[0]. You have to explicitly install languages to translate.
There's neither "a full blown LLM" (whatever that means) nor forcing AI onto you. You still have to download the language packs, they are just offering an extension that more seamlessly integrates with the browser.
And we know what they're building too! Go look in the "Labs" tab and you'll see an opt-in for testing a semantic search of your history. That doesn't take an LLM to do, that takes a vector embedding model. What next? Semantic search in page? How terrible of a feature! (But seriously, can we get regex search?)
[0] https://hacks.mozilla.org/2022/06/neural-machine-translation...
"AI" as it's used nowadays is unfortunately usually a shorthand for LLM. When firefox talks about "AI features", I think most people interpret that as "LLM integration", not the page-translation feature that's been around for ages.
LLMs are sequence-to-sequence models, just like language translation models; they were invented for the purpose of language modelling, and if you were making a translator today it would be structured like an LLM, but it might be small and specialized.
For practical purposes though I like being able to have a conversation with a language translator: if I was corresponding with somebody in German, French, Spanish, related European languages or Japanese I would expect to say:
and then get something that I can understand enough to say. And also run a reverse translation against a different model, see that it makes sense, etc. Or if I am reading a light novel I might be very interested in:

>Starting today, Google Translate uses advanced Gemini capabilities to better improve translations on phrases with more nuanced meanings like idioms, local expressions or slang.
https://blog.google/products/search/gemini-capabilities-tran... [Dec 12, 2025]
Firefox is local
I think it's simpler than that. AI is fast becoming synonymous with something being force fed and generally unwanted.
I mean a Firefox download is 150MB, not 16GB...
Plus, we know what Firefox is looking to do. In their labs tab they let you opt into trying out semantic search of your history. So that's a vector embedding model, not an LLM.
Edit:
Okay, they have "Shake to summarize". But that's a shortcut to Apple Intelligence. Nothing shipped with the browser. Similarly I don't understand how the chatbot window is so controversial. I̶'̶m̶ ̶n̶o̶t̶ ̶s̶u̶r̶e̶ ̶w̶h̶a̶t̶ ̶t̶h̶e̶ ̶s̶h̶o̶r̶t̶c̶u̶t̶ ̶i̶s̶ ̶o̶n̶ ̶l̶i̶n̶u̶x̶ ̶o̶r̶ ̶w̶i̶n̶d̶o̶w̶s̶,̶ ̶but are people really pressing <C-x> on a mac or ctrl+alt+x on linux/windows? It's not a LLM shipped with the browser, it is just a window split ("shortcut") to literally one of the most popular websites on the internet right now (ChatGPT is literally the #5 most visited website and you all think AI is unpopular?!). Adding shortcuts isn't shoving AI down your throat. Are they shoving Wikipedia down your throat because you can do "!w hacker news"? Give me a break guys
That’s different from an agentic browser in a few key ways.
Most importantly it’s far more difficult for a bad actor to abuse language translation features than agentic browser features.
Nowadays they call everything AI. Browsers have been translating websites for decades, from when AI was only a word you would see in science fiction movies.
AI is a broad term going back to 1955. It covers many different techniques, algorithms, and topics. The first AI chess programs (Deep Blue, et al.) were using tree search algorithms like alpha-beta pruning that are/were classified as AI techniques.
Machine translation is a research topic in AI because translating from one language to another is something humans are good at while computers traditionally are not.
More recently, the machine learning (ML) branch of AI has become synonymous with AI as have the various image models and LLMs built on different ML architectures.
Okay, what's the problem? The UX of Google Translate is fine
- it will pop up when it senses a webpage in a language you don't speak.
- it will ask if you want to translate it. You have options to always translate this language or to never do it.
- it will respect your choice and not pop up every time insisting "no, please try it this time". Or worse, decide by default to translate anyway behind my back.
- There are settings to also enable/disable this that will not arbitrarily reset whenever the app updates.
There are certainly environmental issues to address, but I've accepted that this US administration is not going to address them in any meaningful way. Attacking individuals will not solve this issue, so I'm not doing that. So for now, my main mantra is "don't bother me". The UX of much AI can't even clear that bar.
Alternatively: they’re already taking advantage of the AI features they like without at all needing “AI in the browser” and do realise it.
"See but what if instead of ctrl+F, we fired up a nvdia gpu in a data center instead .... "
I'll pass. Also for people that already know another language, the auto translate is annoying, inaccurate, and strips all of the humanity out of the writing.
>Do you ever need a website you're visiting translated?
This feature doesn't seem like it needs a "first class agent mode."
>Have you ever not understood a term or phrase on a website and had to go to wikipedia/urbandictionary/google to explain it?
I already have right-click for that the old-fashioned way. Not sure how an "AI mode" would make it meaningfully better.
>Have you ever wanted to do a 'fuzzy search' of a 300 page document (where you don't know the exact string of text to ctrl-f, but want to see where they talk about a particular topic)?
This feature is the most usefully novel of the bunch but again doesn't seem like it needs a "first-class-citizen agent mode."
I have a hunch that the "first-class-citizen AI features" that instead will be pushed on us will be the ones that help Google sell ads or pump up KPIs for investors; Firefox doesn't need to jump on that hype train today.
Agent mode feels more like "Let the agent mode place your food delivery order for you?" No thanks, I don't think that's actually gonna give me my first choice, or the cheapest option...
Because the future and market is certain, don’t you know?
> Without AI enabled features + agent mode being first class citizens, this will be a non-starter in 2 years.
The confidence with which people say these things...
s/AI/NFT and I've heard this exact sentence many times before.
NFT was always a meme and crypto has proven its staying power.
Gambling has also proven its staying power. A low trust society and some early coin explosions will do that. I don't think its staying power is here in a healthy way, personally.
Crypto has proven that it can bribe governments into pouring tax money into it. It still hasn't shown any use.
That's not a reason for crypto being useless; anything can bribe corrupt governments to pour tax money into it.
Crypto has shown people are willing to use it as a currency for investment and day-to-day transactions. It's held value for a significant amount of time. The tech is still evolving, and people see a lot of value in having a currency that operates outside of governments in a decentralized way, even if some people will misuse that freedom.
> day to day transactions
Where is this happening?
Between me and my drug dealer
Money laundering? Certainly.
Black market goods? Of course.
Avoiding taxation? Absolutely.
Day to day purchases? Not that I've seen.
Crypto is going to be a new settlement layer, that's it. You'll use Stripe and they will settle it on their public chain. You are free to use the chain directly, but no real consumer is going to do that.
NFT was a meme in "People are going to buy my jpeg"
But as a protocol it has legs and is still used under the hood in projects.
Cryptokitties was always the best monetisation use case for NFTs, and it's still going.
Hacker News was borderline insufferable during the 2022/23 NFT craze when all the startups, investments, and headlines were going into whatever new disruption NFTs/blockchain were allegedly going to cause.
At least with AI I do get some value out of asking Gemini questions. But I hardly need or want my web browser to be a chatbot interface.
The metaverse is clearly the future. Zuckerberg said so, after all.
Browsers without metaverse integration will be a non-starter.
Comparing LLMs to NFTs isn't fair. Being able to talk to your computer and have it understand you, and even do the things you ask, is literally Star Trek technology.
I've never seen a technology so advanced be so dismissed before.
I totally agree. It’s just going to become an expectation that AI is in the browser.
It’s so nice just to be able to ask the browser to summarize the page, or ask questions about a long article.
I know a lot of people on Hacker News are hostile to AI and like to imagine everybody hates it, but I personally find it very helpful.
>It’s just going to become an expectation that AI is in the browser.
Why? Is there evidence to back this up? Are there massive customer write in campaigns trying to convince browser companies to push more AI?
>I know a lot of people on Hacker News are hostile to AI and like to imagine everybody hates it, but I personally find it very helpful.
I love it. I love going to the AI place and knowingly consulting the AI for tasks I want the AI to perform. That relationship is healthy and responsible. It doesn't need to be in everything else. It's like those old jokes about how inventions are just <existing invention> + <digital clock>.
I don't need AI on the desktop, in Microsoft Office, replying to me on Facebook, responding to my Google searches AND doing shit in my browser. One of these would be too much, because I can just access the AI I want to speak to whenever I want it. Any two of these is substantial overkill. Why do we have all of them? Justify it. Is there a user story where a user was trying to complete a task but lacked 97%-accurate information from 5 different sources to complete it?
The evidence is the billions of people who copy text to and from ChatGPT to other web pages.
But not just other web pages, other apps too. So again, why a browser? Why ask Firefox's pop-out expansion panel instead of a right-click context menu in Word, or a chat interface in Copilot's standalone app, or a website chat interface, or the precious space under a Google search, or any of the other 10,000 places people are inserting this shit?
And a copy-paste task isn't necessarily going to be aided by a pop-out sidebar running a local LLM chewing up already-precious RAM. There's no guarantee it's going to integrate correctly with the user's chosen LLM provider.
We are looking at having LLMs inserted into almost every customer-facing application. At some point, they will want a subscription for each of them, or they are all going to need local resources. They are all going to have to be interoperable and run off the same account. Or you are going to have to have something that just works with the whole stack.
It doesn't make a lick of sense to try and preempt that situation, with mainline features pushed to all customers.
Google's approach, having a separate AI-enabled browser, makes the most sense. If it takes off, it's because of affirmative user consent, and they can merge it into Chrome. If it doesn't work, they can silently discontinue it like so many other things.
Google has a separate browser for AI features? Gemini has just shown up in Chrome for me.
It’s of course a mess and a mad rush for market share right now. That’s just a product of a healthy, competitive market. I agree that I’d rather have one AI service I pay for that integrates with all my apps.
AI integration in apps is about being able to feed in the context from the app into the model, like the web page or document. It’s much nicer to just ask the model about what you’re looking at directly rather than having to copy in context. I don’t have market research on this, but I do believe customers will expect it.
Someday hopefully the OS will allow apps to expose context and actions for a systemwide AI assistant. This is what Apple is trying to do with their Apple Intelligence for instance. If this works well, that’d be great.
Considering that pirating the whole internet and boiling the planet are required to summarize a single page in a mediocre manner, it's understandable that people who know how the sausage is made are against it.
We need some regulation on them for sure. They should be paying for the content they train on and use in their search results.
They’re still very compelling as a user.
> They’re still very compelling as a user.
Very not.
>but I personally find it very helpful.
Options are nice. They weren't (and potentially won't be) making it optional, and if people like me weren't "hostile to AI" they wouldn't have had to backtrack with this.
It is already optional in Firefox, this is just FUD
The FUD is the implication of making it opt-out, with reports that there are already other features that require changing settings/flags in order to "opt out".
It's doubt based on previous actions.
then you can install an extension.
I’m fine with an extension personally. And I don’t use Firefox to begin with, so I don’t particularly care what they do.
I just think the average browser user in 5-10 years will expect the AI features. And plenty of others won’t want to use those features, and that’s fine.
If I wanted the average browser, I would have stuck with Chrome, or Edge.
Why does the browser itself need AI features ?
You can still easily visit ChatGPT, Gemini, or whatever via the web.
It's obvious to you. But many people will think that Firefox doesn't support (accessing) AI unless that feature is prominently displayed.
Most people don't understand how the web works.
Lots of disagreement, but from necessity I (sort of) agree. Firefox foremost needs users. If it takes AI features to get them, so be it. However, Firefox cannot afford to lose its loyal user base, so they have to be optional.
Honestly if it were up to me, yes I'd love Firefox to stay in the niche, but they have to follow the market if they want to stay relevant. I just hope they can push more adoption.
I am highly sceptical of all AI features, and it seems Gartner and other cybersecurity experts are starting to wake up as well:
The programs let you outsource and automate tasks, such as online searches or writing an email, to an AI agent. The only problem is that these same AI capabilities can be tricked into executing malicious commands hidden in websites or emails, effectively turning the browser against the user.
https://www.pcmag.com/news/security-experts-warn-companies-t...
For now this is restricted to Perplexity Comet and OpenAI Atlas and only the UK has issued an official warning, but why would I, personally, want my Firefox browser with an opt-out risk instead of an opt-in?
Yeah, that's like having a browser without support for blockchain, the semantic web, or UML! No one would use it without these absolutely critical features!
I'm surprised your take is so controversial. This really is it - yes, the current core users are not interested in AI, but most people in our lives who are not techies do use it, and Firefox needs to win these users if it wants to stay relevant.
Of course, I have opinions on other ways it could make money instead of jumping on the latest hot thing (Pocket, Fakespot, VPN, etc.) without actually building out the ecosystem, but at least they are trying.
I'd love to live in your world for a bit... I can't imagine any future where having AI in your browser is a net positive for any user. It sounds like an absolute dystopian privacy and security nightmare.
Why?
Imagine you have an AI button. When you click it, the locally running LLM gets a copy of the web site in the context window, and you get to ask it a prompt, e.g. "summarize this".
Imagine the browser asks you at some point, whether you want to hear about new features. The buttons offered to you are "FUCK OFF AND NEVER, EVER BOTHER ME AGAIN", "Please show me a summary once a month", "Show timely, non-modal notifications at appropriate times".
Imagine you choose the second option, and at some point, it offers you a feature described as follows: "On search engine result pages and social media sites, use a local LLM to identify headlines, classify them as clickbait-or-not, and for clickbait headlines, automatically fetch the article in an incognito session, and add a small overlay with a non-clickbait version of the title". Would you enable it?
>Why?
Do we have to re-tread three years of big-tech overreach, scams, user hostility in nearly every common program, questionable utility backed by hype more than results, and the way it's propping up the US economy's otherwise stagnant/weakening GDP?
I don't really have much new to add here. I've hated this "launch in alpha" mentality for nearly a decade. Calling 2022 "alpha" is already a huge stretch.
>When you click it, the locally running LLM gets a copy of the web site in the context window, and you get to ask it a prompt, e.g. "summarize this".
Why is this valuable? I spent my entire childhood reading, and my college years being able to research and navigate technical documents. I don't value auto-summarizations. Proper writing should be able to do this in its opening paragraphs.
>Imagine the browser asks you at some point, whether you want to hear about new features. The buttons offered to you are "FUCK OFF AND NEVER, EVER BOTHER ME AGAIN", "Please show me a summary once a month", "Show timely, non-modal notifications at appropriate times"
Yes, this is my "good enough" compromise that most applications are failing to perform. Let's hope for the best.
>Imagine you choose the second option, and at some point, it offers you a feature described as follows: "On search engine result pages and social media sites, use a local LLM to identify headlines, classify them as clickbait-or-not, and for clickbait headlines, automatically fetch the article in an incognito session, and add a small overlay with a non-clickbait version of the title". Would you enable it?
No, probably not. I don't trust the powers behind such tools to be able to identify what is "clickbait" for me. Grok shows that these are not impartial tools, and news is the last thing I want to outsource sentiment on without a lot of built-up trust.
meanwhile, trust has only corroded this decade.
> Imagine you have an AI button. When you click it, the locally running LLM
sure, you can imagine Firefox integrating a locally-running LLM if you want.
but meanwhile, in the real world [0]:
> In the next three years, that means investing in AI that reflects the Mozilla Manifesto. It means diversifying revenue beyond search.
if they were going to implement your imagination of a local LLM, there's no reason they'd be talking about "revenue" from LLMs.
but with ChatGPT integrating ads, they absolutely can get revenue by directing users there, in the same way they get money from Google for putting Google's ads into Firefox users' eyeballs.
that's ultimately all this is. they're adding more ads to Firefox.
0: https://blog.mozilla.org/en/mozilla/leadership/mozillas-next...
not to mention the high resource usage of a local LLM, which most PCs wouldn't be able to handle, or which would just drain a laptop's battery.
All for searching something trivial, where in 99% of cases the already-indexed Wikipedia summary is good enough and way faster.
>Imagine you have an AI button. When you click it, the locally running LLM gets a copy of the web site in the context window, and you get to ask it a prompt, e.g. "summarize this".
but.. why? I can read the website myself. That's why I'm on the website.
People have a limited amount of time, so they may prefer spending it on something else than what a computer can do for them.
> When you click it, the locally running LLM gets a copy of the web site in the context window, and you get to ask it a prompt, e.g. "summarize this".
I'm also now imagining my GPU whirring into life with the accompanying sound of a jet plane getting ready for takeoff, as my battery suddenly starts draining visibly.
Local LLMs are a pipe dream; the technology fundamentally requires far too much computation for any true intelligence to ever make sense with current computing technologies.
Most laptops now ship with an NPU for handling these tasks, so it won't be getting computed on your GPU.
That doesn't mean anything, it's just a name change. They're the same kind of unit.
And whatever accelerator you put into it, you're not running Gemini 3 or GPT-5.1 on your laptop, not in any reasonable time frame.
Over the last few decades I've seen people make the same comment about spell checking, voice recognition, video encoding, 3D rendering, audio effects and many more.
I'm happy to say that LLM usage will only become properly integrated into background workflows when we have performant local models.
People are trying to madly monetise cloud LLMs before the inevitable rise of local only LLMs severely diminishes the market.
Time will tell, but right now we're not solving the problem of running LLMs by increasing efficiency, we're solving it by massive, unprecedented investments in compute power and just power. Companies definitely weren't building nuclear power stations to power their spell checkers or even 3D renderers. LLMs are unprecedented in this way.
True, but the usefulness of local models is actually getting better. I hope that the current unprecedented madness is a factor of the potential of cloud models, and not a dismissal of the possibility of local models. It's the biggest swing we've seen (with the possible exception of cloud computing vs local virtualisation) but that may be due to recognition of the previous market behaviour, and a desperate need to not miss out on the current boom.
Also it does mean something. An NPU is completely different from your 5070. Yes the 5070 has specific AI cores but it also has raster cores and other things not present in an NPU.
You don't need to run GPT-5.1 to summarize a webpage. Models are small and specialized for different tasks.
And all of that is irrelevant for the AI use case. The NPU is at best slightly more efficient than a GPU here, and mostly it's just cheaper, forgoing various parts of a GPU that are not useful for AI (and would not be used during inference anyway).
And the examples being given of why you'd want AI in your browser are all general text comprehension and conversational discussions about that text, applied to whatever I may be browsing. It doesn't really get less specialized than that.
No, NPUs are designed to be power efficient in ways GPU compute aren't.
You also don't need Gemini3 or GPT anything running locally.
Personally, I don't need AI in my browser at all. But if I did, why would I want to run a crappy model that can't think and hallucinates constantly, instead of using a better model that kinda thinks and doesn't hallucinate quite as often?
I generally agree with you, but you'd be surprised at what lower parameter models can accomplish.
I've got Nemo 3 running on an iGPU on a shitty laptop with SO-DIMM memory, and it's good enough for my tasks that I have no use for cloud models.
Similarly, Granite 4 based models are even smaller, just a couple of gigabytes and are capable of automation tasks, summarization, translation, research etc someone might want in a browser.
Both do chain of reasoning / "thinking", both are fast, and once NPU support lands in runtimes, they can be offloaded on to more efficient hardware.
They certainly aren't perfect, but at least in my experience, fuzzy accuracy / stochastic inaccuracy is good enough for some tasks.
That's the point. For things like summarizing a webpage or letting the user ask questions about it, not that much computation is required.
An 8B Ollama model installed on a middle of the road MacBook can do this effortlessly today without whirring. In several years, it will probably be all laptops.
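For what it's worth, the "summarize this" flow being described really is only a few lines against Ollama's local HTTP API. A rough sketch below - the model name and the three-sentence prompt are just illustrative placeholders, and it assumes an Ollama server already running on the default port with the model pulled:

```python
import json
import urllib.request

# Default local Ollama endpoint for one-shot (non-chat) generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_summary_request(page_text: str, model: str = "llama3:8b") -> dict:
    """Package extracted page text into a single-shot prompt payload."""
    return {
        "model": model,
        "prompt": ("Summarize the following web page "
                   f"in three sentences:\n\n{page_text}"),
        "stream": False,  # ask for one JSON response instead of a token stream
    }

def summarize(page_text: str) -> str:
    """POST the prompt to the local server and return the generated text."""
    payload = json.dumps(build_summary_request(page_text)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Whether an 8B model produces a summary you'd trust is a separate question, but the plumbing itself is trivial and fully offline.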
But why would you want to summarize a page? If I'm reading a blog, that means I want to read it, not just a condensed version that might miss the exact information I need for an insight, or that invents something that was never there.
You can also just skim it. It feels like LLM summarization boils down to an argument to substitute technology for media literacy.
Plus, the latency on current APIs is often on the order of seconds, on top of whatever the page load time is. We know from decades [0] of research that users don't wait seconds.
[0] https://research.google/blog/speed-matters/
It makes a big difference when the query runs in a sidebar without closing the tab, opening a new one, or otherwise distracting your attention.
> without closing the tab, opening a new one, or otherwise distracting your attention.
well, 2/3 is admirable in this day and age.
You don't use it to summarize pages (or at least I don't), but to help understand content within a page while minimizing distractions.
For example: I was browsing a Reddit thread a few hours ago and came upon a comment to the effect of "Bertrand Russell argued for a preemptive nuclear strike on the Soviets at the end of WWII." That seemed to conflict with my prior understanding of Bertrand Russell, to say the least. I figured the poster had confused Russell with von Neumann or Curtis LeMay or somebody, but I didn't want to blow off the comment entirely in case I'd missed something.
So I highlighted the comment, right-clicked, and selected "Explain this." Instead of having to spend several minutes or more going down various Google/Wikipedia rabbit holes in another tab or window, the sidebar immediately popped up with a more nuanced explanation of Russell's actual position (which was very poorly represented by the Reddit comment but not 100% out of line with it), complete with citations, along with further notes on how his views evolved over the next few years.
It goes without saying how useful this feature is when looking over a math-heavy paper. I sure wish it worked in Acrobat Reader. And I hope a bunch of ludds don't browbeat Mozilla into removing the feature or making it harder to use.
And this explanation is very likely to be entirely hallucinated, or worse, subtly wrong in ways that's not obvious if you're not already well versed in the subject. So if you care about the truth even a little bit, you then have to go and recheck everything it has "said".
Why waste time and energy on the lying machine in the first place? Just yesterday I asked "PhD-level intelligence" for a well known quote from a famous person because I wasn't able to find it quickly in wikiquotes.
It fabricated three different quotes in a row, none of them right. One of them was supposedly from a book that doesn't really exist.
So I resorted to a Google search and found what I needed in less time than it took to fight that thing.
> And this explanation is very likely to be entirely hallucinated, or worse, subtly wrong in ways that's not obvious if you're not already well versed in the subject. So if you care about the truth even a little bit, you then have to go and recheck everything it has "said".
It cited its sources, which is certainly more than you've done.
> Just yesterday I asked "PhD-level intelligence" for a well known quote from a famous person because I wasn't able to find it quickly in wikiquotes.
In my experience this means that you typed a poorly-formed question into the free instant version of ChatGPT, got an answer worthy of the effort you put into it, and drew a sweeping conclusion that you will now stand by for the next 2-3 years until cognitive dissonance finally catches up with you. But now I'm the one who's making stuff up, I guess.
Unless you've then read through those sources — and not asked the machine to summarize them again — I don't see how that changes anything.
Judging by your tone and several assumptions based on nothing I see that you're fully converted. No reason to keep talking past each other.
No, I'm not "fully converted." I reject the notion that you have to join one cult or the other when it comes to this stuff.
I think we've all seen plenty of hallucinated sources, no argument there. Source hallucination wasn't a problem 2-3 years ago simply because LLMs couldn't cite their sources at all. It was a massive problem 1-2 years ago because it happened all the freaking time. It is a much smaller problem today. It still happens too often, especially with the weaker models.
I'm personally pretty annoyed that no local model (at least that I can run on my own hardware) is anywhere near as hallucination-resistant as the major non-free, non-local frontier models.
In my example, no, I didn't bother confirming the Russell sources in detail, other than to check that they (a) existed and (b) weren't completely irrelevant. I had other stuff to do and don't actually care that much. The comment just struck me as weird, and now I'm better informed thanks to Firefox's AI feature. My takeaway wasn't "Russell wanted to nuke the Russians," but rather "Russell's positions on pacifism and aggression were more nuanced than I thought. Remember to look into this further when/if it comes up again." Where's the harm in that?
Can you share what you asked, and what model you were using? I like to collect benchmark questions that show where progress is and is not happening. If your question actually elicited such a crappy response from a leading-edge reasoning model, it sounds like a good one. But if you really did just issue a throwaway prompt to a free/instant model, then trust me, you got a very wrong impression of where the state of the art really is. The free ChatGPT is inexcusably bad. It was still miscounting the r's in "Strawberry" as late as 5.1.
> I'm personally pretty annoyed that no local model (at least that I can run on my own hardware) is anywhere near as hallucination-resistant as the major non-free, non-local frontier models.
And here you get back to my original point: to get good (or at least better) AI, you need complex and huge models, that can't realistically run locally.
You can just look down-thread at what people actually expect to do - certainly not (just) text summarization. And even for summarization, if you want it to work for any web page (history blog, cooking description, GitHub project, math paper, quantum computing breakthrough), and you want it accurate, you will certainly need way more than an 8B model. Add local image processing (since huge amounts of content are not understandable or summarizable if you can't interpret the images used in it), and you'll see that for a real 99% solution you need models that will not run locally even in very wild dreams.
Sure. Let's solve our memory crisis without triggering WW3 with China over Taiwan first, and maybe then we can talk about adding even more expensive silicon to increasingly expensive laptops.
That last one sounds like a lot of churn and resources for little result. You're not really making it sound compelling compared to just blocking clickbait sites with a normal extension. And it could also be an extension users install and configure - why a pop-up offering it to me, and why built into the browser directly?
For any mildly useful AI feature, there are hundreds of entirely dangerous ones. Either way I don't want the browser to have any AI features integrated, just like I don't want the OS to have them.
Especially since we know very well that they won't be locally running LLMs, everyone's plan is to siphon your data to their "cloud hybrid AI" to feed into the surveillance models (for ad personalization, and for selling to scammers, law enforcement and anyone else).
I'd prefer to have entirely separate and completely controlled and fire-walled solutions for any useful LLM scenarios.
> Imagine you have an AI button.
That pretty much sums up the problem: an "AI" button is about as useful to me as a "do stuff" button, or one of those red "that was easy" buttons they sell at Home Depot. Google translate has offered machine translation for 20+ years that is more or less adequate to understand text written in a language I don't read. Fine, add a button to do that. Mediocre page summaries? That can live in some submenu. "Agentic" things like booking flights for an upcoming trip? I would never trust an "AI" button to do that.
Machine learning can be useful for well-defined, low-consequence tasks. If you think an LLM is a robot butler, you're fundamentally misunderstanding what you're dealing with.
> The buttons offered to you are "FUCK OFF AND NEVER, EVER BOTHER ME AGAIN"
I've already hit that option before reading the other ones.
> "On search engine result pages and social media sites, use a local LLM to identify headlines, classify them as clickbait-or-not, and for clickbait headlines, automatically fetch the article in an incognito session, and add a small overlay with a non-clickbait version of the title"
Why would you bother fetching the clickbait at all? It's spam.
The main transformation I want out of a browser, the absolutely critical one, is the removal of advertising. I concede that AI might be decent at removing ads and all the overlay clutter that makes news sites unreadable; does anyone have the demo of "AI readability mode"? Crucially I do not want it changing any non-ad text found on the page.
> Imagine you have an AI button. When you click it, the locally running LLM gets a copy of the web site in the context window, and you get to ask it a prompt, e.g. "summarize this".
They basically already have this feature: https://support.mozilla.org/en-US/kb/use-link-previews-firef...
I like Firefox and don't think it's about to collapse like many users here, but I have already unchecked "Recommend features as you browse" and "Recommend extensions as you browse" along with setting the welcome page for updates to about:blank.
Ideally the user interface for any tool I use should never change unless I actively prompt it to change, and the only notifications I should get would be from my friends and family contacting me or calendars/alarms that I set myself.
Lots of imagining here.
I have already clicked the all-caps button
Most users are entirely ignorant of privacy and security and will make choices without considering it. I don’t say that to excuse it but it’s absolutely the reality.
I don't know. What if the AI can remove all junk from the page, clean it up, and only leave the content - sort of like ublock origin on steroids?
I'd pay a monthly subscription fee for this. All the service would need to do to get my money is guess which words that already exist on the page I will be interested in and show me those words in black-and-white type (in a face and a size chosen by me, not the owner of the web site) free of any CSS, styling or "innovative" manner of presentation.
Specifically, the AI does not generate text for me to read. All it does is decide which parts of the text that already exists on the page to show me. (It is allowed to interact with the web page to get past any modal windows or gates.)
haha, what if I told you that the currently existing, shipping product, "ChatGPT / Gemini uses a browser for you" will have more users than Firefox in two years? I will even bet you that will likely be the case in 2 months.
> any future
> any user
The absolute reactionary response to anything Mozilla does is quite something to watch. I've never seen another company held to the same standards.
If you read the Mozilla and Firefox related threads over the past week, you'd think Mozilla was the scourge of the internet, worse than DoubleClick in their heyday and worse than Google's hobbling of Chrome.
That said, the AI options for Firefox are opt-in. If you don't want them, don't use them. You are correct in that this is where software is heading, and AI integration is what users will expect going forward.
Just so everyone else knows, the complaining is by definition reactionary.
> In politics, a reactionary is a person who favors a return to a previous state of society which they believe possessed positive characteristics absent from contemporary society.
But I guess HackerNews is infamous for being conservative, so it's not too surprising.
> I've never seen another company held to the same standards.
The only "standard" expected from them is the same as any other for-profit company - "stick to your stated values and don't be duplicitous". For example, Apple, Meta, Microsoft are all lambasted here when they claim to "respect" user privacy and their products do the opposite.
Also, you should note that unlike these Big Tech companies that make multiple products and services, the company behind Firefox (and Thunderbird) makes only a few products and earns hundreds of millions of dollars in annual revenue from them (some here on HN say they now make more than half a billion dollars a year!). That's a lot of money. And yet, most of their products continue to be "shitty" (i.e. subpar). That's why they are losing their user base. Instead of really improving their core product, the company just continues to seek new avenues of revenue. That's the "MBA CEO mindset" that everyone here on HN usually complains about. Do you want a browser that's faster and light on resources, or a browser that displays even more ads to you right in the browser? (Guess which one Firefox prioritised?) Every user of Firefox can already use ChatGPT (or some other AI service) if they want to. The only reason to embed it into Firefox is to make extra money by violating user privacy (we all know AIs are now personal data harvesters), without adding any real value to the browser.
Now, consider the open-source philosophy they espouse. Again, with the hundreds of millions of dollars they have in hand, Gecko, the browser's rendering engine, is still not a truly modular piece of code that can be easily used in other projects. And that's by design (this is why most browsers that use the Firefox/Gecko codebase are just Firefox clones with superficial changes to the UI and config). If I remember right, Nokia spent considerable effort trying to reuse Gecko (make it modular?) - https://web.archive.org/web/20180830103541/http://blog.idemp... - and Sailfish OS now uses that fork in its mobile browser. (It was only when Mozilla feared it was losing the mobile browser war that it decided to offer Gecko as a hacky modular codebase, and only for the Android platform, to be used for webviews or to create other browsers. Similar options for desktop platforms still don't exist.)
Isn't all that a valid criticism, whether you are a capitalist or an opensource developer?
At this point they should just bring back Eich and go fully trad ;)
> Without AI enabled features + agent mode being first class citizens, this will be a non-starter in 2 years.
LOL