Fully disagree. I use zero so-called "AI" features in my day to day life. None. Not one. Why do I need them in my browser, and why does my browser need to focus on something that, several years into the hype wave, I still *do not use at all*? And it's not for a lack of trying, the results are just not what I need or want, and traditional browsing (and search engines, etc.) does do what I want.
I'd be elated if Firefox solely focused on "the pre-AI era", as you put it, and many other power users would, too. And I somehow doubt my non-techie family cares - if anything, they're tired of seeing the stupid sparkle icons crammed down their throats at every single corner of the world now.
There are many features in all your software that you never use. Their mere presence shouldn't be a problem. You should evaluate software by what it gives you and what harm it brings, not by what it gives others you don't care about.
And so far, we can assume that AI in Firefox will be like all the other stuff people don't care about: optional, a button here, a menu entry there, just waiting for interaction, but not harmful.
I agree, why support pushing the masses into another big tech machinery that just rips off their data and collectively makes it worse for all of us again? We are already way too cool with people frying their brains on X, TikTok, Instagram and whatnot. If anything, as devs, we should help people get back to focusing on their own lives over the monetization of attention spans. But this industry has no backbone and is constantly letting people down for a quick buck.
You never translate any websites?
AI tools are here to stay. They will start to creep into everything, everywhere, all the time. Either you recognize the moment at which it becomes a significant disadvantage not to use them (I agree that moment is not now), or get left behind.
The metaverse is here to stay! Blockchain is the future!
Without integrating metaverse and blockchain features into Firefox, Mozilla is at a significant disadvantage compared to other browsers. Don't get left behind!
They did actually jump on the metaverse with Firefox Reality and Mozilla Hubs. Neither was a bad product at all. Both are now cancelled, and they did basically nothing for Mozilla's market position.
Edit: so I mean I agree here in case that wasn't clear
Many things are "here to stay"; should Mozilla also implement a "share with TikTok" feature in their browser?
> or get left behind.
Last time I heard this phrase it was about VR, and before that it was NFTs. I wish the tech community weren't so susceptible to FOMO.
Indeed. I never understood, let alone bought into, the NFT hype, but I think VR is a good reference point for AI:
There was a real, genuine product in the Oculus Rift. It was an incremental improvement over the previous state of the art that enabled new consumer experiences at low cost.
The Metaverse was laughable, and VR got glued to a lot of things where it added zero value, or worse, negative value. For example, my attempts to watch pre-recorded 3D video gave me nausea because the camera can only rotate with my head movements, not translate.
Compare and contrast with AI:
LLMs and diffusion models are also real, genuine products: incremental improvements over the previous state of the art that enable new consumer experiences at low cost.
A lot of the attempts to integrate this AI have been laughable, and have added zero-to-negative value.
Non-corporate VR is actually doing some interesting things - but yeah, what Meta did with it was pure garbage.
I didn't mean it as VR being useless - I'm sure it can be useful for some applications or fun for gaming - my point was that you shouldn't fear getting left behind just for not having an Apple Vision Pro app or a land in the Metaverse :)
Another way to see this: Hammers can be useful, the Internet can be useful, but this doesn't mean that as a hammer manufacturer you should make your next hammer an IoT product ASAP or you will be left behind.
Well stated, agreed. :)
Just wanted to note that even after the bad publicity that companies like Meta (ugly avatars, unusable bland virtual spaces) or Apple (overpriced device with no software or content) have given to VR, some people tend to regard it as dead, even though there is quite a vibrant user and creator community doing incredible things (even just what people do in VRChat is amazing!). And there are even companies that seem to get it (Valve).
I don't think it's quite that simple. A great deal of work has nothing to do with computers, and even more human activity has nothing to do with economic advantage. The scope of your statement is a bit too broad in that regard but for computer based work I think you are a) more or less right but b) if you are right it's not clear how much economic benefit LLMs will actually provide on balance, long term.
Does it make the world a better place, and more prosperous? Does it just move economic activity around a bit in regards to who is doing what? We'll find out in ten years when the retrospective economic studies are done.
People wonder why there's a backlash when the pro-AI side sounds like the Borg.
I disagree, and I think the moment is now. Gemini 2.5 and now 3.0 are incredible. People who don't recognize that and don't use AI tools now are as silly as a craftsman who uses a hammer as a screwdriver when he has a screwdriver in his toolbox. A good craftsman uses the right tool for the job to save time and do a better job, and knows the limitations of each tool.
I can spend hours learning photoshop and then trying out color schemes for my new intricately detailed historic house or removing a car from the driveway or I can use Nano Banana and be done in a prompt. There is dignity in learning all that minutiae but I don’t care, I’m not a Photoshop artist, I just want the result and to just move on with my life and get the house painted.
AI crap has already been crammed into everything for months now, and nobody likes it or wants it. There is no proof that AI will continue to improve and no certainty that it will become a disadvantage not to use it. In fact, we are seeing the improvements slow down, and it looks like the models will plateau sooner rather than later.
> There is no proof that AI will continue to improve and no certainty that it will become a disadvantage not to use it. In fact, we are seeing the improvements slow down, and it looks like the models will plateau sooner rather than later.
While I expect the improvements to slow down and stop due to the money running out, there's definitely evidence that the models can keep improving until that point.
"Sooner or later", given the trend lines, is sill enough to do to SWEng what Wikipedia did to Encyclopædia Britannica: Still exists, but utterly changed.
> While I expect the improvements to slow down and stop, due to the money running out
This will certainly happen with the models that use weird proprietary licenses, which people only contribute to if they're being paid, but open ones can continue beyond that point.
The hyperscalers are buying enough compute and energy to distort the market, and enough money is being thrown at them to distort the US economy.
Open models, even if 100% of internet users joined an AI@HOME kind of project, don't have enough resources to match that.
You are right: machine learning models usually improve with more data and more parameters, and open models will never have enough resources to reach comparable quality.
However, this technology is not magic; it is still just statistics, and during inference it looks very much like run-of-the-mill curve fitting (just in a billion-parameter space). An inherent problem in regression analysis is that at some point you have too many parameters and you start fitting the random errors in your sample (overfitting). I think this puts an upper limit on the capabilities of LLMs, just as it does for the rest of our known statistical tools.
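The overfitting point can be shown with a toy curve-fitting sketch in plain NumPy. This is purely illustrative: the curve, noise level, and polynomial degrees are made up for the demo, and it is not a claim about how any LLM is actually trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying curve.
x_train = np.linspace(0, 1, 12)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, x_train.size)

# Held-out points from the true, noise-free curve.
x_test = np.linspace(0.02, 0.98, 50)
y_test = np.sin(2 * np.pi * x_test)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# A modest model vs. one parameter per data point.
small = np.polyfit(x_train, y_train, deg=3)
big = np.polyfit(x_train, y_train, deg=11)

# The big model fits the training noise almost perfectly...
print(mse(small, x_train, y_train), mse(big, x_train, y_train))
# ...but chases that noise and does worse on the held-out curve.
print(mse(small, x_test, y_test), mse(big, x_test, y_test))
```

More parameters always reduce training error, but past a point the extra capacity fits the sampling noise rather than the signal; that is the ceiling the comment is gesturing at.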
There is a way to prevent that, though: reduce the number of parameters and train a specialized model. I would actually argue that this is the future of AI algorithms, and that LLMs are kind of a dead end, with usefulness limited to entertainment (as a very expensive toy). Current applications of LLMs will be replaced by specialized models with fewer parameters, which are hence much, much cheaper to train and run. Specialized models predate LLMs, so we have a good idea of how an open source model fares in that market.
And it turns out, open source specialized models have proven themselves quite nicely. In Go we have KataGo, one of the best models on the market; similarly, in chess we have Stockfish.
> I use zero so-called "AI" features in my day to day life. None. Not one.
I know so many people who made that same argument, if you can call it that, about smartphones.
I recently listened to a podcast (probably The Verge) talking about how an author was suddenly getting more purchases from his personal website. He attributed it to AI chatbots giving his personal website as the best place to buy rather than Amazon, etc. An AI browser might be a way to take power away from all the big players.
> And it's not for a lack of trying, the results are just not what I need or want, and traditional browsing (and search engines, etc.) does do what I want.
I suspect I only Google for about 1/4 of things I used to (maybe less). Why search, wade through dubious results, etc when you can just instantly get the result you want in the format you want it?
While I am a techie and I do use Firefox -- that's not a growing niche. I think AI will become spectacularly better for non-techies because it can simply give them what they ask for. LLMs have solved the natural language query issue.
> I know so many people who made that same argument, if you can call it that, about smartphones.
Sure, but people also told me I'd be using crypto for everything now and (at least for me) it has faded into total obscurity.
The biggest difference for me is that nobody (the companies making things, the companies I worked for...) had to jam smartphones down my throat. It made my life better so I went out of my way to use it. If you took it away, I would be sad.
I haven't had that moment yet for any AI product / feature.
Any AI product I pay for is great. Any AI product I don't pay for is terrible.
> Any AI product I pay for is great. Any AI product I don't pay for is terrible.
This doesn't sound like the "free sample" model is working then? If I try the free version of product X and it's terrible, that will discourage me from ever trying the paid version.
I think half the people who think AI is incredibly dumb and can't understand why anyone uses it feel that way because they're using the free samples. This whole thing is so horribly expensive that companies lose money even on people who pay, so the free samples are necessarily as minimal as they can get away with.
The free samples did famously work to get people to try it initially, though.
But whenever that free Gemini text pops up in my search results, I see why people think it's stupid. That's not the experience I have with the paid options.
> Why ... wade through dubious results, etc when you can just instantly get the result you want in the format you want it?
Funnily enough, this is exactly how I justify Googling stuff instead of asking Gemini. Different strokes I guess!
> > I use zero so-called "AI" features in my day to day life. None. Not one.
> I know so many people who made that same argument, if you can call it that, about smartphones.
I had to use a ledger database at work for audit trails because they were the hot new thing. I think we were one of the few that actually used AWS QLDB.
The experience I've had with people submitting AI-generated code has been poor: poorly performing code, poor-quality code using deprecated methods and overly complex functionality, and then poor understanding of why the various models chose to do it that way.
I've not actually seen a selling point for me, and "because Google is enshittifying its searches" is pretty weak.
I've been posting recently about how I refactored a few different code bases with the help of AI. Faster code, higher quality code, overall smaller. AI is not a hammer, it's a lathe: incredibly powerful, but only if you understand exactly what you're doing; otherwise it will happily make a big mess.
if you have to understand exactly what you're doing, why not just... do it?
That question completely misunderstands what AI is for. Why would I do it myself when the AI did it in less time than I could, and mechanically, in a way that is arguably harder for a human to do? AI is surprisingly good at identifying all the edge cases.
i probably don't understand. the main thought i have re: llm coding is, why would i want to talk to an insipid, pandering chatbot instead of having fun writing code?
but, as an engineer, i have to say if it works for you and you're getting quality output, then go for it. it's just not for me.
It seems to me you're coming in with negative preconceptions (e.g. "insipid, pandering chatbot"). What part of coding is fun for you? What part is boring? Keep the fun bits and have the LLM do the boring bits.
> Faster code, higher quality code, overall smaller.
I'll have to take your word for it, I have yet to see a PR that used AI that wasn't slop.
> AI is not a hammer, it's a Lathe
I would liken it more to dynamite.
> I'll have to take your word for it, I have yet to see a PR that used AI that wasn't slop.
How would you know a non-slop PR didn't use AI?
Why would I accept slop out of the AI? I don't. So I don't have any.
I don't understand the disconnect here. Some people really want to be extremely negative about this pretty amazing technology while the rest of us have just incorporated it into our workflow.
> How would you know a non-slop PR didn't use AI?
I don't, hence why I have to take your word for it.
The PRs people have submitted where they either told me up front, or admitted to using AI after the review probed why their library usage was inconsistent, were not good and required substantial rework.
Yes, some people may have submitted PRs that used AI and were good. If so, they haven't told me. I would have hoped that people advocating for it would have either told me, or had me review the code, let me say it was good, and then revealed it was a test the AI passed. So far that hasn't happened, so I'm not convinced it's a regular occurrence.
Maybe the problem with understanding the benefits of AI is that you are relying on other people to use AI properly. As the direct user myself, I don't have that problem.
I'm using it to make things better rather than just producing. Even just putting it in agent mode and saying "look at all my code and tell me where I can make it better" is an interesting exercise. Some suggestions I take, some I don't.
> Why search, wade through dubious results, etc when you can just instantly get the result you want in the format you want it?
For one, searching lets you see when the source is dubious; Gemini gives it to you cleaned up. And then you still have to dig through the sources to confirm that what it gave you is correct and not hallucinated.