I think it is more externally driven as well: a prisoner's dilemma.
I don't want to keep crapping out questionable features, but if competitors keep doing it, the customer wants it -- even though infrastructure and bug fixes would actually make their life better.
Last time I saw results of a survey on this, it found that for most consumers AI features are a deciding factor in their purchasing decisions. That is, if they are looking at two options and one sports AI features and the other doesn’t, they will pick the one that doesn’t.
It’s possible AI just seems more popular than it is because it’s easy to hear the people who are talking about it but harder to hear the people who aren’t.
I think this may have been Dell?
Dell reveals people don't care about AI in PCs (https://www.techradar.com/computing/windows-laptops/dell-rev...)
Consumers are nice, but far more important are the big corporate purchases. There may be a lot of people there too who don't want AI, but they all depend on decisions made at the top, and AI seems to be the way to go -- because of expectations, and also because of the aforementioned prisoner's dilemma: if competitors gain an advantage, it is bad for your org; if all fail together, it is manageable.
>It’s possible AI just seems more popular than it is because it’s easy to hear the people who are talking about it but harder to hear the people who aren’t.
I think it's because there's a financial motivation for all the toxic positivity that can be seen all over the internet. A lot of people put large quantities of money into AI-related stocks, and to them any criticism is a direct attack on their wealth. It's no different from cryptobros who put their kids' entire college fund into some failed and useless project and now need that project to succeed or else it's all over.
I’m not sure that really explains how people get onto hype trains like this in the first place, though. I doubt many people intentionally stake their livelihoods on a solution in search of a problem.
My guess is that it’s more of a recency bias sort of thing: it’s quite easy to assume that a newer way of solving a problem is superior to existing ways simply because it’s new. And also, of course, newfangled things naturally attract investment capital because everyone implicitly knows it’s hard to sell someone a thing they already have and don’t need more of.
It’s not just tech. For example, many people in the USA believe that the ease of getting new drugs approved by the FDA is a reason why the US’s health care system is superior to others, and want to make it even easier to get drugs approved. But research indicates the opposite: within a drug class, newer drugs tend to be less effective and have worse side effects than older ones. But new drugs are definitely much more expensive because their period of government-granted monopoly hasn’t expired yet. And so, contrary to what recency bias leads us to believe, this more conservative approach to drug approval is actually one of the reasons why other countries have better health care outcomes at lower cost.
Currently if someone posts here (or in similar forums elsewhere) there is a convention that they should disclose if they comment on a story related to where they work. It would be nice if the same convention existed for anyone who had more than say, ten thousand dollars directly invested in a company/technology (outside of index funds/pensions/etc).
A browser plugin that showed the stock portfolios of the HN commenter (and article-flagger) next to each post would be absolutely amazing, and would probably not surprise us even a little.
The perception may be that anything AI related will be obsolete in months. So why pay to have it built into a laptop?
I doubt obsolescence anticipation has anything to do with it. That’s how tech enthusiasts think, but most people think more in terms of, “Is this useful to me?” And if it’s doing a useful thing now then it should still be doing that useful thing next year as long as nobody fucks with it.
I would guess it’s more just consumer fatigue, for two reasons. First, AI’s still at the “all bark and no bite” phase of the hype cycle, and most people don’t enjoy trying a bunch of things just to figure out if they work as advertised. Where early adopters think of that as play time, typical consumers see it as wasted time. Second, and perhaps even worse, they have learned that they can’t trust that a product will still be doing that useful thing in the future, because the tech enthusiasts who make these products can’t resist the urge to keep fucking with it.
That’s because so much of the experience with AI is complete crap and useless.