One fascinating thing about the whole AI phenomenon is how incredibly hostile it is to _standards_. Whether something works properly, or is ethical, or is true, no longer matters at all; all that matters is "pls use our AI".
Microsoft spent literal decades rehabilitating their reputation. And then set fire to the whole thing in an offering to their robot gods.
And it's not just them. There was a time that Google cared deeply about UX. Now, on macOS Google remaps CMD-G in Google Docs to launch some LLM bullshit (EDIT: huh, they may have fixed this; it was definitely doing it a couple of weeks ago), because, after all, it has only had a standard universal meaning on macOS for about three decades, no big deal.
It's a complete takeover of technically incompetent management that feels like it can finally execute their ideas to the fullest instead of relying on those pesky swengs with their obstructions, complaints and problems. We'll soon get the management utopia everywhere.
Principal engineer balks at bad UX when the PM should know better (it's their job)
2023: Ah well I guess we can't do it
2025: you're fired. Hey kid we hired two weeks ago, implement bad idea please
To be fair, it was already done by bad managers long before.
I saw a trend of UX/UI designers coming in with practices which I knew were wrong. But they insisted. E.g. hijacking browser native controls.
Will never know whether they were passing along some manager/PM commandments or were just incompetent.
> But they insisted. E.g. hijacking browser native controls.
[Rant-Example] The goshdarn ticketing system hijacks alt-f, so that instead of opening the File menu of my browser, it toggles the favorite-status of whatever ticket I happen to be viewing.
A mistake was made early on in even letting web apps see keystrokes like that. In a better world, modifier keys would have been used in a principled way from the start: only the window manager gets to see meta-anything, only the shell or GUI app gets to see control-anything, and web apps can work with alt-anything.
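A hypothetical sketch of that routing rule in TypeScript (the function and layer names are made up for illustration; no real browser or window manager implements this):

```typescript
// Each modifier is owned by exactly one layer of the stack, so a web app
// can never shadow a shortcut that belongs to the OS or the native app.
type Layer = "window-manager" | "shell-or-gui-app" | "web-app";
type Modifier = "meta" | "control" | "alt";

function routeShortcut(modifier: Modifier): Layer {
  switch (modifier) {
    case "meta":
      return "window-manager"; // meta-anything never reaches applications
    case "control":
      return "shell-or-gui-app"; // control-anything stops at the native app
    case "alt":
      return "web-app"; // alt-anything is all that web content may bind
  }
}
```

Under a scheme like this the ticketing system could still bind alt-f, but meta shortcuts like CMD-G and the browser's own menus would be out of its reach.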
Have you tried creating a ticket complaining about this?
Press alt f to pay respect with the "respect my AI.thority" subscription
To be fair, the native browser controls have had too many quirks and features for UX/UI consistency.
Corporate needs their Brand™ look precisely as specified in their expensive Style Guide. IBM wouldn't want the Google vibes of Android Material Design TextFields, I imagine.
Scratch beneath the visuals, and starker technical differences appear.
Safari on iOS had (still has?) a 350ms debounce delay on every tap/click, in case you wanted to do a multitouch gesture.
JavaScript (frameworks) was the only way this arbitrary delay to user input could be reduced before 2015, when Apple finally released a native API for this.
https://webkit.org/blog/5610/more-responsive-tapping-on-ios/
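The native API that post describes is the CSS `touch-action` property: a page can opt its controls out of double-tap-to-zoom so WebKit can deliver clicks without waiting. A minimal sketch based on that blog post, not a complete fast-tap recipe:

```css
/* Tell WebKit these elements never need double-tap-to-zoom,
   so a single tap can fire its click event immediately. */
button, a {
  touch-action: manipulation;
}
```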
> To be fair, the native browser controls have had too many quirks and features for UX/UI consistency.
Well, too many to have a single website be consistent across browsers.
But as a user I'm using one specific browser, and I expect all websites to be consistent for that browser.
With some resistance. Now they do it far more often.
2026: you're fired. Hey Claude, implement bad idea please
That is a great idea, very inspirational!
Do you want me to implement another bad idea, too?
That's how I got my first opportunity 20 years ago
Don't hate the player hate the game I guess
:'D
It wasn’t AI that brought us Apple’s gray on slightly-lighter-gray UI standards, nor the 10,000,000 ••• menus that have infested every webapp in the past 10 years as an alternative to thoughtful UI design. We humans made everything shitty before we made AI.
> Apple’s gray on slightly-lighter-gray UI standards
It's a tangential point, but I turned on System Settings -> Accessibility -> Display -> Increase Contrast (the on/off option, not Display Contrast) and now at least the windows are outlined sharply.
The "Differentiate wihout color" is one I like. All of the on/off sliders now have a 1 or 0 to indicate on/off
OMG this is wonderful! Thank you.
A lot of people who think of themselves as able-bodied never think to poke around in the Accessibility sections of their settings menus. But it turns out that accessibility options are for everyone; people should really think of and evaluate them as first-class tools more often.
Or, are we just getting older and these things suddenly matter?
Nah, one of the things I found in Discord's accessibility settings is an ability to turn off or reduce animations and other visual effects by default, which is wonderful no matter your ability.
Possibly a factor, but I also think these issues are becoming much more widespread, leaving us less able to tolerate them than when they were less common.
A button looking like a button isn't an age (of the reader) thing.
Of course it is. What should a button on a screen look like? After all, it has absolutely nothing to do with the large mechanical buttons from the 80s that the old designs tried to emulate. In fact, such buttons are becoming rare even in the physical world; the younger generation is more and more accustomed to touch buttons for operating all kinds of machinery around them. So "like a button" is very much an age thing.
Looking like a "touch button" is still looking like a button. Some indication that an element is tappable is still useful.
They really should just have a single checkbox, "Prioritise usability over wank", and leave it at that.
Good thing we trained our fortune teller calculators on all that historic shittiness!
Maybe, but at least the 10,000,000 options were there, instead of it being deemed that they are not to be used by those pesky users. And now they are not just hidden. They are simply not there.
Guns and bombs also didn't create war. But they did make it way more lethal.
It makes perfect sense. There was that talk by ex-Google CEO Eric Schmidt saying something along the lines of "imagine you could develop the software, but without that arrogant programmer". They just hate people, that's all.
Some time ago my then-project owner remarked that possibly in the future apps won't require a UI and people will just interrogate the LLM directly.
I read that as a sign to make a coordinated exit.
Truth be told our project was one of many "catalogue of stuff" kind of apps which at this and projected scale could have well been a spreadsheet in the cloud with search enhanced by LLM.
100%
This AI boom is not a boom because its good for developers or users. It's a boom because it's a management dream; the promise of pumping up growth while reducing expensive workforce is simply too good for them to not throw decades of platitudes and "best practices" out the window. When people point out where AI fails, they're not seeing past the end of their nose. They don't realize they're not the real customers. It is leadership with millions in buying power who are the customers, and they're the same ones who only ever cared about managing the perception of success and growth; your clean code and user-focused development practices didn't matter to them back then and they certainly don't matter to them at all now. When it comes to an absolute state of garbage products and software, we still ain't seen nothin' yet.
To be fair, most of our industry is so stupendously bad at executing that you can keep growth and save costs by simply laying people off. No AI required.
That is true. AI reveals a festering problem.
Bring on the feature creep and epic down time
On the other hand, no one to place the blame on if management does it themselves.
The recent cases of companies who deleted their prod DBs while using LLMs are blaming “the rogue AI”. So it seems you can just blame AI lab companies and folks roll with it. Even better, they asked it to generate its own apology, no need to spend time trying to explain to your customers why everything is gone
That's definitely not true.
There's always people for management to blame. That's the great part of being management.
By definition, there's someone/thing you're managing that you can pass the blame onto.
Perennial HN trope: all bad tech evolutions are management's fault. Engineers are flawless paragons of technical purity.
Hard to blame the engineer when the engineer gets fired for not implementing management's whims. As much as I'd like to hold people accountable and say they should just accept getting fired instead of compromising the ideals, the truth is I've got a family now and if they paid me enough I'd do the same.
The torment nexus was built by engineers. Not management.
It couldn't exist without engineers.
> The torment nexus was built by engineers. Not management.
Before the more recent wave of successful tech startups (say, from 2010 on), a very large number of programmers were incredibly sensitive to anything related to topics like (the possibility of) surveillance, privacy, authorities (including government), centralized infrastructures, DRM, etc.
My feeling is that the only reason this mindset shifted is that from this wave on, in the USA, programmers were showered in money.
The interesting question rather is: now that tech companies want to become more frugal with respect to paying programmers, will the mindset among programmers shift back or not?
Only because murdering your project manager for terrible ideas is illegal
And engineers couldn't get rich themselves without the billionaires shelling out for them to build their torment nexuses.
I want to get rich too. I want to live a good life, and provide for my family. I don't want to just survive. So I can't say I don't empathize.
That's fine, just know that you permanently forfeit any right to complain about others doing things for personal gain that indirectly harm yourself.
> I want to live a good life, and provide for my family.
This is a lie you're telling yourself, you can do both just fine without building the torment nexus. Billions of people do so indeed.
> I want to get rich too.
You should've stopped here, but then it became too much so you had to resort to appending that nonsense. It's pure greed at the cost of everyone else, that's all. Simple lack of morals, impaired empathy and remorse.
> you can do both just fine without building the torment nexus
Doubt. You don't become truly wealthy without doing what sociopathic CEOs do on a daily basis. Society actively rewards that stuff, and it's only getting worse with time.
> Simple lack of morals, impaired empathy and remorse.
Sounds like a winning strategy to me. That's the exact sort of person this world rewards.
Things are not looking good out there. Billions of people get by without compromising? Billions of people live in poverty too. Not something I'm looking forward to dealing with, should the great AI replacement ever come knocking on my door.
Which would be fine if the only two choices were build the torment nexus or starve. But it's not the only source of income out there.
Yeah, maybe you won't "starve"... But will you live? Or will you merely survive? If that?
It's not looking too good out there. We've got trillionaires bragging to people's faces about how they're all going to be replaced by their AIs. It got to the point someone threw a molotov into one CEO's home.
Source of income? The promise of AI is to literally make all humans economically redundant. In a capitalist world, what is the point of keeping economically useless people alive? People who do nothing but cost society money? Why not turn them all into soylent instead?
If we don't create a post-scarcity society now, I'm not sure we ever will. Choices aren't looking too good out there.
Right, workers build the world. We should run it. Actually. Why does management get to tell us what to do without elections?
Workers are necessary but not sufficient for most businesses. You also need capital. This can be provided by the workers, and is for many worker-owned businesses, but when the business is very capital intensive that's just not feasible.
Are workers going to be able to fund Apple's factories or ExxonMobil's oil exploration? No, so they're not in charge.
You absolutely can start a worker owned business right now, or go work for one.
Of course there are shitty engineers, but they aren't allowed to do anything without shitty management.
Remind me who makes the final decisions in these scenarios. Also, how do boots taste?
Aren't you guys glad there are no programmers gatekeeping programming with their "morals" and "etiquette"? Any marketer with an LLM can update the programming tool now. AI really levels the playing field and it's time for pesky programmers to get off their high horse, don't you think? :)
Come off it. Sure some of them had "morals" but a decent chunk of them just lacked the imagination or connections to monetize their lack of morals.
after 2+ years of non-engineers vibecoding applications, show me one startup/app without devs.
> Microsoft spent literal decades rehabilitating their reputation. And then set fire to the whole thing in an offering to their robot gods.
Probably they thought the new generations forgot about how awful they were in the not so distant past.
I think they set it all on fire because greed got the better of them again.
> greed
Is a greed/not-greed scale really useful to discuss company behaviors?
I wanted to say I get what you mean, but even thinking about the company I root for the most, I can't think of a point where they're not driven by their desire to make a lot more money.
If your point is that there's good and bad ways to seek money, I'm not sure it's properly encompassed by "greed", which I interpret as the intensity of a desire, not its nature or validity.
To you "greed" might mean something else, but is it properly conveyed ?
maybe long term vs. short term is the key idea. apple, for example, could rake in bountiful measures in the short term if they ventured away from their boutique-electronic-consumer-goods niche. in the long run it would hurt their bottom line to do so
Approximately everybody would like more money.
Greedy people put the desire for more money above the welfare of the business, themselves, and others. Greedy people literally put their desire for more personal wealth above the very lives of others.
Greed/not greed is a very fair way of putting it. One can operate a business that requires profit without wanting to destroy everyone and everything that stands in the way of more money.
I think there's one more factor that is crucially important — greedy people lack long-term vision, and care a lot more about money now than they do about potentially much more money in the future.
I suppose it's kind of interesting that you could measure greed as an unusually high discount rate for the time value of money?
> Approximately everybody would like more money.
For me (and many others), money is a means to an end. I don’t want money per se, I want housing and food and things that money can buy.
But for a few, money is the goal. They want money for the sake of more money. They don’t need more. That’s greed.
> Greedy people put the desire for more money above the welfare of the business
In my experience, it's much simpler.
People are greedy if they make things I want cost more.
The Seven Deadly Sins provide an interesting perspective to human psychology even in modern times. Greed / avarice is defined as wanting more than you need.
> Probably they thought the new generations forgot about how awful they were in the not so distant past.
More likely, never learned about it in the first place, save a few whispers. Who's got time to go digging in deep, when there's 'experiments to run, research to be done' ...
> I think they set it all on fire because greed got the better of them again.
new blood, new greed
Whoever at Microsoft is making these decisions and overseeing all this: yeeeesh.
Isn't that just like.. what Microsoft has always been? Browser wars, Tay, bad behavior around open source software.. This is how they roll. They're being their best selves.
The difference
(Previously) Microsoft EVP: "Dumb decision" -> org executes
(Now) Microsoft PM: "Dumb decision related to AI" -> team immediately executes
So they've pushed bad decision making down the hierarchy?
That's a good point, but literally every company I know of is doing that rn. They're still doing it in a distinctly Microsofty way.
Tay turned out poorly, but it's a strange inclusion. It was simply a research project that failed.
Thank you for this. I completely agree. Microsoft has always been awful, and they likely always will be. However, they did strike gold a handful of times, and they are just reliable enough to feed enterprises.
Apple, Oracle, Adobe, Google, IBM, Microsoft, etc... All the established players have their own distinct flavor of awful. This incident is just a very on-brand flavor for Microsoft.
AI psychosis. Divide between rich and poor. They live in their own golden bubbles and there are no sanity checks. The workers are so far removed from the realm of competence and influence that it's just CEOs and VPs trying to pump the next 6 months' stock value regardless of anything.
It's like the zeitgeist has decided the only thing that matters is their own farts and how they dont smell.
>There was a time that Google cared deeply about UX. Now, on macOS Google remaps CMD-G in Google Docs to launch some LLM bullshit
That reminds me of a few years ago when Android phones replaced the behavior of "long press sleep/power button" from "shut down" to "ask AI about what's in your screen". Perhaps a manager got promoted somewhere for "raising AI usage" in Android phones.
The industry spent decades preaching to us about power savings, with the Microsoft settings application lecturing us about power saving and the update app scheduling updates for renewables peaks, only to... waste gigawatts by forcing us to have Copilot everywhere.
If Microsoft were consistent, which it isn't, power saving mode would disable AI features.
They asked developers to help them improve windows battery life on laptops, competing against chromebooks and macbooks.
The AI gigawatts are all in data centers.
They never cared for the environment (in this way, at least).
Windows still asks you to reduce the refresh rate of your monitor from 240Hz to 60Hz in order to save the environment.
Mine doesn't.
I must have literally missed that. When did Microsoft ever encourage energy saving? Is this related to power saving for extending laptop battery runtime? But then I don't get the link to renewable energy.
Anyway, I agree with the notion of the extreme energy-inefficiency of LLMs. The scale of it makes it hard to imagine any less efficient product will ever be invented.
They literally have a green leaf next to power saving options. Also, there's an option in Windows Update to time updates for when the grid is mostly renewables.
Apparently since Windows 11, Sun Valley 2 update ("22H2"?) back around 3/2022.
https://www.pcgamer.com/windows-11-update-will-help-your-pc-...
https://blogs.windows.com/windowsexperience/2024/04/22/reduc...
I don't think anyone at Microsoft truly understands how much they have ruined their reputation. This won't be fixed again by open-sourcing a few tools. Fool me once, etc.
I will fight against any Microsoft tooling being used at every company until I die. This is unforgivable.
When I've been working on stuff that requires an SSO login, I noticed that it makes what I considered hostile anti-user choices, defaulting to tracking pieces of information I didn't want to track and hadn't mentioned.
Fair enough that I didn't instruct it explicitly to make more pro-user choices; it just seemed to treat slurping as much information as possible into the backend as a default intention. Wasted a few more tokens iterating on it to remove things, but it was IMO interesting enough that I finally submitted feedback about what I imagine is an interesting training problem.
If you're using Claude, try /grill-me before getting it to start working on things.
Has always been the case. Corporations hate standards and would rather lock you in except where market forces prevent them. It was a miracle we have something like the internet - and the government had to create it.
Microsoft's decade-long PR rehabilitation has worked wonders for them.
> Microsoft spent literal decades rehabilitating their reputation. And then set fire to the whole thing in an offering to their robot gods.
When did this happen?
When they started embracing and using Linux; WSL is pretty good. But it doesn't completely wash out its past.
Dotnet core was also a move in that direction with large portions being open source.
> Microsoft spent literal decades rehabilitating their reputation.
Mmm... I think I missed that part.
Not everyone bought it, but they campaigned hard...and now see it was all just a dog and pony show. The hold-outs were right...
Not really. A company is not one monolithic entity with a single will. Far more plausible than "it was all a trick" is that for a time, people were in charge who really were trying to improve things, and now, those people have been replaced with others who are willing to burn it all down.
Before 2010 or so, “serious” internet developers wouldn’t touch Microsoft stuff — Microsoft was for office memos and poorly structured spreadsheets and that was it.
So yeah, Azure being a real option at the highest levels of internet-scale operations is a turnaround from where they were.
That’s not an accurate take. Microsoft has had a monopoly on the PC desktop OS. Anyone writing applications for users was targeting Windows and using Microsoft. To call most of these developers “not serious” is quite an overstatement. This includes all PC game developers, DAW, CAD, Adobe…?
Azure expanded the Microsoft franchise, and provides another prong to their whole integration story just like cloud AD services and online Office 365 provide another way to stay integrated into their ecosystem.
Yeah, they needed to work on their image somewhat, but their image never negatively impacted them
> Anyone writing applications for users was targeting Windows and using Microsoft.
Developers as users, sure. MSFT was common. Developers as responsible for infrastructure, MSFT anything was considered a huge risk and unreliable in the 90s.
Granted, my memory retains only a general narrative...I remember a shift by 2002ish when I started to see windows servers as perfectly fine machines for closet/under-the-table infra you didn't care too much about anyway. By 2004 they were moving out of the closet, so to speak. Then those machines became more important because more was being done with them and were considered "just as good" as any other OS. Developers that had experience, with their MSFT certs in hand, were cheaper too. It was a slow progression to eat into the corporate marketshare. By 2006 virtual machines were ubiquitous and you could run MSFT virtualized. Many companies do that by default today for workspace controls. I have never and would never choose to use MSFT products (including Azure) for business critical infra. MSFT acquiring Github was great for them, and the death of it for me. I'm probably an old outlier, but I 'member.
> PC game developers, DAW, CAD, Adobe
Right, those are all desktop applications. Microsoft has long owned that market.
I said “internet developers” meaning web sites, servers, apps, etc. Microsoft’s early offerings in that space, plus all the pain they inflicted with Internet Explorer, is what took years to overcome.
As an MS dev at the time: MS missed The Web and Mobile, thinking Office would be enough. Everything since is catchup.
On the one hand MS was a web pioneer — asynchronous web calls and ActiveX technologies that were surprisingly capable — but these were peripheral to their main goals.
Instead of MS extending their unified development platform outwards, something .Net promised to enable, effectively the opposite happened. .Net chased Java, but Java was being pushed out by Ruby on Rails. .Net web starts chasing RoR, but then Node is getting cool. .Net Web starts chasing Node and that effort splits .Net into uhhhhh ‘Framework’ uhhh ‘standard’ (ie Old-and-working), and .Net Core (what a container based web stack VM needs to look like).
The problem at that point, IMO/IME, is that Node is JavaScript, and those awesome server-side geniuses dump too-easy tooling while recreating every problem of every stack ever (i.e. LeftPad, loosey-goosey versioning, and NPM being a crypto hacker's wet dream). The .Net that started as Enterprise Server Stuff is now kinda sorta ‘Whatever’ about versioning, stability, roadmaps, and platform planning. Everything from DataAccess to GUI was churned needlessly for almost a decade, and everyone using that platform looks and feels like an a-hole because huge swaths of MS tech is abandonware, resulting in perpetual rewrites of recent-term work and silos of competence.
No one can explain what framework to use to write a basic windows application anymore… Office uses React, and Windows does too… the fat cats who made MS into M$ knew better than that, the M$ who chased cloud growth and cut staff for stock price has never cared.
They went from demonizing open source software to buying GitHub, releasing their own open source software (including VSCode), and hosting Linux on Azure. Huge changes! But of course it ends up being another Embrace and Extend move by the masters of that tactic
Hackernews used to experience a collective paroxysm of joy every time a new Visual Studio Code dropped. There definitely was a pervasive belief that the Nadella era ushered in a cuddly new Microsoft.
I remember a time, way back, around 2010 maybe?, where Microsoft was referred to as "M$" in this place and generally perceived as an evil corporation o.O
Most likely more a difference of venue. I saw lots of that on Slashdot. Less of it on Digg or Reddit. Virtually none of it here, but it seems to be making a resurgence in the form of "Macroslop" and related epithets
Lol, yep! That actually goes way back before 2010. It probably started in the early 90's, at least
Yup, and windows was generally called 'mustdie' back then.
Both things can be true. VSCode did help us get to the point where I can use it on Linux, MacOS, or Windows and have a lot of interoperability. It's the typical cycle. All it takes is a couple people to get their hands on managing the code to turn anything into garbage.
This was later—into their We ❤ Open Source era. M$ and stuff dates from like the mid-late 90s. In the late 2010s was when they started publicly acknowledging that open source exists, acquiring GitHub, and releasing things like .NET Core and Visual Studio Code, and a lot of people in the open source camp did a "pointing soyjaks" and forgot that the Halloween Documents existed and that EEEing open source was already in their playbook.
Remember “Microsoft <3 Linux”
I tried my hardest to block that out of my memory. Everyone knew their fingers were crossed behind their backs.
I think it's true though. They don't care about Windows anymore, that's plain as day. Most of their software is now cross-platform. Who cares about Windows if you are selling Azure instead and people can run Linux on that?
Gmail on the web is so shitty, I literally switched over to another provider. I don't know how anyone can use them as their webmail client. You can't make sense of longer mail threads with forwards, answers, etc. in between - it becomes an unreadable hot mess.
Would you tell us which provider/client you switched to?
They invested billions. They're scared.
> They invested billions. They're scared.
They could have shipped a good product with all those billions they spent in reinventing Clippy.
I have this feeling that their bet was that all the Microsoft shops will jump on Copilot without looking at alternatives, so they did not really have to make it as good as their competition.
"good" is not important for software anymore, at least in the regular consumer market. Companies have discovered that people will just continue to accept subpar, unfinished and sometimes even partially-functioning software.
"accept" is such a weird word for this, though I don't know of a better one in English.
What we seem to be experiencing is a combination of monopoly power/abuse, and regulatory/government/court capture to keep it in place.
if internet comments are any kind of indication (which they very well may not be) I've seen lots of people complaining about win11 but remaining because they can't give up playing their favorite online hero shooter. That's acceptance to me
"tolerate" would be the better word to describe it
Agree that acceptance is irrelevant. No one has a choice, because all the “competitors” in any given niche (phone, cloud platform, PC operating system) are executing the same play. Enshittify, extract profit from ~suckers~ customers, ignore any churn because with the limited choices available there will be new suckers to replace them.
We accept this the same way we accept the air quality wherever we are.
Yes, Linux is there, but consider the barriers to the average person of truly adopting a strict Free Software life. Consider how many things in life now simply demand for you to have an Android or iOS phone. Things as simple as parking.
Well, now no one has to convince anyone to shell out for upgrades because everything is a subscription. What worked perfectly well can now get replaced out from under you overnight
Making good products simply no longer seems to be on the agenda for most of these companies.
Making good products was never Microsoft's MO. Even during the peak of the Nadella era, the good bits were side shows. Microsoft Office and Windows have always been things that succeed primarily via network effects/lock-in.
> They could have shipped a good product with all those billions they spent in reinventing Clippy.
I really liked Copilot - it gave you a lot of tokens across a bunch of models and their agentic features were perfectly serviceable, alongside it being really affordable! And then they moved over to usage based billing and it no longer has that advantage over the alternatives: https://github.blog/news-insights/company-news/github-copilo...
I still think they have a really good AI tab autocomplete implementation and it's nice to be able to use that in VSC without swapping to another editor altogether... but that's not enough to really make me pay for their subscription. I could probably move to Zed altogether if I had a problem with VSC itself, though at least the base editor doesn't feel like it has been enshittified and I quite like it, all things considered.
Microsoft continues to make billions in profit despite its spending on AI, because it has a diversified business that generates revenue. I don't get why they would be "scared"? It's basically a calibrated risk at that level.
Good products are not profitable enough. Not that good products aren't profitable at all, but if it doesn't make disgusting amounts of money this quarter it's not worth considering at all.
We've reached the phase of "infinite shareholder growth" where physics says no, and that is so unacceptable that we'd rather burn down the entire global economy than accept less than exponential growth. It isn't that growth is impossible either, there just can't be enough growth. Break-even is apparently a fate worse than death
The formulas used for asset valuations blow up when growth turns negative.
> They could have shipped a good product with all those billions
They did. It's called Azure: https://www.geekwire.com/2026/microsoft-tops-wall-street-exp...
Not sure "good product" and "Azure" really belong in the same sentence.
Have you read this?
https://isolveproblems.substack.com/p/how-microsoft-vaporize...
I know a few people who worked on Azure’s FedRAMP ATOs, and “good” is not a word I’ve ever heard them use.
That's largely a product of work in the 2010s. What's their next Azure? Clippy on steroids probably won't cut it.
Their next Azure is the same as the next App Store and the next YouTube; they are services, you just keep operating them while they're in the green.
Microsoft's B2C reputation is undeniably burnt, but their B2B mindshare is unshakable.
The cloud gets used because execs have already got a Microsoft contract (not to mention the fun licensing problem).
Good thing they are holding the economy at gunpoint.
And they aren't the only ones! The bubble might be reaching its size limits.
They invested billions. They can exit in 6 months if this thing stays afloat.
I don't think it's fear; it's greed.
> There was a time that Google cared deeply about UX.
I’m sure Google cares very much about UX as a funnel into their ad brokerage, but was there some time when they cared about it in the user’s interest?
Maybe that magical moment when the results page showed the results first?
> And it's not just them. There was a time that Google cared deeply about UX
Are we talking about the same Google? They still haven't fixed Android gesture navigation after almost a decade.
The thing that annoys me the most (to use polite language) is that product design went out the window with the AI craze. You could probably ship actual products that actual people would want to use, but instead everyone wants to turn everything into a chatbot, as if chatbots are the pinnacle of user interface, the crabs of software, the purpose, goal, and telos of technology. It drives me nuts.
A text input field for entering your command line(s), with a text log for the output, does indeed seem to be the crabs of software. Usually with some abstractions that allow you to write longer scripts[1] and just refer to them by a short name or alias, and compose those scripts together from your command prompt.
You could say it's the terminal[2] user interface.
[1]: https://www.merriam-webster.com/dictionary/script
[2]: https://www.merriam-webster.com/dictionary/terminal
While this is very pithy, we need to acknowledge and remember that there's a gulf of difference between normal terminal interfaces and command line interfaces, and whatever the chatbots are doing.
Yes, both have a prompt where you type text to do things and get text back, but the type of text you write in one is very different than what you'd write in another. Prose versus commands and so on. Oh, and normal terminals don't waste electricity and water in amounts approaching small countries.
The only question is "number go up?": will this result in more money from investors or not?
It's even worse in my eyes: they don't even offer a model they themselves maintain.
The entire selling point is "you no longer have to conform to standards in input to get usable output"; why would they conform to standards in output, or in process?
Yeah, even .NET is now plagued with AI: see the AI dashboard in Aspire, AI components in Blazor, the .NET Upgrade Assistant now being an AI agent, ...
VSCode hasn't yet been rebranded into VS CoPilot by pure luck.
> And then set fire to the whole thing in an offering to their robot gods.
It's the bourgeoisie dream: A means of production that also does the labor 24/7 and can't complain, infinitely spawnable. Theoretical slavery+, so of course they're throwing everything into the furnace for it.
These next few years are the real turning point. If they are right about AI and robotic workforces, then it's checkmate: they don't need us anymore, and we're next for the furnace. If they're wrong... well, I don't know... Will there be any consequences? Maybe a few people lose a few percent of their net worth.
The AI tool providers need companies and customers to pay for the tools and automation. If all the white collar jobs in the Western world are replaced by AI or AI-generated SAAS products, some 60% of the workforce suddenly won't have jobs. If such a large percentage of the workforce has no income through employment, who will be able to pay for the services from SAAS providers, and thus ultimately the AI providers?
The tradesmen working on my house renovations aren't consuming SAAS products during their day jobs.
The white collar workforce can't rapidly switch to blue collar jobs.
So for these companies to remain viable, they need the white collar workers to still somehow end up with enough money to pay for services that ultimately the companies provide.
Maybe the turning point will be a recognition that companies can't only focus on maximising shareholder value. They also need to consider their role in maintaining and improving the societies they operate in.
There will always be jobs for private security, firefighters, and utility repairmen to protect / restore the data centers when people inevitably attack them.
There will be a period of rapid change. If we are lucky, the political class will see and adjust policy quickly. Otherwise we will see US urban areas gutted like the Rust Belt was after NAFTA / WTO. They are making the same mistakes but in a different industry.
Why will there always be these jobs, if the technofascists are right? They're creating enslaved sentience. Even the class traitor police want a union, fight for more pay.
What's uniquely un-automate-able about those jobs in their dream future?
Never underestimate the capabilities of a desperate human.
I don't think you understood my question.
Google will definitely lose. LLMs supplant search, but not the old document search, which they stopped doing long ago.
Add in the fact that open weight models are 6-12 months behind frontier models means AI companies aren’t building a moat, they’re on a treadmill. And treadmills don’t justify the valuations OR the hype.
AI companies are in trouble.
I see one profitable enterprise for AI that involves spying on everyone, managing their lives (or otherwise) tightly, automating foreign conquests and needing to make only the top decisions while delegating everything else, like a king. I can see a group or one could say a class of people that would happily invest in such future.
Exactly. I keep saying, AI is not useful to us. There will be no AI companies.
Even for this supposedly profitable enterprise, the people involved are absolutely too moronic to be able to control the thing they are trying to invent; it will just be a matter of time before it turns around and eliminates them as well...
Not all AI companies are the same.
Some are piling on masses of debt to build capacity (eg. Oracle). Others are just reinvesting the profits from the rest of their company (eg. Google, Meta).
Anthropic’s moat is their best tool, Claude Code.
OpenAI’s moat is the brand of ChatGPT, once the fastest growing app in the history of the world.
It’s possible that open weight models keep pace, but it’s also possible that the investment to train them becomes prohibitively expensive and open weight models cease to keep pace with the large foundation model companies.
I really don't think open models will lose. I think they are cheaper to train because they have to be more efficient than the monstrosities we have now.
There is no theory that says the current frontier models cannot exist in models with 1/100th the compute waste ;). When we start trending in that direction, and oh wow we truly are, there will be no reason for these services. You could run them on your own hardware without serious investments.
The moat OpenAI and Anthropic have is that they, among others, have attempted to buy up all of the compute hardware for the next two years. That's intentional. They know the only existential threat to them is anyone coming up with a way to do this better than them. It's already happened, and it's going to become more and more divergent.
I’m interested in learning more about your theory that these models can be trained more cheaply. Is anyone doing it from scratch, rather than adversarial distillation?
It is a lot cheaper to train a 27b model such as qwen3.6, which you can even vibe code or agentic code with, than it is to train a 1t+ parameter model. It runs on a single commodity GPU, for goodness' sake.
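A rough back-of-envelope supports this, using the common C ≈ 6·N·D approximation for dense-transformer training FLOPs (N parameters, D training tokens; the equal token budget below is a hypothetical, not either model's actual training recipe):

```typescript
// Approximate training compute for a dense transformer: C ≈ 6 * N * D FLOPs,
// where N = parameter count and D = number of training tokens.
function trainFlops(paramsBillions: number, tokensTrillions: number): number {
  return 6 * (paramsBillions * 1e9) * (tokensTrillions * 1e12);
}

// At equal token budgets, compute scales linearly with parameter count,
// so a 27B model is roughly 37x cheaper to train than a 1000B (1T) one.
const ratio = trainFlops(1000, 15) / trainFlops(27, 15);
```

The gap widens further once you count the serving side, since inference cost also scales with active parameters.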
It's not a theory. These smaller models that are coming out are huge advances for the field.
I can't comment on companies training practices. That would be proprietary stuff I guess. I think the claims that the advances being made are due to distillation alone are completely unfair. The advances alone are not just data.
It almost doesn’t matter if it’s trained using adversarial distillation - if it’s nearly as good, and one-hundredth the cost, the choice is obvious.
Open weight models will keep pace because capable open-weight models are China's strategy for preventing a closed takeover of AI by the West.
US megatechs stole copyrighted data to train their hyper-expensive models.
Chinese megatechs stole copyrighted data AND trained their models on derivative / synthetic data that came from the US foundation models.
I’m happy Chinese foundation model trainers were able to use Huawei (homegrown) hardware to train their models (also because having Nvidia dominate that sector is terrible for competition), but if Chinese megatech companies are just deriving their open weights models from US companies, then this is just an IP theft exercise.
One of the double-edged swords I see is that devs/evangelists pushing agentic coding are leaning on the 'good enough' argument. If that is true, and those asking for software can live with good-enough AI code, then the moment free local models hit that level, the party is over for the continual push to the premium tip-of-the-spear models.
We might already be there. I've been running Qwen-3.6-27B with 8-bit quantization locally with llama.cpp (~100k context window), and to be honest, for my use case it is more usable than claude-code 40-50% of the time. I only have the $20/mo plan, so I often hit rate limits after 2-3 prompts. And while the local model is slower, it just keeps chugging, is practically free, and more often than not produces code similar to Claude's. I wouldn't be surprised if in 6-12 months we have local models comparable to opus 4.6... which I would personally consider the tipping point where agentic coding becomes practical.
What does their patent moat look like?
Google owns the core transformer patent(s), for one thing, e.g. https://patents.google.com/patent/US10452978B2/en.
I haven't read the claims, so I don't know how easy it will be to work around them. This particular one seems to cover encoder-decoder networks, so it's not necessarily applicable to later LLM implementations. But I'd be amazed if Google didn't have several other relevant patents in their arsenal.
I guess if they are wrong, the world economy crashes and burns again, because they wasted all these shiny dollars on infra build-out. It's lose-lose.
Initially I assumed that when the bubble burst, some VCs would go bust, Oracle would go bust, a few hyperscalers would take a significant haircut but carry on, and life would pretty much go on. However there's now sufficient dodgy AI-related debt making its way onto the debt markets that the bubble burst could be a lot messier, and it may be more than a few percent.
Wouldn't mind a repeat of 2008, if it means that Oracle goes out of business.
> Maybe a few people lose a few percent of their net worth.
The entire US economy rides on this now, so it'll be more than a few people and a lot more than a few percent.
A few percent of your net worth, when you're sitting on top of a pile of gold like a dragon on a yacht is one thing, but when you're a retiree, and you're on a fixed income, living off the proceeds from an annuity and a reverse mortgage, and inflation in all its forms is eating into the plan you had, and you don't have any backup, yes there will be consequences!
LOL.
Robotics isn't even 1% of the way to replacing anything.
Consider why every neat demo is a backflip and not washing the dishes or laying bricks or something.
People (well, American people (disclosure: I am an American)) used to be scared/worried that Silicon Valley would eventually move to Bangalore or Shenzhen, because of wage discrepancies and so on -- and it is not a totally unreasonable concern, considering that the _Silicon_ part of Silicon Valley has slowly relocated to Taipei, Seoul, Tokyo, and a few others. At this point, maybe we should start pushing for the _rest_ of Silicon Valley to get relocated somewhere else, too.
It's a breeding ground for Edisons and Morgans, not Teslas. It is profoundly depressing that SV is doing everything it can (knowingly or unknowingly, not sure which is worse) to get the entire planet to stop taking it seriously and to shun it.
No country would want them.
If you have worked in Silicon Valley you know that Bangalore and Shenzhen came here ;)
In all seriousness, the silicon is still designed in Silicon Valley but maybe you don't hear about that as much? Broadcom, Qualcomm, Intel, Samsung, AMD, Nvidia, etc. all have a huge presence there still.
I meant the actual fabrication of silicon ;)
Just to emphasize my point, China is not being deprived of chip _designs_, but rather of the actual physical machines that rearrange the atoms (via export bans on ASML-made lithography equipment).
BUT it is a trap: https://arxiv.org/html/2603.20617v1
One thing's for sure: I won't be buying any SaaS, streaming, or ordering from Amazon if I have no future prospects for work. I already stopped most of my subscriptions because of a layoff unrelated to AI.
We buy food and go for walks as entertainment. It's been refreshing but also obviously scary.
Didn’t get the “scary” part. I also keep my entertainment to the minimum dependencies possible. I try to rely on stuff I own: music cds, iso videogames + emulators, physical books or ebooks (thanks Anna), exercise outdoors… ditching streaming like netflix/youtube, buying crap on amazon, uber, etc
Scary = “if I have no future prospects for work”
It’s the combination of AI changing the workplace, the large techs shedding double digit headcount, recruiting / hiring departments being so broken by the AI arms race hitting job applications, and the macro business environment generally being on the downward slope at the moment.
Scary part is not having a job right now that's all. It's not scary walking around getting more vitamin d
An automation tax solves all the problems? Seriously? The tax would go to retraining programs, according to the linked paper, so that workers can be reabsorbed into the workforce. The undiscussed conditio sine qua non: the economy has room for additional workforce, and the government, as the distributor of said tax, has implemented sufficient legislation to ensure the tax actually goes to these programs and not another pointless war, agricultural subsidies, or tax relief for the rich.
This paper proposes a solution for which the framework/base is missing.
This feels like the same mechanism as climate change. The actors don't care, since they're not completely responsible for the outcome and benefit from ignoring it.
Turns out it's not infinitely spawnable after all.
There's a lot of flaws with their fantasy world, that's not even the most prominent one.
Not that surprising when you consider the monumental investments. It's heinous but right in line with modern corporate business ethics.
Claude code not supporting specifying an alternate location to look for agent skills is another example.
Sent from iPhone
Wait, when did they rehabilitate their reputation? Before AI they were already shoving crap down our throats through windows 11.
Microsoft was making a big PR push to show everyone how they loved open source for a while.
What do you mean, there are many, perhaps too many, AI standards. MCP, SKILLS.md, A2A, two different ACPs, ECA.
This particular change feels... human driven.
The pile of money they set on fire is still burning and they are desperate to get returns before it burns out
AI is the ultimate grifting tool, grifters gonna grift.
5 years ago it was blockchain & NFTs.
The same hypers just moved to a different technology.
In my circles it literally was the same people. Instead of trying to get me to buy ETH they started talking only via LLMs. Unsurprisingly we aren't in touch anymore... Maybe they are happier with their chatbots, I'll never know that's for sure
I'm intensely curious, since you know they're grifters, why are they in your circles? I guess maybe you don't mean circles the way I'm thinking and more the whims of algorithms?
No I mean social circles
Because I am too nice, and even though every conversation had an element of grift, there was still a conversation. Most of them are lost, or struggling with their identity. Yes, there's some greed, but half of them just want to fit in somewhere, and they aren't technical geniuses despite loving technology. I like people like that, of course without the grift.
That said, we don't keep in touch anymore. I do miss them though. I'm something like an abused dog that has seen too many things in their life not to look past all the ugliness and see someone's inside. I hang around a lot of hurt people because I want them to have a safe person they can come to if they choose to heal.
Wow that's personal. I should stop posting here and go find some new friends.
Thanks for sharing.
People get sucked into all sorts of schemes or ideas.
I never said grifters but a fair share of my social circle pumped crypto’s/nft’s when they bought some(small amounts but whatever).
The same people just can't shut up about AI/LLMs. I don't care that your LLM helped you generate an Outlook email-address export tool when a quick Google reveals Outlook can export the email addresses natively with just a few clicks.
All of the "carbon credit" guys I know are now all in on AI with zero sense of self awareness.
> All of the "carbon credit" guys I know are now all in on AI with zero sense of self awareness.
Some people made a lot of money off of those platforms. Everything was a nice story, but once you dug just a wee bit... smoke and mirrors.
There were definitely honest people trying to make a difference but they were unfortunately _vastly_ overshadowed by grifters.
Yep, 25 years ago it was the web. And remember the great electricity grift 100 years ago. And horseless carriage grifters like Ford!
Yeah, you probably said web3 was going to change the web too.
Don't stick your head in the sand just cause one fad didn't play out.
I’m not, I’m presently underwhelmed by the examples everyone shows.
I’m yet to see actual productivity result from people paying to talk to chatbots to generate boilerplate.
But I tend to shy away from hypers, so the LLM craze is passing me by. I have seen uses of AI/ML that help recognise objects in images, which I have seen it do OK at (and it should, because it's the same image just 10m down the road). A human then reviews the outputs. It also spits out highly inaccurate outputs fairly often, so the human is necessary even with a feedback loop.
See how fast so many of the crypto and NFT/Web 3 lot shifted to AI, like rats on a sinking ship.
I think VCs saw crypto and dreamt of being able to create the same amount of irrational value. AI has the same technical complexity, the same "you can't easily explain it in a single sentence" energy, but unlike crypto and NFTs, enough actual utility to not seem completely illegitimate. It literally is the perfect hype grift tool. Crypto has survived almost 20 years off of nonsense; how long can this crap last? Sigh.
If you still think crypto and AI are nonsense, then I guess you will carry these beliefs the rest of your life, but these beliefs won't outlive you, as they have no relation to reality.
I said AI has utility but drives irrational levels of investment. Crypto has little utility besides a place to gamble, con credulous people and otherwise act as a really shitty store of wealth.
Most modern crypto projects barely bother to promise to do anything useful let alone achieve anything useful, which the overwhelming majority do not.
These aren't beliefs but statements of fact.
Those things you listed have lots of utility. Gambling is one of the most lucrative industries to be in.
That's very uncharitable. Crypto has been extremely useful for all sorts of grifters and enabled separating fools from their money at true web scale.
Indeed, it would be difficult for Iran to receive payment for passage through the Strait of Hormuz without crypto, or for North Korea's ransomware economy to be so lucrative.
[flagged]
Ok. Well have fun. Bye.
Who is investing in NFTs today?
Who is building their company using permission-less blockchain as the database? The average person still uses a bank checking account, not replacing it with a crypto account.
I haven’t heard of any progress on tokens in the Governance direction.
Stablecoins without a public audit trail have so far stayed relevant, but there are several which are suspiciously reminiscent of the mistakes that SBF made.
We all see the transfer of funds and the ostensible store of wealth when it comes to buying influence or presidential pardons. Those of us not wearing crypto-colored glasses don’t see the promise that VCs sold us on the industry 5-10 years ago.
I never spoke about NFTs nor do I have to speak about them, not today and not ever, so save your bait. It's in the same way that you didn't speak about bank bailouts, so I won't bait you into it.
Most people obviously use multiple accounts of different types. Those who have crypto wallets will never reveal them to you in the interest of their privacy.
Stablecoin firms make so much cash via interest that they're easily over-capitalized.
If you're foolish enough to be manipulated by VC interests, that's your own fault. I would focus on the tech, not on what VCs want you to believe. This applies generally, irrespective of the sector. I don't know why this is hard to understand.
NFTs are stupid. But I have a feeling as governments default on their debt and economies collapse in the next few decades cryptocurrencies will be of increasing importance.
Cryptocurrencies are now useless, considering how OpenAI and similar companies have enough compute to hijack them, and the AI thing might not work out at all…
I want the drugs you’re on :D
Thanks for your deep and clearly thought out reply.
That's 100% nonsense.
Thanks for putting the time to articulate why the AI GPUs cannot possibly be used to fork the blockchain and obtain the majority.
> all that matters is "pls use our AI".
If you look at the staggering amounts of money that have been put into the tech, this attitude becomes practically mandatory, in an inhuman sense. They have to get ROI, at literally any cost. And it shows.
> Microsoft spent literal decades rehabilitating their reputation.
TRYING to rehabilitate. Only fools fell for it.
> Microsoft spent literal decades rehabilitating their reputation.
"Decades" is a stretch. There was a brief window around the Windows 7/8 era and then, like a dog returning to his vomit, they returned to their user-hostile bullshit. Windows 11 is the culmination of that, but Windows 10 was plenty bad. Remember how Windows 10 made Solitaire a subscription service? Sticking copilot into everything is just more of the same.
> Microsoft spent literal decades rehabilitating their reputation
Which literal 20+ year period was that?
What did Command+G do in OS X? Online results are saying it "advances to the next search result after doing find". In other OSes, that's just the Enter key, if I am understanding the context correctly.
On macOS it advances to the next search result _even if the search widget is not currently open_.
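The reason a web app can remap it at all is that page scripts observe keydown events before the browser's default action runs, and may cancel it. A minimal sketch of that mechanism, using a simplified event shape (an assumption for illustration, not the full DOM KeyboardEvent):

```typescript
// Simplified stand-in for a DOM KeyboardEvent.
type KeyEvent = { metaKey: boolean; key: string; defaultPrevented: boolean };

// What a hijacking page script effectively does in its keydown listener:
// match the shortcut and cancel the browser's default handling of it.
function hijackCmdG(e: KeyEvent): KeyEvent {
  if (e.metaKey && e.key.toLowerCase() === "g") {
    // real-page equivalent: e.preventDefault(); native Find Next never fires
    return { ...e, defaultPrevented: true };
  }
  return e;
}
```

In a real page this lives in a `document.addEventListener("keydown", ...)` handler; once `preventDefault()` is called, the site's own feature runs instead of the native Find Next.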
> There was a time that Google cared deeply about UX
Have we been using the same Google?
Their search homepage was supposed to be minimal. I was at a tech talk given by Google sometime around 2012, and they said that their ad service was not under any circumstances allowed to slow down the page load: if the ads didn't return before the page was ready, the page was rendered without ads.
Chrome had so many great UX choices originally, such as tabs all staying the same size when you were closing them so that you could close multiple easily, only resizing after a second or two (that stopped working around a year ago). Hell, there are even rumours that Chrome is called Chrome because it was a polished UX.
Their original products were so smooth compared to what was there before. Search compared to altavista, mail compared to Hotmail, both compared to Yahoo!. I really don't know where your perspective comes from. GCP?
If I remember correctly, chrome:// used to have special meaning in Firefox (and probably well before that), and was used to tweak UI settings. I always assumed this was where Google took the name from.
Chrome is a now-somewhat-archaic term for GUI (or specifically the actual elements of the GUI, not the concept), and Netscape/Mozilla did use the term a lot. Google claims that their browser is called Chrome because of an association with fast cars (presumably Google was keen to market it to extremely old people, chrome not having been a particularly big thing in cars for a very long time).
> Google claims that their browser is called Chrome because of an association with fast cars
FWIW, before Google Chrome, Firefox was originally Firebird (changed for name collision reasons), and Mozilla had broken off the rest of the Netscape-ish "communications suite" into Thunderbird, both arguably named after cars.
Besides the use of "chrome" by Netscape/Mozilla that you mention, roughly around that time I heard it used by HCI people to refer to flashy GUI design that existed for cosmetics rather than function, and specifically to changes in a particular Mac OS version.
I wonder whether Netscape/Mozilla jokingly then used it as a term for the GUI toolkit "trim" around the browser page. Given that this was a transition to the important stuff being on the Web page, rather than your computer. And/or whether Google did.
> FWIW, before Google Chrome, Firefox was originally Firebird (changed for name collision reasons), and Mozilla had broken off the rest of the Netscape-ish "communications suite" into Thunderbird, both arguably named after cars.
Mozilla named the web browser Phoenix, for rebirth. A company objected. Mozilla renamed it Firebird, because a phoenix is a fire bird. They named the mail program Thunderbird for its similarity to Firebird.
Thanks, I forgot about Phoenix.
Between Netscape Navigator and Firefox, their web browser was called simply "Mozilla". It supported GUI themes in XML with images which were officially called "Chrome". Mozilla also hosted user-contributed themes on a web site called "Chrome Zone".
The browser was considered slow and bloated however, and when Firefox came, its lack of theme support was perceived as part of it having been de-bloated.
I might vaguely recall Mozilla being in an easter egg or alternate throbber in a Netscape browser, and my impression was that it had been an internal codename at Netscape which was then adopted for the open source project.
> (presumably Google was keen to market it to extremely old people, chrome not having been a particularly big thing in cars for a very long time).
Not wanting to admit the term was taken from a competing browser is a perfectly fine explanation.
Why is it archaic? It's part of the same metaphor as "engine", which is still widely used.
Because it's no longer widely used.
This comment and a few others here make me feel old and sad for the people too young to remember that time. Yes, Google was an enormous breath of fresh air when it came out. 1000% better UI and features than the competition. Search was incredible. Gmail was a revelation. The whole company culture was night and day compared to the stodgy old tech companies like IBM. Just mind blowingly awesome. And then maps?? How did they even do that? The tech world felt entirely fresh and new and hopeful.
They basically revolutionized the web with the V8 JavaScript engine in Chrome. Before it, JavaScript performance was so bad you had to have a really light touch with it.
I miss those times. It allowed for sooo many shitty practices
We have. That's why the parent said _there was a time_, implying that this is no longer true.
Admittedly, it's a while ago. But original gmail, say, really did put a huge amount of effort into it.
Some people seem to think they cared, at some point. I’m not one of them.
If you had been a Yahoo user when Google launched, you’d understand.