Anthropic has a model. Microsoft doesn't.

Microsoft can use OpenAI models but it's not the model that's the problem, it's the application of them. Anthropic simply knows how to execute better.

Anthropic's models are better, though. They may not "perform" as well on the LLM task benchmarks, but they're the only ones that actually give semi-intelligent responses and seem aligned with what humans want. And yes, Anthropic definitely has much better execution. It's the only one I considered shelling out 20 bucks for.

GPT 5.2 Codex is often better and more thorough than Opus 4.5, it's just slower.

They should just acquire one of the many agentic coding harnesses. Something like opencode works just as well as claude-code and has only been around half as long.

I used opencode happily for a while before switching to Copilot CLI. It's been a minute, but I don't detect a major quality difference since they added Plan mode. Seems pretty solid, and first-party, if that matters to your org.

As evidenced by Anthropic models not performing well in GitHub Copilot.

I've read that a few times, but from my personal observations, Claude Opus 4.5 is not significantly different in GitHub Copilot. The maximum context size is smaller, for sure, but I don't think the model remembers that well when the context is huge anyway.

Microsoft has a model nearly as old as the company.

Attempt to build a product... Fail.

Buy someone else's product/steal someone else's product... Succeed.

We love to hate on Microsoft here, but the fact is they are one of the most diversified tech companies out there. I would say they are probably the most diversified, actually. Operating systems, dev tools, business applications, cloud, consumer apps, SaaS, gaming, hardware. They are everywhere in the stack.

That's a "business" model, not a language model, which I believe is what the poster is referring to. In any case though, MS does have a number of models, most notably Phi. I don't think anyone is using them for significant work though.

It's wordplay: if their LLM sucks too much, they'll just get someone else's.

I mean they fought the browser war for years, then just used Chrome.

Which is kind of a bummer: it would have helped the standards-based web to have an actual powerful entity maintain a distinct implementation. Firefox is on life support and is basically taking code from Blink wholesale, and WebKit isn't really interested in making a browser that's particularly compliant with web standards.

MS's calculus was obvious: why spend insane amounts of engineering effort on a browser engine that nobody uses? Which is too bad, because if I remember correctly they were not far behind Chrome in either perf or compatibility for a while.

It would have helped the standards-based web, if the standards-based web weren't a fermenting spaghetti monster.

From what I've heard a W3C standards meeting is basically a Zoom call between Blink and Webkit engineers.

Well, they fought hard until IE6.

Then they took their eyes off the ball. Whether it was protecting the Windows fort (why create an app with all the functionality of an OS and give it away for free, mostly on Windows with some Mac versions but no Linux support, when people are paying for Windows?) or they simply diverted the IE devs to some other "hot" product, browser progress stagnated, even after XMLHttpRequest.

They do have some in-house LLMs (Phi), but they seem to either have trouble developing large flagship models or not think it's worth it.

One has existed since the 80s, when was the other founded?

What does it matter? And Microsoft was founded in the '70s.

I think they're implying Microsoft is having a Kodak moment.

A large language model, or a business model?