Availability through Bedrock has been a major driver of Anthropic adoption in my org. And I'm betting there's actual margin in it as well.

I wonder if this is directly linked to the split with Microsoft. Just from my anecdata, OpenAI is getting completely ignored in serious enterprise deployments because what they offer on Azure sucks and there is no other corporate-friendly way to get it. They probably saw themselves getting destroyed in enterprise and realised it was existential to be able to compete with Anthropic on AWS.

It may be very dependent on country/region, but in Germany, where Azure is quite prevalent among mid-market companies (due to heavy historic reliance on Microsoft in general), OpenAI + Azure seems to have worked as a great synergy. Few customers I've worked with are even trying to reach for other providers, since the OpenAI models are already available to them and promoted by Azure.

It seems pretty clear that OpenAI renegotiated their agreement in preparation for this:

https://news.ycombinator.com/item?id=47921248

[deleted]

Just curious, what’s wrong with Azure?

It might be just our region, but for a long time we couldn't access current frontier models at all, only old GPT-4-level models. Meanwhile, Anthropic rolls out every new model to Bedrock within 24 hours of announcement.

Not sure how much Azure OAI has changed, but when I last used it 2-3 years ago, it felt like a farce designed to push you onto provisioned throughput. The throughput quotas were small, the process to request more was bureaucratic, and the Azure SAs were

It was also very clear the OAI and MS teams held each other in contempt (not relevant, but was interesting and grew in the immediate aftermath of the Altman drama).

So why were we using it? OpenAI didn't really have an enterprise go-to-market, Bedrock still relied on Claude 2, and we weren't willing to YOLO on clickthroughs.

Once Claude 3 came out, we jumped ship. That sucked too, although I hear it's gotten better.

I've been out of OpenAI-on-Azure deployments for a while, 1 year+, but we often had spikes in latency, escalated to the Head of Azure Europe, and still got no official explanation. Meanwhile, they were trying to get us into some kind of collaboration announcement, which was the only reason we even had a few meetings with that guy.

So yeah, Azure sucked, with plenty of outages and latency, like 3 min to first byte when it was usually 30 sec to 1 min at most, if not faster (memory is a bit fuzzy).

You're probably right on the margin. Anthropic doesn't break it out, but enterprise spend on Bedrock is the highest-quality revenue in the AI stack right now. It's sticky, multi-year, and embedded in existing AWS commits. OpenAI was watching that compound while stuck on Azure.

[deleted]