Why would GitHub be a guide? It's also terrible, and it's a radically different stack from an unrelated company.

That, and even before AI, MS was having trouble with GH reliability

GitHub, along with MSFT in general, has massive Copilot mandates where workers are being shamed into using slop tools to fix serious ongoing issues. GitHub seems wholly incapable of resolving its issues: money isn't a problem, talent isn't a problem, but business leadership is definitely a major problem.

Look at how other companies are suffering massive outages tied to LLMs too, like AWS and Cloudflare: two companies that used to be the best in the industry at uptime but have suddenly faltered quite quickly.

Companies with even worse standards will quickly realize how problematic these tools are. Hopefully before a recession, because this industry seems to be allergic to profitable businesses, and leaders who have been around since ZIRP have shown zero intelligence in navigating these times.

None of the three major Cloudflare outages in the past six months had anything to do with LLMs. They were regular old human mistakes.

We did, however, determine that at least one of them (and perhaps all) would have been easily caught by AI code reviewers, had AI code reviewers been in use. So now we mandate that. And honestly, I love it, the AI reviewer spots all sorts of things that humans would probably miss.

(We also fixed a number of problems around configuration that would roll out globally too fast, leaving no time to notice errors and stop a bad rollout, as well as cases where services being down actually made it hard to revert the change... should be in a much better place now. But again, none of that had to do with LLMs.)
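(The staged-rollout fix described above can be sketched roughly like this; a simplified Python illustration, where the stage fractions, bake time, and function names are all made up for the example, not our actual tooling:)

```python
import time

# Illustrative stages: the change bakes at a small slice of the fleet
# before expanding, instead of rolling out globally all at once.
STAGES = [0.01, 0.05, 0.25, 1.00]  # fraction of fleet

def rollout(apply_fraction, healthy, revert, bake_seconds=600):
    """Push a config change out in stages, reverting on any failure.

    apply_fraction(f) applies the change to fraction f of the fleet,
    healthy() checks error rates after each bake period, and revert()
    undoes the change -- crucially, revert() must not depend on the
    service whose config just broke still being up.
    """
    for fraction in STAGES:
        apply_fraction(fraction)
        time.sleep(bake_seconds)  # leave time to actually notice errors
        if not healthy():
            revert()
            return False  # stopped before the bad change went global
    return True
```

The point of the structure is that a bad change gets caught at 1% of the fleet instead of 100%, and the revert path is a plain function with no dependency on the broken service.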

> None of the three major Cloudflare outages in the past six months had anything to do with LLMs. They were regular old human mistakes.

Is that true? At least one of them seemed to involve LLM-written code from what I saw. (Not to say that human error wasn't _also_ a contributing factor, but I wouldn't say it had _nothing_ to do with LLMs).

> We did, however, determine that at least one of them (and perhaps all) would have been easily caught by AI code reviewers, had AI code reviewers been in use. So now we mandate that. And honestly, I love it, the AI reviewer spots all sorts of things that humans would probably miss.

The reviewer is decent, but the false positive rate is substantial, and the false negative rate is definitely nonzero. Not that you would know that the way our genius CTO talks about it...

> Not that you would know that the way our genius CTO talks about it...

Honestly I find it bizarre that there are people at Cloudflare who have this attitude. Without Dane, the company wouldn't be half the size it is today.

Something unexpected that LLMs robbed from us is the grace of people assuming we failed on our own, e.g. good ol' fashioned human/organizational failure.
