> How do you know what testing procedures they use?

We don’t, but we can see the end result, so we know that whatever they do isn’t adequate. It suggests they value shipping fast over quality, or even over listening to customer feedback.

> Do you honestly think they're running some kind of Ralph loop without any testing and just ship whatever looks the coolest? Really?

No, but given how sharply the quality has been dropping over the past few months, and how suspiciously that coincided with them admitting that Claude Code is now 100% vibe coded, it certainly doesn’t feel too far off.

I’ve personally found the code the AI writes, even this week (i.e. not some old model from months ago), to be shockingly shoddy. I’ve rewritten some AI code (created via spec-driven development and a workflow that includes planning and refactoring) by hand, and I’ve been very conscious of the number of micro design changes I make as a human where the AI just plows forward, shoehorning a solution into the existing design. My implementation has adjusted and shifted many times to ensure clear and performant logic, while the AI commits to an approach early and applies whatever brute force is necessary to make it work. I’ve also asked it to write various tests for me, or to make isolated changes, and quite frankly the code was just not very good. Working, but convoluted. Even with guidance and iteration, it’s still not at a human level.
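
To make “working, but convoluted” concrete, here’s a hypothetical sketch of the pattern (Python, my own toy example, not actual Claude Code output; both function names are mine). The first version is the brute-force shape: correct, but it grinds through the problem with index juggling. The second is what you typically land on after a couple of those micro design adjustments.

```python
def merge_intervals_convoluted(intervals):
    # The "commit early, brute-force it" shape: sort, then manually walk
    # indices with a nested loop, patching the end value as cases come up.
    if not intervals:
        return []
    intervals = sorted(intervals)
    result = []
    i = 0
    while i < len(intervals):
        start, end = intervals[i]
        j = i + 1
        while j < len(intervals):
            if intervals[j][0] <= end:
                if intervals[j][1] > end:
                    end = intervals[j][1]
                j += 1
            else:
                break
        result.append((start, end))
        i = j
    return result


def merge_intervals_clean(intervals):
    # Same behavior after the design shifted: each interval either extends
    # the last merged interval or starts a new one. The loop states intent.
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged


# Both pass the same test, which is exactly the problem: "it works" hides
# the difference in design quality.
assert merge_intervals_convoluted([(1, 3), (2, 6), (8, 10)]) == [(1, 6), (8, 10)]
assert merge_intervals_clean([(1, 3), (2, 6), (8, 10)]) == [(1, 6), (8, 10)]
```

Multiply that gap across an entire codebase and you get code that passes its tests but is miserable to maintain.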

So it’s not hard to see that if you have an application as large and complex as Claude Code and you let the AI do it all, it’s going to be a mess.

I’m not against using AI for development, but you have to be realistic about its capabilities. I feel like this is where they “got high on their own supply” and are blinded to the AI’s shortcomings and failures.