It passed all the tests.

If you can't trust your test suite to catch an automatic language translation you shouldn't trust it at all. :)

Tests can only prove the presence of bugs, but not their absence. If the AI can access the tests, it can easily make them pass by just adding additional if statements. It doesn't mean the code is actually correct.

What if we only trusted the test suite a reasonable amount, instead of pretending trust must either be blindly total or nonexistent?

It also modified many of the tests to make them pass in mischievous ways. You can't trust a test suite to catch regressions if the new version doesn't use the same test suite.

Do you have some examples?

Ah, I just learnt that you don't. Jarred's comment saying exactly that: https://news.ycombinator.com/item?id=48133806

I'll actually concede that, on a slower skim, some changes to the test suite and fixtures that first seemed suspicious to me indeed align with what those tests were doing previously, and I wish I could retract that comment.

I still think it's not such an impressive test suite as it's being claimed; which, if this actually works out, should say more about Claude's skill than the people driving it.

Gotcha. I'm genuinely curious: by "impressive", are you referring to coverage? I'd be grateful if you could say a few words about it could be more impressive (e.g, if you indeed meant to talk about coverage, say what functionality/edge cases aren't covered as of now)

Our programming languages are bad at specification and verification, so the next best thing is property-testing for modeling (e.g. Hypothesis for Python) or, for the reference implementations, extensive "expect"/snapshot test cases (e.g. Cram).

Instead, I found the bog standard suite with a single case per regression and very few actual modeling, although I wasn't expecting more. (I don't care much for JS, let alone Bun, so I can't point to features I'd like to see better tested, but I'm sure the issue tracker can do that job already.)

To be fair, our whole industry is really bad at this; most test suites are verification theatre, but now that machines can fill out implementations on their own, we should strive to properly model our requirements and limits so they can one shot what we intended. Otherwise we're left in an awkward middle in which we don't add much value over the AI fumbling around.

Thank you!

I think demonstrating broken behavior in the new build would be interesting if you have a non passing test from the original suite

The entire underlying system has been replaced. The test suite is written around the current fuzzy edges and past problem areas, not every single behavior of the existing platform.

"If you can't trust your test suite to catch a hardware floating point arithmetic bug, you shouldn't trust it at all."

"If you can't trust your test suite to catch a JVM bug, you shouldn't trust it at all."

"If you can't trust your test suite to catch a recurring memory error, you shouldn't trust it at all."

A wise teacher once told me a good programmer looks both ways when crossing a one way street.