The base rate argument here is the right one. I maintain a solo project with 3,800+ tests and 92% coverage — zero stars for months because I never promoted it. Stars measure marketing, not quality.
What's more interesting to me is that Claude dramatically lowers the barrier to _testing_, not just writing code. I can mass-generate edge case tests that I'd never bother writing manually. The result is higher-quality solo repos that look "abandoned" by star count.
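For concreteness, here's a hypothetical sketch of what "mass-generated edge case tests" can look like in practice (`slugify` is an invented stand-in utility, not from anyone's actual repo):

```python
import re

def slugify(text: str) -> str:
    """Lowercase the input and collapse runs of non-alphanumerics into '-'."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# The kind of edge cases an LLM will happily enumerate but a human
# rarely bothers to type out: empty input, whitespace-only input,
# punctuation, pre-slugged strings, embedded tabs and newlines.
cases = {
    "": "",
    "   ": "",
    "Hello, World!": "hello-world",
    "--already--slugged--": "already-slugged",
    "tabs\tand\nnewlines": "tabs-and-newlines",
}

for raw, expected in cases.items():
    assert slugify(raw) == expected, (raw, slugify(raw))
print("all edge cases pass")
```

The table-of-cases shape is deliberate: once the harness exists, appending another generated case is one line, which is what makes bulk generation cheap.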
Is anyone tracking test coverage or CI pass rates for AI-assisted repos vs traditional ones? That seems like a much more useful signal than stars.
The number (and quality) of tests in my personal projects has gone WAAAY up.
A tiny utility I would never have bothered even setting up a test framework for before has about 100 tests today. Which is really good, because I tend to abandon stuff that Just Works in the background and come back to it in a year.
Having a bunch of tests makes me feel better about changing things without breaking other stuff.
And when you think you have no users, try shipping a release that crashes on startup and you'll get a bug report within minutes :D