Bug reports also go down when people lose faith that they will be fixed, because reporting them is often a substantial time commitment. You see it happen pretty regularly as trust in a group/company collapses.

The last three times I filed detailed bug reports as a client, all I got back were AI replies asking the same questions I’d already answered in the original report and suggesting alternatives I’d explicitly said I’d already tried. No wonder people don’t write bug reports anymore.

Add to this the real possibility that a significant portion of the reports that do get filed are AI-generated or AI-rewritten, with a high chance of being misreported or containing incorrect details as a result. So it's an attack on multiple fronts.

And that's before we even get into potential adversarial tactics. If you have no morals, what better way than using agents to flood a competitor with fake bug reports?

Just let AI filter out the fake reports! Then let AI work on the real ones. See, there's really no problem "more AI" can't solve (as long as you're willing to ignore all of the underlying ones). "Pay us to create the problems you'll have to pay us to fix for you" is one hell of a business model. It basically prints money.

Just let AI report the bugs. Problem solved!

I agree, and I'd like to point out that this problem isn't unique to AI-driven projects. I think much, if not all, of what Mitchell has been observing can readily happen without AI in the mix.