For researchers who notice new releases as soon as they are published and can discover malice from the release alone, I agree: every step of that can be automated to some degree of effectiveness.

But for researchers who aren't sufficiently effective until the first victim starts shouting that something went sideways, a malicious actor would be wise to ensure no victim notices until well after the cooldown period, for example by implementing novel obfuscation that evades static analysis.

Novel obfuscation built on a genuinely new idea is hard to invent. Obfuscation that is only novel to that particular codebase is easier to flag as suspicious.

While bad actors would be wise to keep low-cooldown users unaware, I would not say they can "simply" ensure that.

Code with any obfuscation that evades static analysis should become more suspicious in general. That's a win for users.
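As a rough illustration of what "more suspicious" could mean in practice, here is a minimal sketch of one such signal: string literals with unusually high entropy, which packed or encoded payloads tend to produce. The function names and the threshold are my own assumptions, not any real scanner's API; a production tool would combine many signals like this.

```python
import ast
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character of string s."""
    if not s:
        return 0.0
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_obfuscated(source: str, threshold: float = 5.0) -> bool:
    """Flag source containing long, high-entropy string literals.

    Random-looking base64 approaches 6 bits/char; ordinary prose and
    identifiers sit much lower. Hypothetical heuristic, for illustration.
    """
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return True  # unparseable code is itself a red flag
    literals = [
        node.value for node in ast.walk(tree)
        if isinstance(node, ast.Constant) and isinstance(node.value, str)
    ]
    return any(
        len(lit) > 40 and shannon_entropy(lit) > threshold
        for lit in literals
    )
```

The point is not that this one check is strong -- it is trivially evadable -- but that each evasion (splitting strings, lowering entropy, indirect decoding) is itself a pattern that can be flagged, so the obfuscation arms race costs the attacker something.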

A longer window of time for outside researchers is a win for users -- unless the release fixes existing problems, in which case the delay has its own cost.

What we need is to let users easily move from implicitly trusting only the publisher to incorporating third parties into that trust decision. Any of those parties can be compromised, but users are better served when a malicious release must either (1) compromise multiple independent parties or (2) compromise the publisher with an exploit that stays undetectable throughout the cooldown.
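The "multiple independent parties" requirement amounts to an N-of-M quorum over attestations to the same artifact. A minimal sketch, assuming hypothetical names throughout (`Attestation`, `TrustPolicy`, the example reviewers); real ecosystems would build this on signed attestations rather than bare strings:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Attestation:
    reviewer: str        # independent party vouching for the release
    artifact_hash: str   # digest of the exact artifact they reviewed

@dataclass
class TrustPolicy:
    trusted_reviewers: frozenset
    quorum: int          # how many independent parties must agree

    def accepts(self, artifact_hash: str, attestations: list) -> bool:
        # Count distinct trusted reviewers attesting to this exact artifact.
        # A set means duplicate attestations from one party count once, so
        # compromising a single party can never satisfy a quorum > 1.
        vouchers = {
            a.reviewer for a in attestations
            if a.reviewer in self.trusted_reviewers
            and a.artifact_hash == artifact_hash
        }
        return len(vouchers) >= self.quorum

policy = TrustPolicy(
    trusted_reviewers=frozenset({"publisher", "audit-firm", "distro"}),
    quorum=2,
)
```

With `quorum=1` and only the publisher trusted, this degenerates to today's default; raising the quorum is exactly the (1)-or-(2) trade-off above.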

Any individual user can independently do that now, but it's so incredibly time-consuming that only large organizations even attempt it.