Note that their paper includes p-hacking, so this is proto-science rather than science. They didn't find data to match their hypothesis, but they were able to find a hypothesis to match their data.

> Follow-up secondary analyses were then conducted to examine in more granular fashion the timing of the association between nuclear testing and occurrence of transients. Table 2 summarizes the association between occurrence of transients and different time windows relative to nuclear testing, ranging from 2 days before a test until 2 days after a test. The only association that reached statistical significance was for the association in which transients occur 1 day after nuclear testing.

This specific analysis isn’t p-hacking, because although they conduct multiple tests, they report all of them rather than just the statistically significant ones.

They should, however, account for multiple testing. The Bonferroni correction (which is conservative) would set the alpha level to 0.05/5 = 0.01 for the five time windows, at which the 1-day-after result is still (just) statistically significant.
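
For concreteness, here is a minimal sketch of that correction in Python. The five windows come from the quoted passage (2 days before through 2 days after); the p-values are hypothetical placeholders, since the paper's actual values aren't quoted above.

```python
# Bonferroni correction sketch for five time windows relative to a nuclear test.
# The p-values below are hypothetical, not the values reported in the paper.
alpha = 0.05
p_values = {
    "-2 days": 0.40,   # hypothetical
    "-1 day":  0.30,   # hypothetical
    "same day": 0.20,  # hypothetical
    "+1 day":  0.008,  # hypothetical
    "+2 days": 0.60,   # hypothetical
}

corrected_alpha = alpha / len(p_values)  # 0.05 / 5 = 0.01
for window, p in p_values.items():
    verdict = "significant" if p < corrected_alpha else "not significant"
    print(f"{window}: p = {p} -> {verdict} at corrected alpha = {corrected_alpha}")
```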

Not to say there couldn’t be other problems.

I'm glad you called this out. p-hacking can be useful for generating hypotheses, which ought to then be tested (rather than treating the p-hacked conclusions as established findings).