I'm a statistician. My wife does basic (biological) science. Almost every time she asks my advice on an experiment I want to tell her to 10x the sample size. But the academic community has certain ideas about how big sample sizes should be, and trying to use radically larger samples runs into all sorts of barriers ranging from ethics concerns (for animal experiments) to funding.

At the end of the day there's only so much you can learn from a sample size of 12. I'm not convinced it's more ethical to run a string of wasted experiments with 12 mice each, where you learn nothing, than to use 100 mice once and actually have the statistical power to detect something other than the very largest effect sizes.
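To put rough numbers on that, here's a sketch of the kind of power calculation I mean, assuming a simple two-group comparison analyzed with a t-test at alpha = 0.05 and 80% power (using statsmodels in Python; the exact numbers depend on the design, this is just illustrative):

```python
# Rough sketch: smallest standardized effect (Cohen's d) detectable at 80%
# power with a two-sided, two-sample t-test and equal group sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

for n_per_group in (12, 100):
    d = analysis.solve_power(effect_size=None, nobs1=n_per_group,
                             alpha=0.05, power=0.8, ratio=1.0)
    print(f"n = {n_per_group:>3} per group -> minimum detectable d ~ {d:.2f}")

# Roughly: n = 12 per group only reliably detects d around 1.2 (a very large
# effect), while n = 100 per group gets you down to d around 0.4.
```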

Lack of adequate funding leads to corner-cutting to the point that some results may not be worth the paper it takes to describe them. I had a passing experience with epigenetics. Even experiments on cell lines, which are essentially free of ethics issues, could be ruined by using single-end sequencing reads that were too short. Combine that with too little coverage and less-than-perfect controls, and you get input data on which the state-of-the-art peak callers simply throw in the towel. So the "trick" is to use some far more forgiving peak caller and get rather crappy results. Using an outdated human genome assembly (hg19) and old read-mapping programs just puts an extra cherry on the cake...
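To make the coverage point concrete, here's a toy illustration, my own simplification rather than any real peak caller's code, of a local Poisson enrichment test roughly in the spirit of what MACS-style callers do: the same fold enrichment that is unambiguous at decent depth is statistically invisible at low depth.

```python
# Toy sketch (my simplification, not an actual peak caller): test whether a
# window's read count is significantly above a Poisson background rate.
from scipy.stats import poisson

for background_reads in (2, 20):       # expected control reads in the window
    observed = 3 * background_reads    # a genuinely 3x-enriched region
    # P(X >= observed) under the background Poisson rate
    p = poisson.sf(observed - 1, background_reads)
    print(f"background {background_reads:>2} reads, observed {observed:>2}: p = {p:.2e}")

# With a background of only 2 reads per window, a real 3x peak comes out at
# p ~ 0.02 -- hopeless after genome-wide multiple-testing correction. With a
# background of 20 reads, the same 3x enrichment gives a vanishingly small p
# and the peak is trivially called.
```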