TLDR: The publication culture of mathematics (relatively few papers per researcher, few authors per paper, and few citations per paper) makes bibliometrics easier to abuse, and the evidence suggests such abuse is widespread.

My take: I’ve published in well-regarded mathematical journals, and the culture is genuinely hard to explain to people outside of math. For example, it took more than two years to get the key paper from my graduate work published in Foundations of Computational Mathematics, a highly regarded journal. The paper currently has over 100 citations, which (last I checked) is roughly a couple of times the journal’s average citation count. In short, it’s a strong, impactful piece of work for a graduate student. But in a field like cell biology, it would be considered a pretty weak showing.

Given the long timelines and low citation counts, it’s not surprising that the numbers are so easy to manipulate. It is kinda ironic that mathematicians, of all people, have this problem with numbers though.

Pure math is far more vulnerable to this than applied math. Top pure-math journals have impact factors of around 5.0, while respectable but tiny specialist journals can have impact factors well below 1.0 (say, 0.4). Meanwhile, MDPI Mathematics sits in Q1 with an impact factor over 2.0, comfortably above many of those respected specialist venues.
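For a sense of scale, the standard two-year impact factor is just a ratio, and in math both the numerator and the denominator are small. A back-of-the-envelope sketch (the journal sizes and citation counts here are hypothetical, chosen only to illustrate the arithmetic):

```latex
% Two-year impact factor for year Y (the standard definition):
% citations received in year Y to items published in Y-1 and Y-2,
% divided by the number of citable items published in Y-1 and Y-2.
\[
  \mathrm{IF}_Y \;=\; \frac{C_{Y-1} + C_{Y-2}}{N_{Y-1} + N_{Y-2}}
\]
% Hypothetical small specialist journal: N = 50 citable items over
% the two-year window and C = 20 citations gives IF = 20/50 = 0.4.
% Just 10 additional citations (easy to arrange through coordinated
% self-citation) gives 30/50 = 0.6, a 50% jump in the headline number.
```

Moving a journal with thousands of citable items by the same amount would take orders of magnitude more coordinated citations, which is exactly the sense in which a small, low-citation field is more exposed.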

The now-standard bibliometrics were not designed by statisticians :-)

The key point is that mathematicians in the US and most of Europe do not count citations, so this is not really an issue.

It is an issue when a mathematician has to apply for grants, though. They are often in the same competition as physicists, for instance, and then the metrics do matter.