Are "publication metrics" also used heavily in China by the bureaucracy ?

I know for a fact that the number of fake journals exploded once the Government of India decided to use this for promotions.

It's a bit sad really: in the classical world both these countries spent an inordinate amount of time on questions of epistemology (India especially). Now they are reduced to mimicking some silly metric that, even in the best case, only vaguely tracks knowledge production in the West.

Yes. And filtering out publications in "paper mills" and then judging the candidate properly doesn't scale beyond the top few institutions. So you'll find a sudden drop-off in research quality once you reach the n-th university. It really is almost like a threshold.

What a wonderful illustration of Goodhart's Law.

Things like citation brokers (paid to cite papers), abuse of power, paper mills, and blackmail (pg. 10) are appalling to me. I have to question how we ended up here. Academia seems very focused on results and output, and this is used as a metric to measure a researcher's worth or value.

Has this always been an issue in academia, or is this an increasing or new phenomenon? It seems as if there is a widespread need to take shortcuts and boost one's h-index. Is there a better way to determine the impact of research and to encourage researchers not to feel so pressured to produce output and boost their citations? Why is it like this today?
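For reference, the h-index mentioned above has a simple definition: it is the largest h such that an author has h papers with at least h citations each. A minimal Python sketch, using entirely made-up citation counts, shows why coordinated citation boosting moves it so quickly:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's eight papers.
papers = [25, 14, 9, 6, 5, 2, 1, 0]
print(h_index(papers))                    # 5

# A ring of ten colleagues each citing every paper once lifts it fast.
print(h_index([c + 10 for c in papers]))  # 8
```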

Academic mathematics, from what I've seen, seems incredibly competitive and stressful (to be fair, so does competition math from a young age), perhaps because the only career for many mathematicians (outside topics with applications, such as, but not limited to, number theory, probability, and combinatorics) is academia. Does this play into what this article talks about?

In my time in academia (~20 years) I have seen the demands and competition increase quite significantly. However, talking to older researchers, this really started in the 90s: the demands to demonstrate measurable outcomes increased dramatically, and funding moved to being primarily through competitive grants (compared to the significant base funding researchers had previously). The issue is that while it was previously common for academics to have funding for 1-2 PhD students to look into new research areas, many researchers are now required to bring in competitive grants even to cover part of their salary.

What that means is that researchers become much more risk-averse and stay in their research area even if they believe it is not the most interesting/impactful. You just can't afford to not publish for several years, to e.g. investigate a novel research direction, because without the publications it becomes much, much harder to secure funding in the future.

So it's economic pressure again, I assume put on academic institutions, which in turn pass it on as lower funding/wages.

It's important to note that somehow we see the erosion of families, infrastructure, and institutions everywhere, but we never talk about the giant f'ing elephant in the room.

This is interesting. Is there a reason why this started happening in the 90s?

I think a lot of it is covered under "New Public Management" [0], which was maybe a result of the financialization happening in the 80s [1].

And I completely agree with GP. Having been in or in contact with academic research since the late 90s, there has been a very strong shift from a culture where the faculty had the means for independent research, and were trusted to find their own direction, to the system we have today, where a research project has much tighter oversight and reporting than most corporate projects.

A professor with a 4-5 person group will typically need two staggered pipelines of 4-5 year funding projects to run risk-free. In the EU it is virtually impossible to get funding for projects that do not involve multiple countries, so you need to set up and nurture partnerships for each project. Coordinating the application process for these consortia is a major hassle and often outsourced at a rate of 50k EUR plus a win bonus. And you of course need to run multiple applications to make sure you get anything. When I talked to mentors about joining academia around 2010, the most common response was "don't".

[0]: https://en.wikipedia.org/wiki/New_public_management [1]: https://en.wikipedia.org/wiki/Financialization

The postwar growth in tertiary education came to an end. Due to demographic shifts the number of students matriculating stopped increasing. As long as the sector was generally growing there were a decent number of opportunities for those who really wanted a career in academia. But when the growth stopped it became a zero-sum game, intensifying competition between colleagues. Win at any cost.

Because the supply of academics has outpaced demand.

I’d say that it is precisely because of superficial demand by bureaucracy that academic output has become superficial.

The demand for novel knowledge is always high. It is the supply that is short.

That’s why we hang around on HN hoping for something novel of true interest. You get a good find every once in a long while.

Education funding cuts.

How does that square with the cost of education significantly outpacing inflation?

Funding cuts. Like from the state and federal levels. This resulted in costs to students increasing while also forcing researchers to generate income. This is the natural result of treating college like a business. They raise costs on every side.

Highly specialised education doesn't scale well. Computers and factories keep getting more efficient, but a professor can still only handle so many students.

https://en.wikipedia.org/wiki/Baumol_effect

The money doesn't go to the researchers.

Unfunded pensions and health care.

The issue in all fields became significantly worse as developing countries decided their universities needed to become world class and demanded more international publications for promotion. Look at the universities in the table in the paper and you can see which countries are clearly gaming the system. If your local bureaucrats can’t tell which journals are good and which are fake, the fake journals become the most efficient strategy. Even worse, publishers figured out that if you can attract a few high-citation papers, your impact factor will go way up (it’s an arithmetic mean) and your fake journal becomes “high quality” according to the published citation metrics!
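To see how cheap this is to game: the two-year impact factor is essentially total citations divided by the number of citable items, an arithmetic mean with no robustness to outliers. A rough sketch with invented numbers:

```python
# Hypothetical journal: 100 citable items in the two-year window.
ordinary = [0] * 80 + [1] * 17   # 97 papers almost nobody cites
planted  = [150, 120, 100]       # 3 papers pumped up by a citation ring

counts = ordinary + planted
impact_factor = sum(counts) / len(counts)   # arithmetic mean
print(round(impact_factor, 2))              # 3.87 -- looks "high quality"

# A robust statistic tells the real story.
print(sorted(counts)[len(counts) // 2])     # median: 0
```

Three planted papers are enough to make a journal of otherwise uncited work look respectable by the published metric.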

Math is particularly susceptible to this because there are few legitimate publications and citation counts are low. If you are a medical researcher, you can publish fake medical papers and then more easily look "high impact" on leaderboards (which are scaled by subject) by adding math topics to your subjects/keywords.

> Has this always been an issue in academia, or is this an increasing or new phenomenon?

The introduction of this article [1] gives some insight into the metrics used in the Middle Ages. Essentially, to keep his position at a university, a researcher had to win public debates by solving problems nobody else could solve. This led researchers to keep their work secret. Some researchers even got angry about having their work published, even with proper credit.

[1]: https://www.jstor.org/stable/27956338

> Is there a better way to determine the impact of research and to encourage researchers to not feel so pressed to output and boost their citations? Why is it like this today?

It's hard, especially if you have to compare people from different areas (like algebra vs. calculus) that have different thresholds for what counts as a paper-worthy result, where each community has a different size and different review times.

Solution 1) Just count the papers! Each one is 1 point. You can finish before lunch.

Solution 2) Add some metrics like citations (which favor big areas and areas that like to add many citations). Add the impact factor (which has the same problem). And how do you count self-citations and citation rings? (See the sketch after this list.)

Solution 3) Cherry pick some good journals, but ensure the classification committee is not just making a list of the journals they publish in. Filter the citations, or add some weight according to the classification.

Solution 4) Give the chair of the department a golden crown and pretend s/he is the queen/king and can do whatever they like. It may work, but there are BDFLs and there are nepotist idiots. Now try scaling it to a country.

Solution 5) RTFA. Nah. It's too hard. Assume you have 5 candidates and each has 5 papers from the last 5 years (or some other arbitrary threshold). You need something like two weeks to read a paper, more if it's not in your area; perhaps you can skim it in 1 or 2 days, but it's not easy to get an accurate understanding of how interesting the result is and how much impact it has in the community. (How do you evaluate whether it's an interesting new result or just a hard, stupid calculation?) You can distribute the process of reading the papers, but then you have the problem of merging the opinions of different people. (Are your 3/5 stars the same as my 3/5 stars?)
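On Solution 2's self-citation question: stripping self-citations is mechanical once you have author lists, but citation rings only show up if you look for reciprocal patterns in the citation graph. A naive sketch over toy, entirely hypothetical data (real screening tools are far more sophisticated):

```python
from itertools import combinations

# Toy records (hypothetical): paper id -> (authors, papers it cites).
papers = {
    "p1": ({"alice"}, {"p2"}),   # alice cites bob's paper
    "p2": ({"bob"},   {"p1"}),   # bob cites alice's paper back
    "p3": ({"carol"}, {"p1"}),   # carol cites alice, no reciprocity
}

def is_self_citation(src: str, dst: str) -> bool:
    """A citation is 'self' if the two papers share an author."""
    return bool(papers[src][0] & papers[dst][0])

# Citation counts with self-citations stripped out.
clean = {pid: 0 for pid in papers}
for src, (_, refs) in papers.items():
    for dst in refs:
        if dst in papers and not is_self_citation(src, dst):
            clean[dst] += 1
print(clean)  # {'p1': 2, 'p2': 1, 'p3': 0}

def cites(a: str, b: str) -> bool:
    """Does author a cite any paper authored by b?"""
    return any(a in authors and any(b in papers[d][0] for d in refs if d in papers)
               for authors, refs in papers.values())

# Crude ring signal: author pairs who cite each other reciprocally.
for a, b in combinations(["alice", "bob", "carol"], 2):
    if cites(a, b) and cites(b, a):
        print(f"possible ring: {a} <-> {b}")  # alice <-> bob
```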

I've seen similar stuff in a couple of other places, including IT back in the 1990s (back when it wasn't nearly as glamorous as it is today).

I think some of this has to do with... resentment? You're this incredibly smart person, you worked really hard, and no one values you. No one wants to pay you big bucks, no one outside a tiny group knows your name even if you make important contributions to the field. Meanwhile, all the dumb people are getting ahead. It's easy to get depressed, and equally easy to decide that if life is unfair, it's OK to cheat to win.

Add to this the academic culture where, frankly, there are fewer incentives to address misbehavior and where many jobs are for life... and the nature of the field, which makes cheating easy (as outlined in the article)... and you have an explosive mix.

> I think some of this has to do with... resentment? You're this incredibly smart person, you worked really hard, and no one values you. No one wants to pay you big bucks, no one outside a tiny group knows your name even if you make important contributions to the field. Meanwhile, all the dumb people are getting ahead.

Part of it, too, is that, while no one goes into academia to get rich, people quickly find out that the academic world runs on money. If you don’t get grants, you die, even with tenure. So what’s the point?

The reality of academia is so dismal that most people, by 30, wish they had sold out and chased money like the dumb-dumbs, who are, as you correctly note, farther ahead.

Abuse of power is definitely not new. Professors have historically overworked their grad students and withheld support for their progress towards a PhD or a paper unless they get something out of it. For women it's extra bad, because advisors can use their power in other ways.

I love the table of tortured phrases [0], which shows hilarious examples of synonyms of established scientific phrases, machine-generated by fraudulent authors to stay below the radar of plagiarism detectors.

My favorites from that table:

- “fuzzy logic” becomes “fluffy rationale”

- “spectral analysis” becomes “phantom examinations”

- “big data” becomes “enormous information”

[0]: https://arxiv.org/pdf/2509.07257#table.3

There are loads more in [0], including 'brilliant agreement' for 'smart contract', 'bosom malignancy' for 'breast cancer', and many, many others.

[0] https://dbrech.irit.fr/pls/apex/f?p=9999:24
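Mechanically, this kind of screening amounts to fingerprint matching: keep a curated list of known tortured phrases and flag any manuscript containing one. A toy sketch seeded with the examples above (the real screener's list and matching logic are much more extensive):

```python
import re

# Fingerprints: tortured phrase -> the established phrase it mangles.
# Entries taken from the examples above; real lists have thousands.
TORTURED = {
    "fluffy rationale":     "fuzzy logic",
    "phantom examinations": "spectral analysis",
    "enormous information": "big data",
    "bosom malignancy":     "breast cancer",
    "brilliant agreement":  "smart contract",
}

def flag_tortured(text: str) -> list[tuple[str, str]]:
    """Return (tortured, expected) pairs found in the text."""
    lowered = text.lower()
    return [(bad, good) for bad, good in TORTURED.items()
            if re.search(r"\b" + re.escape(bad) + r"\b", lowered)]

sample = "We apply fluffy rationale to enormous information streams."
print(flag_tortured(sample))
# [('fluffy rationale', 'fuzzy logic'), ('enormous information', 'big data')]
```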

Bibliometrics in science is just an unworkable approach in general, and IMO it causes more harm than good. Research is one of the least suitable human activities you can possibly try to quantify, yet the entire scientific establishment runs on these metrics by now. I more or less believe that this strategy hinders scientific progress, as it pushes researchers into more and more risk-averse behavior.

Sabine Hossenfelder has been on about this topic in the field of physics publishing for quite some time now.

It really is a terrible thing, though I can understand how some researchers feel trapped in a system that gives them little if any alternative if they wish to be employed the next year. It's not just one thing that needs to change to fix it.

Citation-based metrics are much more prevalent in physics than in math (at least in the US and most countries in Europe). Compared with physics, my impression is that mathematics has a tradition of "slow, long term" over "rapid, incremental." Of course, it's not perfect.

> Compared with physics, my impression is that mathematics has a tradition of "slow, long term" over "rapid, incremental."

Not anymore :(

This article does not seem to quite convey the experience of a pure mathematician. Yes, citation fraud is happening on an appalling scale, but no, it is not a serious issue for mathematicians.

The problem of AI generated papers is much more serious, although not happening on the same scale (yet!).

TLDR: The publication culture of mathematics (with relatively few papers per researcher, few authors per paper, and few citations per paper) makes abuse of bibliometrics easier. The evidence suggests widespread abuse.

My take: I’ve published in well-regarded mathematical journals and the culture is definitely hard to explain to people outside of math. For example, it took more than two years to get my key graduate paper published in Foundations of Computational Mathematics, a highly regarded journal. The paper currently has over 100 citations, which (last I checked) is a couple times higher than the average citation count for the journal. In short, it’s a great, impactful work for a graduate student. But in a field like cell biology, this would be considered a pretty weak showing.

Given the long timelines and low citation counts, it’s not surprising that it’s so easy to manipulate the numbers. It is kinda ironic that mathematicians have this issue with numbers though.

Pure math has a far greater vulnerability to this than applied math. Top journals have impact factors of around 5.0. Respectable but tiny specialist journals can have impact factors less than 1.0 (like, 0.4). Meanwhile, MDPI Mathematics is a Q1 journal with an impact factor over 2.0.

The now-standard bibliometrics were not designed by statisticians :-)

The key is that mathematicians in the US and most parts of Europe do not count citations. So this is not really an issue.

It is an issue if a mathematician has to apply for grants. Often they are in the same competition as physicists, for instance, and then metrics do matter.

Publishing math is one of the most time-consuming things ever, between the submission, review/revising, and editing. I wish there were a faster way of doing it outside of arXiv. Without having to review the paper closely, an experienced editor can typically tell at first glance if it's correct or sound.

> It is what we could call the "zone of occasional poor practice". Included are actions like

I think this is more common in computer science papers. I see this all the time: 5-10 authors collaborate on a short paper, then collaborate on each other's papers in such a way that the effort is minimized and publication count and citation count are maximized.

Easy to see how the social sciences can be gamed. Much sadder to see Mathematics get gamed too. It provides ammo to folks looking to defund these fields.

Mathematics did invent game theory, so in that way it simply takes more math to do math, which isn't good.

Maybe the way forward is to break the impact factor game. Everybody in a field gets together and publishes a paper: literally every topologist could put their name on the paper "Generally we all agree that Topology is an interesting topic."

Everybody in the field cites that paper going forward, giving it a massive impact factor and making the impact factor useless. Do this occasionally, randomly; do it in niche subfields; everybody who goes to a conference puts their name on the paper "We had a nice time at <conference> this year."

I mean, it is something everybody hates, right? There’s no point in preserving it.

True!

When I took Business 101 in college, one of the first things they taught us is that, long term, fixed metrics will always be gamed: both the ones measuring and the ones being measured will replace the real results with the metrics and sacrifice the first for the second. I understand that this is common knowledge in the administrative world. Yet every single performance metric always becomes ossified as the only target that matters, every time. Why?

At the level of industries and large groups, the chief answer to your "Why?" is the same sort of reasoning as the old "Nobody ever got fired for buying IBM": Nobody ever got fired for using established performance metrics.

On the individual level, there's another tricky problem, which is that very few individuals could figure out an alternative performance metric that beats the established one, no matter how gamified the established one is.

Ironic that mathematics suffers due to an overemphasis on numbers.