> Is there a better way to determine the impact of research and to encourage researchers to not feel so pressed to output and boost their citations? Why is it like this today?
It's hard, especially if you have to compare people from different areas (like algebra vs. calculus) that have different thresholds for what result is worth a paper, and each community has a different size and different review times.
Solution 1) Just count the papers! Each one is 1 point. You can finish before lunch.
Solution 2) Add some metrics like citations (which favor big areas and areas that like to add many citations). Add an impact index (which has the same problem). How do you count self-citations and citation rings?
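(The "impact index" here is presumably something like the h-index. For the curious, a minimal sketch of how it's computed; the function name and the sample citation counts are made up for illustration:)

```python
def h_index(citations):
    # h-index: the largest h such that the author has h papers,
    # each cited at least h times
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank   # this paper still clears the bar
        else:
            break      # sorted descending, so no later paper can
    return h

print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Note the same problem as raw citation counts: fields that cite heavily inflate everyone's h.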
Solution 3) Cherry-pick some good journals, but ensure the classification committee isn't just making a list of the journals they themselves publish in. Filter the citations, or weight them according to the classification.
Solution 4) Give the chair of the department a golden crown and pretend s/he is the queen/king who can do whatever they like. It may work, but there are BDFLs and nepotist idiots. Now try scaling it to a country.
Solution 5) RTFA. Nah, it's too hard. Assume you have 5 candidates and each has 5 papers from the last 5 years (or some other arbitrary threshold). You need something like two weeks to read a paper, more if it's not in your area; perhaps you can skim it in 1 or 2 days, but it's not easy to get an accurate sense of how interesting the result is and how much impact it has in the community. (How do you tell whether it's an interesting new result or just a hard, stupid calculation?) You can distribute the reading among several people, but then you have the problem of merging their opinions. (Are your 3/5 stars the same as my 3/5 stars?)
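(One common partial fix for the "are your 3/5 stars my 3/5 stars?" problem is to normalize each reviewer's scores before averaging, so a harsh grader and a generous grader become comparable. A sketch with made-up reviewers and papers:)

```python
from statistics import mean, pstdev

def zscore(scores):
    # map one reviewer's raw scores to z-scores (mean 0, stdev 1),
    # so "generous" and "harsh" scales line up
    vals = scores.values()
    m, s = mean(vals), pstdev(vals)
    return {paper: (v - m) / s for paper, v in scores.items()}

# hypothetical ratings of the same 3 papers by two reviewers
reviewer_a = {"p1": 3, "p2": 4, "p3": 5}   # generous: never below 3
reviewer_b = {"p1": 1, "p2": 2, "p3": 3}   # harsh: never above 3
za, zb = zscore(reviewer_a), zscore(reviewer_b)
merged = {p: (za[p] + zb[p]) / 2 for p in za}
```

Of course this only papers over the problem: it assumes both reviewers rank the papers on the same underlying quality, which is exactly what's in doubt.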