Evaluating Scientists: An Unfair Process

Onikle Inc.
Nov 29, 2020
Image by Atsutaka Odaira

What is the right way for academic institutions to evaluate research and scientists? Growing interest in this question has been accompanied by an increasing awareness of critical issues in how scientific study is practiced and published. Asking the question is premature in the first place when research is not designed and performed appropriately, when reproducibility (a basic foundation of science) is lacking, and when findings remain incomplete, unpublished, or are published to serve a particular agenda.

These issues are tied to the mechanisms by which scientists are judged at nearly every stage of a research career: appointment, promotion, and tenure decisions. Designing, writing, presenting, assessing, ranking, and choosing the right candidates for grants, faculty positions, and evaluation committees is a rigorous and mostly time-consuming process. In a world where time is restricted and budgets are limited, organizations need to make these hard decisions swiftly. As a result, many modern evaluation efforts focus mainly on what is readily measured, such as the number of grants funded and the number of citations a scientist's papers have received.

However, even for clearly observable facets of a scientist's success, the parameters used for evaluation differ across organizations and are not always applied uniformly, even within the same organization. Moreover, many institutions rely on metrics that are widely understood to be problematic. For instance, a substantial body of literature documents issues with the Journal Impact Factor (JIF) as a measure of citation impact. It is more than just a misconception circulated among science students that faculty recruiting and promotion at top institutions require papers published in journals with the highest JIF (Nature, Science, Cell, etc.). The JIF continues to be a benchmark that most institutions use to evaluate faculty and even to award monetary incentives. With so much trust placed in the JIF, you would think it is reliable. In reality, it does not make sense: how can a metric be meaningful when just 10%-20% of the articles in a journal account for 80%-90% of its impact factor?
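To see why this skew undermines the metric, here is a minimal Python sketch. The citation counts are made up purely for illustration, and the calculation follows the standard definition of the JIF (citations in a given year to items published in the previous two years, divided by the number of citable items); it is not based on any real journal's data.

```python
# Illustrative only: synthetic citation counts, not real journal data.
# JIF for year Y = citations received in Y to items published in Y-1 and Y-2,
# divided by the number of citable items published in Y-1 and Y-2.

citations = [250, 180, 120, 90] + [3] * 16  # 4 highly cited papers, 16 barely cited

# The impact factor is just the mean citation count over citable items.
jif = sum(citations) / len(citations)

# Share of total citations contributed by the top 20% of papers.
top_papers = sorted(citations, reverse=True)[: len(citations) // 5]
share_of_impact = sum(top_papers) / sum(citations)

print(f"Impact factor: {jif:.1f}")                                   # ~34.4
print(f"Top 20% of papers contribute {share_of_impact:.0%} of citations")  # ~93%
```

The point of the sketch is that a handful of highly cited papers can inflate the average for every other paper published in the same journal, which is why the JIF says little about any individual article or author.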

More significantly, an automated index cannot capture other dimensions of research impact, such as the quality of the analysis. For example, in our current evaluation culture, researchers who create data and make it accessible through data sharing, or who contribute through education, are neglected. Few evaluations of scientists consider whether they follow good or poor scientific practices. Importantly, the metrics currently used also say nothing about whether scientists contribute to society, the primary objective of most science. The reproducibility of research by others in the applied and life sciences is only now beginning to be examined, and much of what has emerged suggests serious issues. Jeffrey Flier, a former dean of Harvard Medical School, has suggested that reproducibility should be a factor when assessing the success of scientists.

To put it mildly, the current method of evaluating researchers has led science down a dark path. Researchers are working hard to satisfy a measure that has proved ineffective, one born of oversimplifying a complex process.

If you are interested in our service, please register your email address via the following link to get early access and test our all-new preprint platform, which provides a stress-free search experience powered by an AI engine.

Written by Wanonno Iqtyider


Onikle Inc.

Parent company of NapAnt. NapAnt is a service that helps improve development efficiency and man-hour estimation accuracy, and unleashes the potential of your team.