Since 2012, nearly 12,800 individuals and 872 scientific organisations have signed the San Francisco Declaration on Research Assessment (DORA). The initiative aims to end the practice of equating the journal impact factor with the merits of a scientist's contributions. DORA states that "this practice creates biases and inaccuracies when appraising scientific research," and that the impact factor should not be used as a surrogate "measure of the quality of individual research articles, or in hiring, promotion, or funding decisions." The Journal Impact Factor is a simple metric, originally developed to help librarians decide which journals to purchase for their libraries in an era when the number of journals was growing rapidly. Eugene Garfield, the metric's creator, himself warned that it should not be used to judge individual researchers. This blog post looks at why using the Journal Impact Factor to evaluate researchers is a bad idea.
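To make the metric concrete, here is a minimal sketch of how the standard two-year Journal Impact Factor is computed. The function name and the journal figures are hypothetical, chosen only for illustration:

```python
def journal_impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year Journal Impact Factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable items
    the journal published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 1,200 citations in 2023 to the 400 articles
# it published in 2021 and 2022.
print(journal_impact_factor(1200, 400))  # → 3.0
```

Note that the result is a property of the journal as a whole, averaged over hundreds of articles, which is exactly why it says so little about any single paper.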
Using Impact Factors to judge the quality of individual papers, or of their authors, is a clear ecological fallacy. Symmetric distributions are rare in bibliometrics, so mean values such as the Impact Factor are weak descriptors. This certainly applies to the citation distributions of articles published in a journal: they are typically heavily skewed, with a small number of very highly cited articles. The distortion tends to be even greater in journals with high Impact Factors.
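A small numerical sketch shows why the mean is a weak descriptor of a skewed citation distribution. The citation counts below are invented for illustration, not taken from any real journal:

```python
# Hypothetical citation counts for ten articles in one journal:
# a single highly cited paper drags the mean far above the typical article.
citations = [0, 0, 1, 1, 2, 2, 3, 4, 7, 80]

mean = sum(citations) / len(citations)           # what the Impact Factor reflects
median = sorted(citations)[len(citations) // 2]  # upper-middle value: a "typical" article

print(mean)    # → 10.0
print(median)  # → 2
```

Here the journal-level average is 10, yet most articles earned two citations or fewer, so the mean describes almost none of the individual papers.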
A person or organisation simply cannot conclude that a paper is better because it was published in a journal with a high Impact Factor. The producers of Impact Factors and many bibliometricians have warned for decades against using journal averages to judge individual papers or authors. The Journal Citation Reports itself reads: "You should not depend solely on citation data in your journal evaluations. Citation data are not meant to replace informed peer review. Careful attention should be paid to the many conditions that can influence citation rates, such as language, journal history and format, publication schedule, and subject specialty." In fact, assessing papers by Impact Factor measures only the ability of papers and authors to win the approval of editors and to pass peer review. We have discussed before how flawed a system peer review is. The Impact Factor says nothing about a paper's content or relevance, yet it serves as a convenient shortcut in decisions on tenure, promotion, grants, and more.
The use of Impact Factors creates a Matthew Effect and then sustains it. Because high-Impact-Factor journals attract disproportionate attention, the same paper accepted by such a journal receives far more visibility, and therefore more citations, than it would have received in another journal. A related effect is that authors and evaluators consider a paper better simply because it appeared in a journal with a high Impact Factor or high rejection rate, and may believe that citing papers from such journals is a safe way to avoid negative comments. Part of the citations these papers receive is therefore a free ride on the visibility and citability of the well-known journal brands in which they are published.
Impact Factors were created to support journal subscription decisions. Citation indexes are useful for discovering related literature, following discussions, and tracing new research topics. Eugene Garfield, the creator of the Journal Impact Factor, never imagined in his wildest dreams that researchers would quote Impact Factors in their CVs to support tenure, promotion, and grant applications. It is a flawed system, and it needs to change for the betterment of science.
If you are interested in our service, please register your email address at the following link to get early access and test our all-new preprint platform, which provides a stress-free search experience with an AI engine.
Written by Wanonno Iqtyider