Drifts and Pernicious Effects of the Quantitative Evaluation of Research: The Misuse of Bibliometrics
Abstract
The focus of this article is on the mechanisms behind the bibliometric indicators (principally journal impact factors and researchers’ h-indexes) used to quantify the productivity and academic influence of scientists and scientific journals. The growing use of these indicators in academia is tied to the rise of a managerial approach to higher education and science grounded in the neoliberal logic of efficiency and cost optimization. A critical analysis of how these indicators are calculated reveals a number of asymmetries and faulty logical assumptions that have a negative effect on the dynamics of scientific research, especially in the social sciences and humanities. For example, the well-known gap in ratings between journals in the natural sciences and those in the social sciences and humanities results from the way the impact factor (the mean number of citations received in a given year by a journal’s articles published in the two preceding years) is calculated without taking into account the differing paces at which these disciplines evolve. As for the h-index and the basis of its calculation, a no less fundamental axiological flaw lies in the very assumption that a quantitative indicator reflects the real value of a scientist’s work for their particular field of research. Moreover, the possibility of manipulation built into the very nature of bibliometric indicators has generated a considerable number of tricks and schemes that allow particular researchers and institutions to inflate their ratings. Bibliometrics has likewise been used by university and academic administrations as a tool to formally legitimize often discriminatory decisions about the allocation of resources.
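The two indicators discussed in the abstract are simple arithmetic constructions. The sketch below restates their standard textbook definitions in code as a minimal illustration; the function names and the example figures are hypothetical assumptions for demonstration, not the procedure of any particular citation database.

```python
# Minimal, illustrative implementations of the two-year journal impact factor
# and Hirsch's h-index. All numbers below are hypothetical examples.

def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Two-year impact factor: citations received this year to articles
    published in the two preceding years, divided by the number of citable
    items published in those two years."""
    return citations_this_year / citable_items_prev_two_years

def h_index(citation_counts: list[int]) -> int:
    """Hirsch's h-index: the largest h such that the author has h papers
    each cited at least h times."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical journal: 240 citable articles over the previous two years,
# drawing 600 citations this year; hypothetical author with six papers.
print(impact_factor(600, 240))        # 2.5
print(h_index([25, 8, 5, 3, 1, 0]))   # 3
```

Note how the impact factor's fixed two-year citation window, visible in the calculation above, is what disadvantages the social sciences and humanities, where citations typically accumulate over a longer period.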
About the Author
Yves Gingras
Department of History
Montréal, Canada
References
1. Archambault É., Larivière V. History of Journal Impact Factor: Contingencies and Consequences. Scientometrics, 2009, vol. 79, no. 3, pp. 639–653.
2. Bhattacharjee Y. Saudi Universities Offer Cash in Exchange for Academic Prestige. Science, 2011, vol. 334, no. 6061, pp. 1344–1345.
3. Bourdieu P. Sur la télévision, Paris, Raisons d’Agir, 1996.
4. Chiapello È., Padis M.-O. Écoles de Commerce: La Pression de L’Internationalisation. Esprit, 2012, pp. 18–25.
5. Creagh S. Journal Rankings Ditched: The Experts Respond. The Conversation, 01.06.2011. URL: http://theconversation.com/journal-rankings-ditched-theexperts-respond-1598.
6. Gallois N. Les Conséquences Des Nouveaux Critères D’évaluation Des Chercheurs en Science Économique. L’Économie politique, 2013, vol. 59, pp. 98–112.
7. Gingras Y. How to Boost Your University up the Rankings. University World News, 2014, no. 329. URL: http://universityworldnews.com/article.php?story=20140715142345754.
8. Gingras Y. Le classement de Shanghai n’est pas scientifique. La Recherche, 2009, no. 430, pp. 46–50. URL: http://ost.uqam.ca/Portals/0/docs/articles/2009/Evaluation.Recherche.pdf.
9. Gingras Y. Les dérives de l’évaluation de la recherche. Du bon usage de la bibliométrie, Paris, Éditions Raisons d’Agir, 2014.
10. Gingras Y., Mosbah-Natanson S. La question de la traduction en sciences sociales: les revues françaises entre visibilité internationale et ancrage national. Archives européennes de sociologie, 2010, vol. 51, pp. 305–321.
11. Glänzel W., Moed H. F. Journal Impact Measures in Bibliometric Research. Scientometrics, 2002, vol. 53, pp. 171–193.
12. Hirsch J. E. An Index to Quantify an Individual’s Scientific Research Output. Proceedings of the National Academy of Sciences, 2005, vol. 102, no. 46, pp. 16569–16572.
13. Journals Under Threat: A Joint Response From History of Science, Technology and Medicine Editors. Medical History, 2009, vol. 53, no. 1, pp. 1–4.
14. Mercure S., Bertrand F., Archambault É., Gingras Y. Socioéconomiques de la recherche financée par le gouvernement du Québec, via les Fonds subventionnaires québécois. Études de cas. Rapport présenté au ministère du Développement économique, de l’Innovation et de l’Exportation du Québec. 2007.
15. Pontille D., Torny D. Rendre publique l’évaluation des SHS: les controverses sur les listes de revues de l’AERES. Quaderni, 2012, no. 77, pp. 11–24.
16. Rovner S. L. New Types of Journal Metrics Grow More Influential in the Scientific Community. Chemical & Engineering News Archive, 2008, vol. 86, no. 21, pp. 39–42. URL: http://pubs.acs.org/cen/science/86/8621sci1.html.
17. Saada A. L’évaluation et le classement des revues de sciences humaines par l’Agence de l’évaluation de la recherche et de l’enseignement supérieur (AERES). Connexions, 2010, no. 93, pp. 199–204.
18. Seglen P. O. Why the Impact Factor of Journals Should Not Be Used for Evaluating Research. British Medical Journal, 1997, no. 314, pp. 498–502.
19. Simpson W., Emery J. C. H. Canadian Economics in Decline: Implications for Canada’s Economics Journals. Canadian Public Policy, 2012, vol. 38, pp. 445–470.
20. Van Leeuwen T. N. Testing the Validity of the Hirsch-Index for Research Assessment Purposes. Research Evaluation, 2008, vol. 17, pp. 157–160.
21. Vanclay J. K. An Evaluation of the Australian Research Council’s Journal Ranking. Journal of Informetrics, 2011, no. 5, pp. 265–274.
22. Waltman L., Van Eck N. J. The Inconsistency of the H-Index. Journal of the American Society for Information Science and Technology, 2012, vol. 63, no. 2, pp. 406–415.
23. Zahed A. Saudi University Policy: King Abdulaziz Response. Science, 2012, vol. 335, no. 6072, pp. 1040–1042.
For citations:
Gingras Y. Drifts and Pernicious Effects of the Quantitative Evaluation of Research: The Misuse of Bibliometrics. Versus. 2022;2(2):6-23. (In Russ.)