INDICATORS OF THE SCIENTIFIC IMPACT OF NATIONS REVISITED

Author: Allik, Jüri
  1. Indicators of the scientific impact of nations revisited

    Two influential papers, published in Nature and Science, have popularized the idea that, just like economic wealth, the scientific impact of nations can be measured by a simple indicator that counts how many times papers from a country have been cited on average (King 2004, May 1997). These two prominent papers demonstrated that articles published by researchers from wealthy nations are more frequently cited than papers published by researchers from less economically advanced countries, thus supporting the popular view that money can buy scientific excellence. Yet this conclusion was based on a limited sample of countries: May (1997) analysed only 15 and King (2004) 31 predominantly Western, Educated, Industrialized, Rich, and Democratic (WEIRD) countries (cf. Henrich, Heine, and Norenzayan 2010a, 2010b), which form a relatively small fraction of economically well-developed nations. However, we know very little about whether and to what extent May's (1997) and King's (2004) findings can be generalized to other countries. Another reason for caution is that both studies looked at the relationship between economic and scientific wealth in isolation from other important societal factors that may influence the observed relationship. Although a typical bibliometric analysis prefers to focus on variables related to articles, authors, references, and citations (e.g. Xie et al. 2019), there is convincing evidence from many studies that both the quantity and quality of countries' research output are significantly influenced not just by economic but also by social and cultural factors (Harzing and Giroud 2014, Leydesdorff and Wagner 2009, Mueller 2016, Schofer 2004, Tahamtan, Afshar, and Ahamdzadeh 2016).
The Worldwide Governance Indicators (WGI), for instance, which were developed by the World Bank to characterize the practices and institutions through which authority is exercised in a country (Kaufmann, Kraay, and Mastruzzi 2010), have been found to be an influential factor in driving scientific excellence (Allik, Lauk, and Realo 2020, Gantman 2012).

    One of the most prominent bibliometric trends is a shift from impact scores based on average citation values toward indicators reflecting the top of the citation distribution, such as the number of papers reaching the highest rank of citations (Albarrán, Ortuño, and Ruiz-Castillo 2011, Bornmann 2014, van Leeuwen, Visser, Moed, Nederhof, and van Raan 2003). In accordance with this general development, Allik and colleagues (Allik 2013, Allik et al. 2020) proposed the High Quality Science Index (HQSI), which combines the mean citation rate per paper with the percentage of papers that have reached the top 1% level of citations in a given research area and age cohort of published papers. Interestingly, they discovered that significant correlations between the HQSI and economic indicators--Gross National Income (GNI) and expenditure on research and development (GERD)--became insignificant when the indicator of good governance (i.e. the WGI) was taken into account (Allik et al. 2020). Good governance, to explain very briefly, is when authority is exercised transparently and responsibly, government has the capacity to formulate and implement sound policies effectively, and citizens are respected and social institutions are accountable to the people, not to any one privileged group (Kaufmann et al. 2010). As shown by Allik and colleagues (2020), such well-governed countries, especially if they are relatively small and have no communist past, seem to be more efficient at translating economic wealth into high-quality science.
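The HQSI described above can be sketched in code. The sketch below standardises each component across countries and averages the two z-scores per country; the equal-weight averaging and the country figures are illustrative assumptions, not the exact formula from Allik (2013).

```python
from statistics import mean, stdev

def hqsi(mean_cites, pct_top1):
    """Toy HQSI-style composite: z-score the mean citation rate and
    the top-1% share across countries, then average the two z-scores.
    Equal weighting is an assumption made for illustration only."""
    def z(values):
        m, s = mean(values), stdev(values)
        return [(v - m) / s for v in values]
    return [(a + b) / 2 for a, b in zip(z(mean_cites), z(pct_top1))]

# Hypothetical data for three countries (invented figures)
cites = [12.0, 8.0, 15.0]   # mean citations per paper
top1  = [1.5, 0.8, 2.1]     # % of papers in the global top 1%
scores = hqsi(cites, top1)
```

Because both components are standardised, a country scoring above average on both (the third country here) ends up with a positive composite, while one below average on both gets a negative one.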

    Although the mean citation rate appears to be a sufficiently reliable indicator of a nation's scientific impact (cf. Cole and Phelan 1999, King 2004, May 1997, Prathap 2017), rankings of nations based on citation rates alone may sometimes appear confusing. For example, very few experts would predict that it is researchers from Panama who publish the papers with the highest citation rate in the world (Allik et al. 2020, Erfanmanesh, Tahira, and Abrizah 2017, Monge-Nájera and Ho 2015). Likewise, it was rather unexpected to see Peru, Estonia, and the Republic of Georgia among the world's most scientifically advanced nations (Allik et al. 2020), while scientific superpowers such as the United States, Germany, and Japan had relatively modest scores on the HQSI, out of proportion to their gigantic spending on research and development. These anomalies suggest that there may be some methodological problems in how the scientific impact of nations is measured by the HQSI (Allik 2013, Allik et al. 2020).

    One likely reason for the counterintuitive ranking of nations on the HQSI is its reliance on the number of highly cited or top articles, which may not adequately represent the whole range of papers produced by the researchers of each country (cf. Allik et al. 2020). When the Essential Science Indicators (ESI, Clarivate Analytics) database was created, all scientific output (except for the field of humanities) was divided into 22 research areas with very different publication and citation rates. (In principle, the ESI is an analytical tool that helps to identify top-performing research in the Web of Science (WoS) Core Collection.) This division, however, created a situation where it may be more advantageous for a country to avoid having its papers included in the ESI in certain research areas that are not so well developed and, therefore, could decrease the country's mean citation rate. In other words, countries can achieve an overall higher citation rate per paper if they fail to collect the minimally required number of citations to pass ESI thresholds in those areas in which they are not competitive enough (cf. Allik et al. 2020). One way to achieve this, for instance, is to publish papers in low-impact journals that have little or no chance of being indexed in elite databases such as Scopus (Elsevier) or WoS (Clarivate Analytics) and, consequently, of qualifying for the ESI. Thus, a prominent position in a nation's ranking on the HQSI can be achieved not only by a high citation rate of papers in most or all 22 research areas but also by a relatively high citation rate in very few research areas that pass the ESI threshold.
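The arithmetic behind this selection effect is simple and can be shown with a toy example. All figures below are invented for illustration: a country with one strong and one weak research area looks better on a mean-citation indicator whenever the weak area stays below the ESI entrance threshold.

```python
# Hypothetical country with papers in two research areas
# (all figures invented for illustration):
strong = {"papers": 1000, "citations": 15000}   # 15.0 cites/paper
weak   = {"papers": 400,  "citations": 1200}    # 3.0 cites/paper

# If the weak area clears the ESI threshold, its papers drag down
# the country's overall citation rate:
rate_with_weak = (strong["citations"] + weak["citations"]) / \
                 (strong["papers"] + weak["papers"])

# If the weak area stays below the threshold, only the strong area
# is counted, and the apparent citation rate rises:
rate_without_weak = strong["citations"] / strong["papers"]
```

Here the country's apparent rate climbs from about 11.6 to 15.0 citations per paper purely because the weak area's papers never entered the database.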

  2. The aim of the present study

    The main aim of the study is to improve the HQSI (Allik 2013, Lauk and Allik 2018) by taking into account the number of research areas in which each country has succeeded in collecting the minimally required number of citations to pass the ESI threshold. Failure to reach the required number of citations in a certain area may indicate that the number of published papers in that area and/or their impact was not sufficient to enter the ESI database, and these excluded papers, had they been considered, could have reduced the country's mean citation rate. In other words, the main idea of this study is to supplement citation indicators with a count of the research areas in which a country has exceeded the database entrance threshold. Every failure to reach the ESI was penalized because papers that remained below the entrance threshold would have degraded the citation indicators. In order to distinguish this new revised indicator from the previous HQSI, we name it the Indicator of a Nation's Scientific Impact (INSI). In this paper, we will demonstrate that the new indicator is a more accurate estimator of the scientific merits and societal factors involved in determining the scientific output and impact of nations.
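The penalty idea above can be sketched as follows. This is an illustrative sketch only: the exact INSI formula is defined in the paper itself, and both the toy citation composite and the proportional penalty are assumptions made for the sake of a concrete example.

```python
def insi_sketch(mean_cites, pct_top1, areas_passed, total_areas=22):
    """Illustrative INSI-style score (not the paper's exact formula):
    a toy citation composite is scaled down in proportion to the
    number of the 22 ESI research areas the country failed to enter,
    so every missed area acts as a penalty."""
    citation_component = (mean_cites + pct_top1) / 2  # toy composite
    coverage = areas_passed / total_areas             # penalty factor
    return citation_component * coverage

# Two hypothetical countries with identical citation figures:
broad  = insi_sketch(10.0, 1.0, areas_passed=22)  # present in all areas
narrow = insi_sketch(10.0, 1.0, areas_passed=5)   # present in only 5
```

Under this sketch, a country whose high citation rate rests on only a handful of ESI areas is ranked below one achieving the same rate across all 22, which is the intended correction to the HQSI anomalies described earlier.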

  3. Method

    Data were retrieved from the latest release of the Essential Science Indicators (ESI, Clarivate Analytics, updated on March 14, 2019, https://clarivate.com/products/essential-science-indicators/) available at the time of writing this paper, which covered an 11-year period from 1 January 2008 to 31 December 2018 (see also Allik 2013, Allik et al. 2020).

    In order to be included in the ESI, journals, papers, institutions, and authors need to exceed a minimum citation threshold, which is obtained by ranking journals, researchers, and papers in a respective research field in descending order by citation count and then selecting the top fraction or percentage of papers. For authors and institutions, the threshold is set at the top 1%; for countries and journals, it is the top 50%, over an 11-year period. The main purpose of the division into separate fields is to balance publication and citation frequencies across different research areas.
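The threshold procedure described above can be sketched in a few lines. This is a toy reimplementation of the rank-and-cut logic, not Clarivate's actual code, and the citation counts are invented.

```python
def esi_threshold(citation_counts, top_fraction):
    """Return the minimum citation count an entity needs in order to
    fall inside the top `top_fraction` of a field: sort the counts in
    descending order and take the count at the cutoff rank."""
    ranked = sorted(citation_counts, reverse=True)
    cutoff_index = max(1, int(len(ranked) * top_fraction))
    return ranked[cutoff_index - 1]

# Ten hypothetical countries in one research field; the country-level
# criterion is the top 50%, so the cutoff falls at rank 5:
cites = [120, 95, 80, 60, 55, 40, 33, 20, 12, 5]
threshold = esi_threshold(cites, 0.50)
```

With these figures the entrance threshold is 55 citations: the five countries at or above it enter the field's ESI listing, the other five do not.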

    Among the 153 countries/territories that passed the ESI threshold in at least one research field were several that published only a small number of papers. For example, researchers from Dominica, the Vatican, Bermuda, and the Seychelles published fewer than 300 papers during the last 11 years. In our analyses, we only included countries that published more than 4,000 papers during the 11-year period. Although somewhat arbitrary, this number was chosen based on our previous studies (cf. Allik 2003, 2008, 2013, 2015, Lauk and Allik 2018). Applying this criterion, 53 countries or territories (36.6%) were left out of further analysis. There were six countries in which scientists published over 3,000 (but fewer than 4,000) papers, namely Zambia, Burkina Faso, Uzbekistan, Sudan, Macedonia, and Zimbabwe; including these did not alter the results...
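The inclusion criterion amounts to a simple filter on paper counts. The country names and counts below are invented placeholders; only the 4,000-paper cut-off comes from the text.

```python
# Hypothetical paper counts per country over the 11-year window
# (names and figures invented for illustration):
paper_counts = {
    "CountryA": 250_000,
    "CountryB": 4_100,
    "CountryC": 3_500,   # below the criterion: excluded
    "CountryD": 290,     # far below the criterion: excluded
}

# Keep only countries exceeding the 4,000-paper inclusion criterion
included = {c: n for c, n in paper_counts.items() if n > 4_000}
```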
