
(The opinions expressed in this article are those of the author and do not necessarily reflect those of Al-Fanar Media).
In an increasingly globalised world driven by knowledge and innovation, global university rankings have become influential tools for assessing the quality and reputation of higher education institutions worldwide. However, criticisms have emerged over how credible such rankings are, the inherent biases they carry, and the consequent injustice they impose on universities in developing countries.
This essay aims to shed light on these issues and to explain why they place institutions in developing nations at an unfair disadvantage.
Global university rankings are based on indicators that favour universities from developed countries, such as research output, citation impact, international reputation, and resources. These indicators may not reflect the quality, diversity, and social impact of universities in developing countries, which may have different missions, contexts, and challenges.
Moreover, global university rankings are influenced by reputational bias: the perceived quality and prestige of a university affects its placement, regardless of its actual achievements. Reputational bias can be introduced through surveys, citation practices, journal procedures, and peer-review processes that favour well-known and established universities, often from developed countries.
World university rankings are also subject to a feedback loop, in which the results of one ranking reinforce existing inequalities and biases in the higher-education system. Universities that achieve high rankings attract more resources, funding, students, and researchers, which in turn improves their position in future rankings, while universities that rank low face ever greater difficulties in competing and improving their performance.
Lack of Credibility
Critics caution against putting too much faith in rankings’ results for several reasons, including:
Unscientific methodologies. Many university rankings employ subjective criteria and methodologies that are prone to manipulation and do not always reflect the real quality of an institution. Factors like peer review, faculty-to-student ratios, and international diversity are open to interpretation and can generate biased or inconsistent results.
Limited representation. University rankings rely heavily on performance indicators that primarily value research output and citations. As a result, they often overlook other crucial aspects of universities, such as teaching quality, community outreach, or academic partnerships, and lack a comprehensive evaluation of universities’ multifaceted contributions.
Inherent Bias
Studies have pointed out that global university rankings are not neutral tools, and that structural biases affect their results. Following are two examples:
Language and cultural bias. Many university rankings favour Anglophone universities, creating a bias against institutions from non-English speaking countries. This overlooks the excellent research and academic practices published in other languages, reducing the representation of expertise across diverse academic communities worldwide.
Funding disparities. Universities in developing countries often face financial constraints, hindering their ability to compete with well-established institutions in developed nations. Global rankings tend to favour universities with greater research grants, private endowments, or government funding, creating a structural bias against institutions that lack resources for research and development.
Injustices for Developing Countries
The emphasis placed on global university rankings has unfair consequences for universities in developing nations. These include:
Reproduction of inequalities. World university rankings reinforce existing inequalities, where prestigious universities, mostly from developed nations, dominate the top positions. This creates a cycle in which elite universities attract more resources, funding, and international students, widening the gap between these powerful institutions and those in developing countries. Because the rankings list only a limited number of institutions, no more than a dozen or so universities from each developing country can appear, and their inclusion often amounts to little more than token representation.
Hindrance to institutional growth. Universities in developing nations often face challenges in allocating scarce resources to research and infrastructure development. As the rankings emphasise research output, these universities find it difficult to compete, limiting their ability to offer quality education and research opportunities to their students and researchers.
Potential Solutions
While there is no one-size-fits-all solution, universities in developing countries can take several steps to improve their standing despite the bias against them in global classifications. These include:
Developing alternative rankings that consider the diversity, inclusivity and social impact of universities, not just their research output. For example, the Times Higher Education Impact Rankings assess universities against the United Nations’ Sustainable Development Goals (SDGs), including SDG 10: reduced inequalities. Another example is the Alternative University Appraisal (AUA), which evaluates universities based on their contribution to sustainable development and social transformation.
Supporting collaboration and partnership between universities in developed and developing countries, to share resources, expertise, and best practices. This can help foster mutual learning and capacity building, as well as increase the visibility and recognition of universities in developing countries.
Promoting policies and initiatives that encourage access and participation of students and staff from under-represented groups, such as first-generation students, students from low-income backgrounds, students with disabilities, and students from developing countries. This can help create a more diverse and inclusive academic community, as well as enhance the quality and relevance of education and research.
Focusing on their strengths and unique offerings, such as specialised programmes or research areas, and highlighting these in their marketing and branding efforts. Additionally, universities can work to diversify their staff, increase incentives, monitor performance, seek feedback from students, be forward-thinking, and draw up a long-term strategy to generate the levels of research output required for consideration in the rankings.
Mohamed Al-Rubeai is an emeritus professor and Conway Fellow at University College Dublin, chairman of the Network of Iraqi Scientists Abroad (NISA), and an international adviser on higher education. His latest book, “Education Issues in Iraq: Difficulties, Challenges and Solutions”, was published in London last year.
Related Reading
Al-Fanar Media articles:
- An Arab Ranking of Arab Universities Will Benefit the Region
- Arab Educators Favour an Arab University Ranking System, 7-1, in Poll
- Arab Universities Celebrate Progress in the SCImago Rankings
Articles in other media outlets and journals:
- “Critiques Mount Around Popular Annual College Rankings”. Deidre McPhillips, CNN, September 12, 2022.
- “The Absurdity of University Rankings”. Jelena Brankovic, LSE Impact of Social Sciences blog, March 22, 2021.
- “Territorial Bias in University Rankings: A Complex Network Approach”. Loredana Bellantuono et al., Scientific Reports, 2022.
- “Analyzing the Impact of Reputational Bias on Global University Rankings Based on Objective Research Performance Data: The Case of the Shanghai Ranking (ARWU)”. Vicente Safón and Domingo Docampo, Scientometrics, 2020.
- “University Rankings: A Closer Look for Research Leaders”. Elsevier, 2021.
- “University Rankings: How Do They Compare and What Do They Mean for Students?” Miguel Antonio Lim, The Conversation, September 28, 2018.