How Can Educational Success Be Measured?
In Arab countries, as in the rest of the world, before success can be measured, it must first be decided what is being measured. Graduation rates? Student satisfaction? Proficiency in certain disciplines? Employment rates after graduation? Future earnings? Or perhaps less tangible factors, such as critical thinking and communication abilities.
Ever since 1983, when the American magazine U.S. News and World Report published a list of “America’s Best Colleges,” the idea that success in higher education could be reduced to a set of measurable factors has been attacked, often by institutions unhappy with their place in the rankings. In schools, too, there is a global debate about how much pupils’ performance on standardized tests, including the tests that determine college placement, should be the measure of schools’ success.
Although the Arab world has some of the oldest universities in the world, they never score very highly in the Western rankings, whose upper reaches are dominated by the big, rich, research-focused American and British universities with a tiny sprinkling of schools from Europe and East Asia. In the Times Higher ranking, the highest-ranked Arab institutions, King Saud and King Abdulaziz, are barely in the top 400. In the QS rankings, the highest Arab school is the King Fahd University of Petroleum and Minerals at 216.
Yet as Times Higher learned in 2011, even “objective” measures can be gamed. It turned out that the surprisingly high position of Alexandria University that year (which at 147 was four places above Georgetown) was entirely due to the fact that a single scholar connected with the Egyptian institution, the mathematician Mohamed El Naschie, published over 320 of his own articles in an obscure scientific journal of which he was also the editor. Because of the way Times Higher counted research impact, Alexandria ranked 4th in the world in that category, ahead of both Harvard and Stanford.
Ellen Hazelkorn, author of Rankings and the Reshaping of Higher Education, warns that by identifying quality with what can be counted, rankings allow wealthier universities to throw money into recruiting faculty members who have a disproportionate impact on a university’s position. Many observers have attributed the rapid rise of the Saudi universities to precisely such a strategy.
“I understand completely the difficulty of building a post-oil 21st-century economy (and society) in the midst of a very complex social, cultural and political region,” said Hazelkorn, who is also director of the Higher Education Policy Unit at Ireland’s Dublin Institute of Technology, in an interview. But she added “I would caution about unconsciously importing indicators from other world regions. Whatever is measured should reflect the values and objectives of the society. To do anything else is an abdication of national sovereignty.”
“Many emerging societies have simply grafted on rankings,” Hazelkorn said. “Macedonia is a good example of using [the Shanghai rankings] without any understanding that too narrow a focus on traditional academic activity (measuring publications and citations) could actually undermine ensuring that the research makes a real impact on the society and the economy.”
For Connell Monette, assistant vice president for academic affairs at Al Akhawayn University in Morocco, rankings can sometimes be an effective measure of success. “What matters more is the metrics which are used in the ranking system,” said Monette. “For the African continent, or for North Africa and the Mediterranean, the success of a university might legitimately be considered along different parameters than American or Asian or European institutions. African institutions might be measured in how they contribute to development at the local, regional and national levels: Are their graduates getting jobs, are they contributing to the development of local industries or agriculture?”
Academics in the developing world have long argued that education must be evaluated by its effect on entire nations. In the humanities this can mean fostering a sense of cohesion based on pride in a country’s history and culture. But paying attention to local needs can also shape perspectives on science. “African universities may not be competing internationally in terms of cutting-edge scientific research,” said Monette, “but they will be doing R&D (research and development) that impacts their country and continent in concrete and positive ways, like sustainable resource management or national human resource development.”
However you define success in education—through rankings, economic impact, surveys of graduate satisfaction, or simply as the development of happy, productive, well-informed citizens—assessment is impossible without data.
In the United States in particular, the last 15 years have seen a running debate over “accountability”—using data to hold schools and universities accountable for how good a job they are doing at educating their students. Legislation has made high-stakes tests, and the data that come from them, a centerpiece of many students’, teachers’ and professors’ lives.
At the level of schools, the passage of the “No Child Left Behind Act” in 2001 set in motion a series of escalating requirements that proponents said were intended to keep schools from lagging in performance, but that critics said were intended to undermine public schools themselves. Schools were required to test students annually on reading and mathematics in the early years of their education, and on science three times over their school careers. Schools had to issue public “report cards” on their own progress, and poor-performing schools had to offer private tutoring and other support services for students.
Debates over whether schools can effectively measure more abstract skills such as critical thinking or the quality of writing have also been commonplace in the United States and elsewhere. Notably, a commonly used examination to measure readiness for university, the SAT, introduced a timed, mandatory essay in 2005 but dropped it this year.
In higher education, the United States has also been one of the places where the accountability debate has been most heated. In 2006, a government-appointed advisory group known as the Spellings Commission pushed for the creation of a national database containing information about universities and, much to the disappointment of jargon-haters everywhere, popularized the phrase “learning outcomes”—the concrete results of a university education.
Although a national database was ultimately created, it does not include measurements of “learning outcomes” or student ratings of universities, as the Spellings Commission had hoped. But many universities, fearing that the federal government might impose a national test of learning outcomes, did begin to measure what they had accomplished with their students.
Other countries, such as Canada and Finland, have de-emphasized high-stakes tests although paradoxically their students do very well on tests administered globally. (See a related article: “Lessons from Finland.”)
For the Arab world, greater openness, crucial as it is to improving Arab education, is only part of the answer. First there needs to be much more discussion of what each society expects from education—and from graduates. In other words before Arab schools and universities can start measuring success, they need to decide what it is that they want to measure. And to accept that not everything that counts towards a successful education can be counted.
Resources Related to this Story:
An article on Seeking Global Quality Controls for Universities, which originally appeared in the U.S.-based Chronicle of Higher Education and was reprinted in Al-Fanar Media, discusses whether a single set of standards could determine quality in higher education.
The Crucial Role of Data in Arab Higher Education discusses a higher-education association that is trying to create a culture of data collection at Arab universities.