Arab University Rankings 2021: methodology

To reflect the region’s unique characteristics, we make adjustments to the rigorous and robust criteria that underpin our World University Rankings to arrive at the inaugural Arab University Rankings.

July 20, 2021

The Times Higher Education World University Rankings are the only global performance tables that judge research-intensive universities across all their core missions: teaching, research, knowledge transfer and international outlook.

The methodology behind the Arab University Rankings is based on the same comprehensive and trusted framework as the global table, but some important adjustments have been made and some new metrics have been included to reflect the features of universities in the Arab region.

We use 16 carefully calibrated performance indicators (listed below). The performance indicators are grouped into five areas: Teaching (the learning environment); Research (volume, income and reputation); Citations (research influence); Society (knowledge transfer and impact); and International outlook (staff, students and research).

The full methodology is published in the file at the bottom of this page.

Teaching (the learning environment): 33%

  • Teaching reputation: 20%
  • Academic staff-to-student ratio: 4%
  • Doctorates-awarded-to-undergraduate-degrees-awarded ratio: 2.5%
  • Doctorates-awarded-to-academic-staff ratio: 5%
  • Institutional income per academic staff: 1.5%

The Arab Reputation Survey that underpins this pillar was carried out between February and April 2021 and yielded more than 12,000 responses. Only academics in the Arab region who had been cited in published papers were invited to participate. The survey examined the perceived prestige of institutions in teaching and research. The teaching reputation metric is based on the total number of teaching votes obtained from this survey; universities that received no votes score zero for this metric.

The academic staff-to-student ratio is defined as the total full-time equivalent (FTE) number of staff employed in an academic post divided by the total FTE number of students in all years and on all programmes that lead to a degree, certificate, university credit or other qualification. This variable and the doctorates-awarded-to-undergraduate-degrees-awarded ratio are normalised after calculation.
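As a small sketch of the ratio as defined above (the FTE figures are invented for illustration, and the subsequent normalisation step is omitted):

```python
def staff_to_student_ratio(fte_academic_staff: float, fte_students: float) -> float:
    """Total FTE academic staff divided by total FTE students
    across all qualification-bearing programmes."""
    if fte_students <= 0:
        raise ValueError("FTE student count must be positive")
    return fte_academic_staff / fte_students

# Hypothetical institution: 850 FTE academic staff, 15,300 FTE students
ratio = staff_to_student_ratio(850.0, 15_300.0)
```

In the real pipeline this raw ratio would then be normalised against the other ranked institutions before being combined with the other indicators.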

As well as giving a sense of how committed an institution is to nurturing the next generation of academics, a high proportion of postgraduate research students suggests the provision of teaching at the highest level, which is attractive to graduates and effective at developing them. This metric is generated by dividing the total subject-weighted number of doctorates awarded by the total subject-weighted number of academic staff (because the volume of doctoral awards varies by discipline). The metric is normalised after calculation.

Institutional income indicates an institution’s general status and gives a broad sense of the infrastructure and facilities available to students and staff. This metric is generated by dividing the institutional income adjusted to purchasing-power parity (PPP), by the total number of academic staff. This variable is normalised after calculation.

Research (volume, income and reputation): 33%

  • Research reputation: 25%
  • Research income per academic staff: 3%
  • Publications per staff: 5%

The most prominent indicator in this category looks at a university’s reputation for research excellence among its peers, based on the responses to our Arab Reputation Survey (see above).

Research income is scaled against academic staff numbers and adjusted for purchasing-power parity (PPP). This is a somewhat controversial indicator because it can be influenced by national policy and economic circumstances. But income is crucial to the development of world-class research, and because much of it is subject to competition and judged by peer review, our experts suggested that it was a valid measure. This indicator takes account of each university’s distinct subject profile, reflecting the fact that research grants in science subjects are often bigger than those awarded for the highest-quality social science, arts and humanities research.

To measure productivity, we count the number of papers published in the academic journals indexed by Elsevier’s Scopus database per scholar, scaled for institutional size and weighted by subject. This gives a sense of the university’s ability to get papers published in quality peer-reviewed journals. This measure includes a method to give credit for cross-subject research that results in papers being published in subjects where a university declares no staff.

Citations (research influence): 20%

Our research influence indicator looks at universities’ role in spreading new knowledge and ideas.

We examine research influence by capturing the average number of times a university’s published work is cited by scholars globally. This year, our bibliometric data supplier Elsevier examined more than 108 million citations to 14.4 million journal articles, article reviews, conference proceedings, books and book chapters published over five years. The data include more than 24,600 academic journals indexed by Elsevier’s Scopus database and all indexed publications between 2016 and 2020. Citations to these publications made in the six years from 2016 to 2021 are also collected.

Elsevier provides the field-weighted citation impact (FWCI) score for each institution, per subject and overall. The final citation score used in the ranking is determined by calculating the 75th percentile of the FWCI score for each institution, as opposed to the mean average for each institution as used in the World University Rankings. We believe that this gives a more stable measure over time and prevents the distorting effect of a few papers with very high numbers of citations.
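A small sketch illustrates why the 75th percentile is more stable than the mean (the per-paper FWCI values below are invented, and treating the percentile as taken over per-paper scores is an assumption about a detail the text leaves open):

```python
from statistics import mean, quantiles

# Invented per-paper FWCI values for one institution, including one
# extreme outlier of the kind the percentile approach is meant to damp.
fwci = [0.4, 0.9, 1.1, 1.3, 2.0, 2.2, 15.0]

mean_score = mean(fwci)  # dragged upwards by the single outlier
# quantiles() with method="inclusive" interpolates between data points;
# index [2] of the quartiles is the 75th percentile
p75_score = quantiles(fwci, n=4, method="inclusive")[2]
```

Here the mean (about 3.3) is pulled up by one highly cited paper, while the 75th percentile (2.1) barely registers it.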

The citations help to show us how much each university is contributing to the sum of human knowledge: they tell us whose research has stood out, has been picked up and built on by other scholars and, most importantly, has been shared around the global scholarly community to expand the boundaries of our understanding, irrespective of discipline.

The data are normalised by Elsevier to reflect variations in citation volume between different subject areas. This means that institutions with high levels of research activity in subjects with traditionally high citation counts do not gain an unfair advantage.

Society (knowledge transfer and impact): 6%

  • Industry income per academic staff: 2%
  • Impact Rankings participation: 2%
  • Impact Rankings performance: 2%

A university’s ability to help industry with innovations, inventions and consultancy has become a core mission of the contemporary global academy. The industry income metric seeks to capture such knowledge-transfer activity by looking at how much research income an institution earns from industry (adjusted for PPP), scaled against the number of academic staff it employs. This suggests the extent to which businesses are willing to pay for research and a university’s ability to attract funding in the commercial marketplace – useful indicators of institutional quality.

Another mission of many higher education institutions is to positively impact wider society. In this ranking, this is defined as their contributions towards the United Nations’ Sustainable Development Goals (SDGs), as measured by the THE Impact Rankings. Participation is measured by calculating the number of SDGs for which an institution is ranked in the latest THE Impact Rankings. A maximum of 100 points is awarded to institutions that participate in four SDGs or more; 80 points for three SDGs; 60 points for two SDGs; 50 points for one SDG; and zero points if they are not ranked.
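The participation bands described above amount to a simple lookup, which can be sketched as follows (the function name is ours, not THE’s):

```python
def impact_participation_points(sdg_count: int) -> int:
    """Map the number of SDGs an institution is ranked in to the
    participation points stated in the methodology text."""
    if sdg_count >= 4:
        return 100
    # 3 SDGs -> 80 points, 2 -> 60, 1 -> 50, unranked -> 0
    return {3: 80, 2: 60, 1: 50}.get(sdg_count, 0)
```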

For the performance metric, institutions that are ranked in the overall table of the latest Impact Rankings receive a metric score that is the same as their overall score in the table. Institutions that are not ranked in the overall table receive zero points for this metric.

International outlook (staff, students and research): 8%

  • Proportion of international students: 2%
  • Proportion of international staff: 2%
  • International co-authorship: 2%
  • Collaboration within the Arab world: 2%

The ability of a university to attract undergraduates, postgraduates and faculty from all over the planet is key to its success on the world stage.

International students and staff are defined as those whose nationality differs from the country where the institution is based. The first two metrics are calculated as the total FTE number of international students or staff divided by the total FTE number of students or staff.

In the third international indicator, we calculate the proportion of a university’s total research journal publications that have at least one international co-author and reward higher volumes. This metric accounts for an institution’s subject mix and uses the same five-year window as the “Citations: research influence” category.
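The co-authorship proportion can be sketched as below. Representing each paper as a set of author country codes is an illustrative assumption, not THE’s actual data model, and the subject-mix adjustment mentioned above is omitted:

```python
def intl_coauthorship_share(papers, home_country):
    """Proportion of papers with at least one co-author based outside
    the institution's home country. Each paper is a set of ISO country
    codes for its authors -- an invented structure for illustration."""
    if not papers:
        return 0.0
    intl = sum(1 for countries in papers
               if any(c != home_country for c in countries))
    return intl / len(papers)

# Hypothetical Egyptian institution: two of four papers have a
# co-author abroad, giving a share of 0.5
papers = [{"EG"}, {"EG", "GB"}, {"EG", "SA"}, {"EG"}]
share = intl_coauthorship_share(papers, home_country="EG")
```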

The final metric is similar to the international collaboration measure, but it looks only at universities within countries in the Arab region: Algeria, Bahrain, Comoros, Djibouti, Egypt, Iraq, Jordan, Kuwait, Lebanon, Libya, Mauritania, Morocco, Oman, Palestine, Qatar, Saudi Arabia, Somalia, Sudan, Syria, Tunisia, the United Arab Emirates and Yemen.


Data collection

Institutions provide and sign off their institutional data for use in the rankings. On the rare occasions when a particular data point is not provided, we enter a conservative estimate for the affected metric. By doing this, we avoid penalising an institution too harshly with a “zero” value for data that it overlooks or does not provide, but we do not reward it for withholding them.

Getting to the final result

Moving from a series of specific data points to indicators, and finally to a total score for an institution, requires us to match values that represent fundamentally different data. To do this, we use a standardisation approach for each indicator, and then combine the indicators in the proportions shown above.

The standardisation approach we use is based on the distribution of data within a particular indicator, where we calculate a cumulative probability function, and evaluate where a particular institution’s indicator sits within that function.

For all indicators except for the Arab Reputation Survey and Impact Rankings participation, we calculate the cumulative probability function using a version of Z-scoring. The distribution of the data in the Arab Reputation Survey requires us to use an exponential scoring function.
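Assuming the Z-scoring step is the standard one, the mapping from raw indicator values to cumulative probabilities might be sketched as follows (a simplified illustration; THE’s exact variant is set out in its published methodology file, not here):

```python
from statistics import NormalDist, mean, stdev

def z_score_standardise(values):
    """Z-score each indicator value, then map it through the normal CDF
    so every indicator lands on a comparable 0-1 cumulative-probability
    scale before the weighted combination."""
    mu, sigma = mean(values), stdev(values)
    nd = NormalDist()
    return [nd.cdf((v - mu) / sigma) for v in values]

# Invented indicator values for five institutions
scores = z_score_standardise([1.0, 2.0, 3.0, 4.0, 5.0])
# the value at the mean maps to 0.5, and the ordering is preserved
```

The reputation-survey and participation indicators would bypass this function: as noted above, the survey’s skewed distribution calls for an exponential scoring function instead.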

Rankings table

Precise ranks and overall scores are shown for the institutions ranked in the top 50. We then display banded ranks and overall scores for institutions in the rest of the table because the difference between their scores is not statistically significant.

A small number of institutions have “reporter” status and are listed at the bottom of the table. This means that they provided data but did not meet our eligibility criteria to receive a rank.

Exclusions

Universities must supply data to be included in the ranking. They also must have published more than 500 research publications between 2016 and 2020.

The Arab University Rankings considers only institutions that are based in the following countries: Algeria, Bahrain, Comoros, Djibouti, Egypt, Iraq, Jordan, Kuwait, Lebanon, Libya, Mauritania, Morocco, Oman, Palestine, Qatar, Saudi Arabia, Somalia, Sudan, Syria, Tunisia, the United Arab Emirates and Yemen.
