Methodology

The Academic Reputation Index is the centrepiece of the QS World University Rankings®, carrying a weighting of 40%. It is an approach to international university evaluation that QS pioneered in 2004, and it is the component that attracts the greatest interest and scrutiny. Together with the Employer Reputation Index, it is the aspect that sets this ranking most clearly apart from any other.

Weightings

The Academic Reputation Index contributes, at varying weightings, to each of the following rankings:

QS World University Rankings
QS University Rankings: Arab Region
QS University Rankings: Asia
QS University Rankings: BRICS
QS University Rankings: EECA
QS University Rankings: Latin America

Background

QS World University Rankings® are based partly on hard data and partly on factors drawn from two large global surveys – one of academics and another of employers. These surveys are a defining characteristic of the QS ranking approach and offer several benefits.

QS has rejected many proposed criteria (e.g. financial metrics such as research income) which cannot be independently validated or are subject to exchange-rate and business-cycle fluctuations. Instead, our Advisory Board favours maintaining a strong emphasis on peer review, for several important reasons:

Geographical/Cultural Diversity

Many evaluations seem to be based on a US model of what defines excellence in a university, so their results are often dominated by large, comprehensive, English-speaking universities with medical schools. A widely distributed pool of academic experts helps identify excellence in areas unmapped by other metrics, which is why institutions from 32 countries appear in the top 200 of the QS ranking.

Unbiased approach to different subjects

Without peer review, institutions with key strengths in Arts and Social Sciences might be penalised in the rankings simply because they don’t publish much research.

Contemporary Relevance

Founded as recently as 1991, HKUST came top in the QS Asian University Rankings in 2011. Nanyang Technological University, also formed in 1991 through a merger, is the top-rated university in Asia within the classification of large, multidisciplinary, research-intensive institutions without a medical school.

Reduced Language Bias

Respondents to our academic survey identify research excellence both in English and in their native languages, which avoids a bias towards internationally recognised journals published in English.

Statistical Validity

Over 62,000 academic respondents contributed to our 2013 academic results, four times more than in 2010. Independent academic reviews have confirmed these results to be more than 99% reliable.

Resistant to Data Manipulation

The peer review survey results are collected independently and in such numbers that they are almost impossible to manipulate and very difficult for institutions to ‘game’.

Source of Respondents

The results are based on responses to a survey distributed worldwide to academics drawn from a number of different sources:

Previous Respondents

QS has been conducting this work since 2004 – all previous respondents to our survey are invited to respond again to provide us with an updated viewpoint on the quality of universities in their broad field. In 2014, 1,724 previous respondents returned to revise their response.

World Scientific

www.worldscientific.com
An academic publishing company headquartered in Singapore, World Scientific publishes about 500 titles a year as well as 120 journals in a variety of fields. World Scientific maintains a subscription database of well over 300,000 contacts worldwide, from which, until 2010, QS drew 180,000 active records. The effectiveness of this channel dropped off over the years, and in 2011 QS chose to redirect its efforts and draw more records from the Mardev lists. Responses from this channel will remain in the sample for at least two years, and World Scientific may be drawn upon in the future to fill any specific shortfalls.

Mardev-DM2

The data division of Reed Business Information, Mardev-DM2 is one of the world’s leading providers of business information and services. Mardev-DM2 controls access to IBIS (International Book Information Service), a database with over 1.2 million academic and library contacts. This channel has grown increasingly effective over the years and in 2014 QS drew 200,000 records.

Academic Signup

In 2010, QS initiated an Academic Signup process to enable the thousands of interested academics we meet each year to actively signal their interest in participation. Volunteers are screened to ensure institutions are not using the signup process to unduly influence the position of their own or rival institutions. Over 25,000 academics have signed up since the process was launched in February 2010.

Institution Supplied Lists

Since 2007, institutions have been invited to submit lists of employers for us to invite to participate in the Employer Survey. In 2010, that invitation was extended to lists of academics as well. Since academics are not able to submit in favour of their own institution, the risk of bias is minimal; nonetheless, submissions are screened, and sampling is applied where any institution submits more than 400 records. In 2014, nearly 400 institutions supplied lists, contributing over 190,000 additional academic contacts.

Wherever sampling is required, respondents are selected randomly, with a focus on delivering a sample balanced by discipline and geography. Naturally, all databases carry a certain amount of noise, and email invitations do get passed on. Responses are screened to remove inappropriate entries prior to analysis.
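
Purely as an illustration of this kind of stratified random draw (the field names and the `per_stratum` parameter are hypothetical; only the 400-record cap comes from the description above):

```python
import random
from collections import defaultdict

MAX_PER_INSTITUTION = 400  # cap applied to institution-supplied lists

def sample_contacts(contacts, per_stratum):
    """Randomly sample contacts, balancing by (discipline, region) strata."""
    strata = defaultdict(list)
    for c in contacts:
        strata[(c["discipline"], c["region"])].append(c)
    selected = []
    for group in strata.values():
        k = min(per_stratum, len(group))
        selected.extend(random.sample(group, k))
    return selected

def cap_institution_list(records):
    """Sample down any institution-supplied list exceeding 400 records."""
    if len(records) <= MAX_PER_INSTITUTION:
        return records
    return random.sample(records, MAX_PER_INSTITUTION)
```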

The Survey

The survey has evolved since 2004 but largely follows the same general principles. Respondents are not asked to comment on the sciences if their expertise is in the arts, nor on Europe if their knowledge is centred on Asia. The survey asks each respondent to specify their areas of knowledge at the outset and then adapts accordingly: the interactive list from which respondents make their selections features only entries from their own region.
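
For illustration, the adaptive filtering could look something like the sketch below; the data model and field names are hypothetical, not QS's actual implementation:

```python
def selection_list(universities, regions, faculty_areas):
    """Build the interactive pick-list for one respondent: only
    institutions from the regions and subject areas the respondent
    declared familiarity with at the outset.

    Each university dict is assumed to carry a "region" string and a
    "faculty_areas" set of subject-area labels.
    """
    return [
        u for u in universities
        if u["region"] in regions
        and u["faculty_areas"] & set(faculty_areas)
    ]
```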

The survey is broken into several sections.

Response Processing

The work is not done once the survey is designed and delivered. Once the responses are received, a number of steps are taken to ensure the validity of the sample.

Five Year Aggregation

To boost the size and stability of the sample, QS combines responses from the last five years; where any respondent has responded more than once in that period, previous responses are discarded in favour of the latest.

The survey samples contributing to this work have grown substantially over the lifetime of the project, resulting in inherently more robust reputation measures. The decision has been taken to extend the window for both reputation measures to five years, from three years previously, with responses from the two earliest years carrying relative weights of 25% and 50% respectively.
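
A minimal sketch of this aggregation, assuming each response is keyed by a respondent identifier and tagged with its survey year (the field names are hypothetical):

```python
from typing import NamedTuple

class Response(NamedTuple):
    respondent_id: str   # hypothetical identifier for a survey respondent
    year: int            # year the response was submitted
    data: dict           # the respondent's institution selections

# Relative weights across the five-year window: the two oldest years
# count 25% and 50%; the three most recent years count in full.
YEAR_WEIGHTS = {0: 0.25, 1: 0.50, 2: 1.0, 3: 1.0, 4: 1.0}  # 0 = oldest

def aggregate(responses, current_year):
    """Keep each respondent's latest response from the last five years,
    paired with the relative weight implied by its age."""
    window = [r for r in responses if current_year - r.year < 5]
    latest = {}
    for r in sorted(window, key=lambda r: r.year):
        latest[r.respondent_id] = r  # later years overwrite earlier ones
    oldest = current_year - 4
    return [(r, YEAR_WEIGHTS[r.year - oldest]) for r in latest.values()]
```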

Junk Filtering

Any online survey will receive a volume of test or speculative responses. QS runs an extensive filtering process to identify and discard responses of this nature.
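
QS does not publish its filtering rules, so the sketch below is illustrative only; the heuristics shown are assumptions, not QS's actual criteria:

```python
TEST_MARKERS = ("test", "asdf", "qwerty")  # hypothetical junk signatures

def is_junk(response):
    """Heuristic junk check (illustrative rules only)."""
    name = response.get("name", "").lower()
    if any(marker in name for marker in TEST_MARKERS):
        return True   # looks like a test entry
    if not response.get("selections"):
        return True   # no institutions selected: nothing to score
    return False

def filter_junk(responses):
    return [r for r in responses if not is_junk(r)]
```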

Anomaly Testing

It is well documented, on the basis of other high-profile surveys in higher education, that universities are not above attempting to get respondents to answer in a certain fashion. QS runs a number of processes to screen for any manipulation of survey responses. If evidence suggests that an institution has attempted to overtly influence its performance, any responses acquired through the Academic Signup and Institution Supplied Lists channels (the fourth and fifth sources above) are discarded.
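
The discard rule itself is mechanical. A sketch, assuming each response is tagged with the source channel it arrived through and, where relevant, the institution that submitted the contact (names here are hypothetical):

```python
SCREENED_SOURCES = {"academic_signup", "institution_list"}  # sources 4 and 5

def discard_influenced(responses, flagged_institutions):
    """Drop responses from the screened channels where the submitting
    institution has been flagged for attempting to influence results."""
    return [
        r for r in responses
        if not (r["source"] in SCREENED_SOURCES
                and r.get("submitting_institution") in flagged_institutions)
    ]
```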

Results Analysis

Once the responses have all been processed, the fun really begins. The analysis works as follows for each of our five subject areas (a sketch in code follows the list):

  1. Devise weightings based on the regions with which respondents consider themselves familiar – weightings are now based only on completed responses for the given question. This is slightly complicated by the fact that respondents may relate to more than one region.
  2. Derive a weighted count of international respondents in favour of each institution, excluding any self-references.
  3. Derive a count of domestic respondents in favour of each institution, adjusted for the number of institutions available for selection in that country and the total response from that country, again excluding any self-references.
  4. Apply a straight scaling to each of these counts to achieve a score out of 100.
  5. Combine the two scores with a weighting of 85% international to 15% domestic – these proportions were based on analysis of responses received before we separated the domestic and international responses three years ago, but the low weighting for domestic also reflects the fact that this is a world university ranking. We use 70:30 for the employer review.
  6. Take the square root of the result – we do this to draw in the outliers, but to a lesser degree than other methods might achieve. Our intention is that excellence in one of our five areas should have an influence, but not too much of an influence.
  7. Scale the rooted score to present a score out of 100 for the given faculty area.
  8. Combine the five totals with equal weighting to produce a final score, which is then standardised relative to the sample of institutions being used in any given context.
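
As a rough illustration of steps 4 to 8, here is a minimal sketch in Python; it assumes the weighted international and adjusted domestic counts from steps 1 to 3 are already computed, and all function and field names are hypothetical:

```python
import math

def scale(counts):
    """Steps 4 and 7: straight scaling so the largest value maps to 100."""
    top = max(counts.values()) or 1.0
    return {inst: 100.0 * v / top for inst, v in counts.items()}

def faculty_score(intl_counts, dom_counts):
    """Score one faculty area following steps 4 to 7 above.

    intl_counts / dom_counts map institution -> weighted respondent
    count; the regional weighting, self-reference exclusion and
    domestic adjustment of steps 1-3 are assumed applied upstream.
    """
    intl, dom = scale(intl_counts), scale(dom_counts)
    rooted = {}
    for inst in intl.keys() | dom.keys():
        # Step 5: combine 85% international with 15% domestic.
        combined = 0.85 * intl.get(inst, 0.0) + 0.15 * dom.get(inst, 0.0)
        # Step 6: square root to draw in the outliers.
        rooted[inst] = math.sqrt(combined)
    # Step 7: rescale the rooted scores to a score out of 100.
    return scale(rooted)

def academic_reputation(faculty_scores):
    """Step 8: combine the five faculty-area scores with equal weighting
    (standardisation against the institution sample is omitted here)."""
    insts = set().union(*(s.keys() for s in faculty_scores))
    return {
        inst: sum(s.get(inst, 0.0) for s in faculty_scores) / len(faculty_scores)
        for inst in insts
    }
```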