QS World University Rankings® are based partly on hard data and partly on factors drawn from two large global surveys – one of academics and another of employers. These surveys are a distinguishing characteristic of the QS ranking approach and offer some important benefits.
QS has rejected many proposed criteria (e.g. financial metrics such as research income) that cannot be independently validated or are subject to exchange rate and business cycle fluctuations. Instead, our Advisory Board favours maintaining a strong emphasis on peer review, for important reasons:
Source of Respondents
The results are based on responses to a survey distributed to academics worldwide, drawn from a number of different sources:
An academic publishing company headquartered in Singapore, World Scientific publishes about 500 titles a year as well as 120 journals in a variety of fields. World Scientific holds a subscription database of well over 300,000 contacts worldwide, from which, until 2010, QS drew 180,000 active records. The effectiveness of this channel had dropped off over the years, and in 2011 QS chose to redirect its efforts and draw more records from the Mardev lists. Responses from this channel will remain in the sample for at least two years, and World Scientific may be drawn upon in the future to fill any specific shortfalls.
The data division of Reed Business Information, Mardev-DM2 is one of the world’s leading providers of business information and services. Mardev-DM2 controls access to IBIS (International Book Information Service), a database with over 1.2 million academic and library contacts. This channel has grown increasingly effective over the years and in 2011 QS drew 200,000 records.
Wherever sampling is required, respondents are selected at random, with a focus on delivering a sample balanced by discipline and geography. Naturally, all databases carry a certain amount of noise, and email invitations do get passed on. Responses are therefore screened prior to analysis to remove any that are inappropriate.
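As a rough illustration of the balanced random draw described above, the sketch below groups contact records into (discipline, region) cells and samples from each cell. The record fields `discipline` and `region` and the per-cell quota are hypothetical; this is only one plausible way to implement such a draw, not QS's actual procedure.

```python
import random
from collections import defaultdict

def balanced_sample(contacts, per_cell, seed=None):
    """Draw a random sample balanced across (discipline, region) cells.

    `contacts` is a list of dicts with hypothetical 'discipline' and
    'region' keys; `per_cell` is the target number of invitations per cell.
    """
    rng = random.Random(seed)
    cells = defaultdict(list)
    for contact in contacts:
        cells[(contact["discipline"], contact["region"])].append(contact)

    sample = []
    for members in cells.values():
        rng.shuffle(members)
        sample.extend(members[:per_cell])  # take at most per_cell from each cell
    return sample
```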
The survey has evolved since 2004 but largely follows the same general principles. Respondents are not asked to comment on the sciences if their expertise lies in the arts, nor on Europe if their knowledge is centred on Asia. The survey asks each respondent to specify their areas of knowledge at the outset and then adapts accordingly: the interactive list from which respondents are invited to select features only institutions from their own region.
The survey is broken into the following sections:
Region – regional knowledge responses are grouped into three supersets that define the list of institutions from which the respondent can select. These are: Americas; Asia, Australia & New Zealand; and Europe, Middle East & Africa.
Faculty Area – respondents are asked to select one or more faculty areas in which they consider their expertise to lie. These are Arts & Humanities; Engineering & Technology; Life Sciences & Medicine; Natural Sciences; and Social Sciences. Sections 3 and 4 below are repeated for each faculty area selected.
Field – respondents are asked to select up to two specific fields that best define their academic expertise
Top Domestic Institutions
Top International Institutions
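To make the adaptive behaviour concrete, the sketch below shows how the regional supersets listed above might restrict the institutions a respondent is offered. The superset-to-country mapping is abbreviated and illustrative, and the `country` field on institution records is an assumed name rather than anything confirmed by the methodology.

```python
# Hypothetical, abbreviated mapping from the three regional supersets to
# ISO country codes; institution records are assumed to carry a 'country' field.
SUPERSETS = {
    "Americas": {"US", "CA", "BR", "MX"},
    "Asia, Australia & New Zealand": {"CN", "JP", "AU", "NZ", "IN"},
    "Europe, Middle East & Africa": {"GB", "DE", "FR", "ZA", "AE"},
}

def selectable_institutions(institutions, respondent_regions):
    """Return only institutions located in the superset(s) the respondent
    declared familiarity with at the outset of the survey."""
    allowed = set()
    for region in respondent_regions:
        allowed |= SUPERSETS.get(region, set())
    return [inst for inst in institutions if inst["country"] in allowed]
```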
The work is not done once the survey is designed and delivered. Once the responses are received a number of steps are taken to ensure the validity of the sample.
- Three Year Aggregation
To boost the size and stability of the sample, QS combines responses from the last three years; where a respondent has responded more than once in that three-year period, earlier responses are discarded in favour of the latest (a sketch of this rule follows this list).
- Junk Filtering
Any online survey will receive a volume of test or speculative responses. QS runs an extensive filtering process to identify and discard responses of this nature.
- Anomaly Testing
It is well documented, on the basis of other high-profile surveys in higher education, that universities are not above attempting to get respondents to answer in a certain fashion. QS runs a number of processes to screen for any manipulation of survey responses. If evidence is found to suggest that an institution has attempted to overtly influence its performance, any responses acquired through sources 4 and 5 (above) are discarded.
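The three-year aggregation rule described above can be expressed very simply: pool the last three survey years and, where a respondent appears more than once, keep only their most recent response. The sketch below assumes each response carries hypothetical `respondent_id` and `year` fields.

```python
def aggregate_three_years(responses, current_year):
    """Pool responses from the last three survey years, keeping only the
    latest response per respondent.

    Each response is assumed to be a dict with hypothetical
    'respondent_id' and 'year' keys.
    """
    window = {current_year, current_year - 1, current_year - 2}
    latest = {}
    for r in responses:
        if r["year"] not in window:
            continue
        rid = r["respondent_id"]
        # keep only the most recent response per respondent
        if rid not in latest or r["year"] > latest[rid]["year"]:
            latest[rid] = r
    return list(latest.values())
```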
Once the responses have all been processed, the fun really begins. The calculation works as follows for each of our five faculty areas:
1. Devise weightings based on the regions with which respondents consider themselves familiar – weightings are (now) based only on completed responses for the given question. This is slightly complicated by the fact that respondents are able to relate to more than one region.
2. Derive a weighted count of international respondents in favour of each institution, ensuring any self-references are excluded.
3. Derive a count of domestic respondents in favour of each institution, adjusted against the number of institutions available for selection in that country and the total response from that country, again excluding any self-references.
4. Apply a straight scaling to each of these to achieve a score out of 100.
5. Combine the two scores with a weighting of 85% international and 15% domestic – these numbers were based on analysis of responses received before we separated the domestic and international responses three years ago, but the low weighting for domestic also reflects the fact that this is a world university ranking. We use 70:30 for the employer review.
6. Take the square root of the result – we do this to draw in the outliers, but to a lesser degree than other methods might achieve; our intention is that excellence in one of our five areas should have an influence, but not too much influence.
7. Scale the rooted score to present a score out of 100 for the given faculty area.
8. Combine the five totals with equal weighting to produce a final score, which is then standardized relative to the sample of institutions being used in any given context.
The process outlined above has a number of important implications. Steps (1) and (2) ensure that no single region is given greater emphasis than another, and steps (3) and (4) serve to ensure that a high level of response from any single country does not systematically benefit all the institutions from that country.
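For readers who find pseudocode easier to follow than prose, the sketch below walks through steps 1–8 for a single faculty area and then the equal-weighted combination across areas. All field names (`respondent_inst`, `respondent_country`, `region_weight`, and so on) are hypothetical, the direction of the domestic adjustment in step 3 is an assumption, scaling is taken as proportional to the top performer, and the final standardization mentioned in step 8 is omitted – this is a sketch of the shape of the calculation, not QS's actual code.

```python
from collections import defaultdict
from math import sqrt

def faculty_area_scores(responses, institutions_per_country,
                        intl_weight=0.85, dom_weight=0.15):
    """Sketch of steps 1-7 for one faculty area.

    Each response is assumed to be a dict with hypothetical keys:
    'respondent_inst', 'respondent_country', 'region_weight' (the step-1
    weighting derived from the regions the respondent knows),
    'international' and 'domestic' (lists of selected institutions, each a
    dict with 'name' and 'country'). `institutions_per_country` maps a
    country to the number of institutions available for selection there.
    """
    intl = defaultdict(float)
    dom = defaultdict(float)
    country_total = defaultdict(int)

    for r in responses:
        country_total[r["respondent_country"]] += 1

    for r in responses:
        for inst in r["international"]:
            if inst["name"] != r["respondent_inst"]:      # exclude self-references
                intl[inst["name"]] += r["region_weight"]  # step 2: weighted count
        for inst in r["domestic"]:
            if inst["name"] == r["respondent_inst"]:
                continue
            # step 3: adjust for institutions available and total national
            # response (direction of the adjustment is an assumption)
            adj = institutions_per_country[inst["country"]] / country_total[inst["country"]]
            dom[inst["name"]] += adj

    def scale(counts):
        """Step 4/7: straight scaling to a score out of 100 (relative to the top)."""
        top = max(counts.values(), default=0.0) or 1.0
        return {name: 100.0 * value / top for name, value in counts.items()}

    intl_scaled, dom_scaled = scale(intl), scale(dom)
    combined = {}
    for name in set(intl_scaled) | set(dom_scaled):
        # step 5: combine international and domestic scores 85:15
        score = intl_weight * intl_scaled.get(name, 0.0) + dom_weight * dom_scaled.get(name, 0.0)
        combined[name] = sqrt(score)                      # step 6: square root
    return scale(combined)                                # step 7: rescale to 100


def overall_scores(per_faculty_scores):
    """Step 8: combine the five faculty-area scores with equal weighting."""
    totals = defaultdict(float)
    for scores in per_faculty_scores:
        for name, score in scores.items():
            totals[name] += score / len(per_faculty_scores)
    return dict(totals)
```

The square root in step 6 is the only non-linear part of the pipeline: it compresses the gap between the strongest institution and the rest in each faculty area, so excellence in one area still counts towards the overall score without dominating it.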