The QS World University Rankings® is amongst the most established, respected and popular rankings of its kind. Yet it relies heavily on the aggregated outcomes of two major international surveys. Occasionally the validity of those surveys, or indeed the notion of using surveys at all, is called into question, and today, it seems, is one of those times.
The individual response of a single academic is, by its very nature, subjective, though perhaps no more so than a single citation; gather 46,000 of them, however, and we're really on to something. The QS academic reputation measure is amongst the most stable of our measures: it is discipline-independent, and the survey is conducted in 13 languages. The quality control and compilation methodology is complex, balancing geographical and discipline patterns and emphasising international renown. The method is described in more detail here. All measures so far put forward to compare global universities have their drawbacks. That the survey reflects a historical view is perhaps the criticism most commonly put to us, but in actuality our top 100 universities are getting younger, and institutions under 30 years old are among those to shine. Moreover, the measure overcomes the language and discipline biases inherent in bibliometric analysis.
But… is the survey open to manipulation? A question raised today by Inside Higher Ed.
Our answer is “no”, and what follows is a list of the processes, checks, balances and controls that lead us to that conclusion.
An invitation-only survey would be no less vulnerable, particularly where the criteria for selection and sampling were well known. With inevitably smaller sample sizes, the results would in fact be easier to influence; and if an invitation-only design led to complacency over further, more robust controls and monitoring, confidence would likely be lessened rather than increased. Furthermore, selection bias would almost certainly introduce additional distortions.
We have been doing this for ten years and have devised a set of robust processes and procedures to ensure the validity of the resulting measures. They are as follows:
1) Strict policy for participation
As a policy, it is not permitted to solicit or coach specific responses from expected respondents to any survey contributing to any QS ranking. Should the QS Intelligence Unit receive evidence of such activity occurring, institutions will receive one written warning, after which responses to that survey on behalf of the subject institution may be excluded altogether for the year in question. Not only are responses found to be invalid discounted from consideration, but any institution found to be engaging in such activity will attract a further penalty in the compilation of the results for the given indicator. The full ‘Policy and Conditions’ statement is available here.
2) Inability to select one’s own institution
We encourage the respondent to voice their genuine opinion on up to 40 institutions (10 domestic and 30 international). Respondents may not select their own institution.
3) Sign-up screening processes
The QS Intelligence Unit checks every request to participate in the QS Global Academic Survey through the academic sign-up facility for validity and authenticity. Only those who have passed the screening process will be contacted.
4) Sophisticated anomaly detection algorithms
The QS Intelligence Unit routinely runs anomaly detection routines on its survey responses. These algorithms are designed to detect unusual jumps in performance or atypical response patterns. Responses not meeting certain parameters are removed, and institutions showing unusual or unlikely gains are scrutinized in-depth.
5) Market-leading sample size
The QS Global Academic Survey attracts a market-leading sample size. With over 46,000 academic responses considered, the statistical robustness of the survey is substantial. Only a large, concerted, and therefore detectable, effort to influence the results is likely to have an effect.
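To give a rough sense of the dilution effect of a large sample, consider some back-of-the-envelope arithmetic (the mention counts and campaign size here are invented for illustration; they are not real survey figures):

```python
# Hypothetical figures: an institution mentioned by 1,000 of 46,000
# respondents, plus a coordinated campaign adding 200 extra responses.
total, mentions, campaign = 46_000, 1_000, 200

before = mentions / total                         # share of mentions before
after = (mentions + campaign) / (total + campaign)  # share after the campaign

print(f"before: {before:.2%}, after: {after:.2%}")
```

Even a 200-response campaign moves the mention share only from about 2.2% to about 2.6%, and a coordinated effort of that scale is exactly the sort of pattern the anomaly checks above are designed to catch.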
6) Academic integrity
Whilst there will be exceptions in any population, academics typically place great value on their “academic integrity”. We believe the vast majority of our respondents give us their unfettered opinion of the institutions they consider strongest in their field, regardless of whether or not any external party has tried to influence their decision through direct or indirect means of communication.
7) International emphasis
The survey analysis is designed so that international responses are strongly favoured over domestic responses. Influencing international responses is a much more difficult task than affecting the opinion of domestic academics, who are more likely to be familiar with universities in their own country.
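The exact weights QS applies are not public; as a hedged sketch of how favouring international responses blunts domestic campaigning, suppose international and domestic support were combined with illustrative weights of 0.85 and 0.15 (these numbers are assumptions, not QS's own):

```python
def reputation_score(intl_mentions, dom_mentions, w_intl=0.85, w_dom=0.15):
    """Combine international and domestic mention counts, weighting
    international renown more heavily. Weights are illustrative only."""
    return w_intl * intl_mentions + w_dom * dom_mentions

# Under these weights, 100 extra domestic mentions are worth less
# than 20 extra international ones.
print(reputation_score(0, 100))  # 15.0
print(reputation_score(20, 0))   # 17.0
```

Under any weighting of this shape, a campaign aimed at domestic academics yields sharply diminishing returns compared with genuine international recognition, which is far harder to manufacture.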
8) Three-year sampling
Responses are combined with those from the previous two years, eliminating the older response from anyone who has submitted in more than one year. This diminishes the influence of any changes in response patterns in the current year. To have a substantial impact, any effort to influence the results would have to be sustained for three years.
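The deduplication rule described above, where a respondent's newer submission supersedes any older one, can be sketched as follows (the respondent IDs and data layout are invented for the example):

```python
def combine_three_years(responses_by_year):
    """responses_by_year: dicts mapping respondent ID -> response,
    ordered oldest to newest. A respondent appearing in more than one
    year keeps only their most recent submission."""
    combined = {}
    for year in responses_by_year:  # oldest first
        combined.update(year)       # newer years overwrite older entries
    return combined

y2011 = {"resp_a": ["Univ X"], "resp_b": ["Univ Y"]}
y2012 = {"resp_b": ["Univ Z"]}   # resp_b's 2011 answer is superseded
y2013 = {"resp_c": ["Univ X"]}
print(combine_three_years([y2011, y2012, y2013]))
# {'resp_a': ['Univ X'], 'resp_b': ['Univ Z'], 'resp_c': ['Univ X']}
```

Because each year's pool is a rolling blend of three years of responses, a one-off burst of favourable responses is diluted by two years of unaffected data.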
9) Watch List
The QS Intelligence Unit maintains a list of institutions which have qualified themselves for additional scrutiny in our process, known as the “Watch List”. Any institution seen to be attempting to influence the outcome is automatically added to this list. When we conduct our analysis we will examine responses in favour of Watch List institutions with particular care, to ensure that they receive no undue advantage.
10) QS Global Academic Advisory Board
The QS Global Academic Advisory Board consists of thirty esteemed members of the international academic community, whose task is to uphold the integrity of the methodology behind the QS rankings. Executive members of the board include John O’Leary, Martin Ince, Ben Sowter and Nunzio Quacquarelli, the four originators of the World University Rankings when they were first launched in 2004. Collectively, these executive members have over 50 years’ experience in ranking universities.