University Rankings: There can be no “right answer”.

by Ben Sowter


Part of the excitement of university and business school rankings is that there is no “ultimate solution”. At a symposium at Griffith University in 2007, Nian Cai Liu – who leads Shanghai Jiao Tong’s Academic Ranking of World Universities – was posed the question, “Many rankings use surveys as a component of their methodology; why do you choose not to?”. His matter-of-fact response was, “I’m an Engineer”.

But his team’s selection of Nobel Prizes or Highly Cited Authors as indicators is not intrinsically less questionable as a measure of university quality in the round – which, regardless of stated purpose, is what the results are often being used for. Three days ago, at a comparable event in Madrid organised by Isidro Aguillo and his Cybermetrics team, similar aspersions were cast on surveys in contrast with more “statistically robust” measures such as link analysis, as used for the Webometrics exercise. The supposition was made that simply because the THE-QS exercise is the most “geographically generous” of the four global aggregate rankings, it must somehow be wrong – and that maybe survey bias is to blame for that.

Well I have news for you. THEY ARE ALL WRONG.

The higher profile of universities in China and Hong Kong in THE-QS was cited as evidence of survey bias – whilst it is well documented on our website that the survey response from China, in particular, is disproportionately low. We are working to remedy this, but it is clearly unlikely to strongly favour Chinese institutions – these universities are performing well due to the profile they are building outside China.

Despite the fact that these surveys are currently conducted only in English and Spanish, the survey components carry much less language bias than Nobel Prizes, citations (in any index), cybermetrics, publications in Nature & Science, highly cited authors and many other factors selected by other international evaluations. Respondents, even those responding in English, are cognisant of the performance of other institutions in their own language – and this seems to be coming through in the results.

Sure, there are biases in the surveys, and in the system overall – some are partially corrected for and some are not – but biases exist in every other system too, even if they may not be quite as immediately evident.

The THE-QS work is presented prolifically around the world – by myself, my colleagues, the THE and third-parties. We present it alongside the other exercises and are always careful to acknowledge that each has its value and each, including our own, has its pitfalls. NONE should be taken too seriously, and to date ALL bear some interest if viewed objectively.

The most entertaining input I have received since conducting this work came from an academic who systematically discredited all of the indicators we have been using, but then concluded that, overall, he “liked what we were doing”. It is possible to do that with any of the systems out there – domestic, regional or global. The most savvy universities are using the rankings phenomenon to catalyse and establish keener performance evaluation internally at a faculty, department and individual staff member level. Driving it down to this level can help build actionable metrics as opposed to abstract statistics, and this can lead to a university being able to revolutionise its performance in education and research – and in time, as a side-effect rather than an objective, improve its performance in rankings.

Domestic rankings slow to reveal their past

by Ben Sowter


The THE-QS World University Rankings have now been in existence for five editions; the 2009 release will be the sixth. In some way, major or minor, the methodology has changed for each release…


Add employer review component
Collect full data on 200 additional universities
Switch from 10 years to 5 years for citations measure
Switch from ESI to Scopus for citations
Adopt new normalisation methodology
Insist upon FTE for all personnel metrics
Peer reviewers unable to select own institution
Separate int’l and domestic responses to surveys
Increased response levels to surveys
Tweaks to definitions
New institutions added to study

A ranking is a complex operation, and the data available has evolved over time, as has our understanding of it. As we receive feedback and additional metrics become available, the responsible thing to do is to integrate new developments with a view to improving the evaluation, making it more meaningful and insightful. The effects of these developments are visible and reasonably well documented – on our website you can find results going back to 2005.
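To illustrate what a normalisation step of the sort mentioned above might involve, here is a minimal sketch using z-scores rescaled to a 0–100 band. This is a hypothetical example for illustration only, not the actual THE-QS formula, and the indicator values are invented:

```python
# Hypothetical sketch of indicator normalisation: z-scores rescaled
# to 0-100 with the leading institution at 100. Not the actual
# THE-QS methodology; the input values are invented.
from statistics import mean, stdev

def normalise(raw_scores):
    """Convert raw indicator values to z-scores, then rescale to 0-100."""
    mu, sigma = mean(raw_scores), stdev(raw_scores)
    z = [(x - mu) / sigma for x in raw_scores]
    top, low = max(z), min(z)
    # Linear rescale so the best score maps to 100 and the worst to 0.
    return [100 * (v - low) / (top - low) for v in z]

# e.g. a "citations per faculty member" indicator for five institutions
citations_per_faculty = [12.0, 8.5, 3.1, 9.9, 4.4]
print(normalise(citations_per_faculty))
```

Any change to the normalisation scheme (min-max scaling, z-scores, log transforms) shifts relative positions even when the underlying data is unchanged – one reason year-on-year comparisons are fragile.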

Recently we have been doing some work on a communication project for a British university. Its leadership is concerned about the conclusions the governing body may infer from the results of rankings, and has asked us to make a presentation explaining, in simple terms, some of the shortfalls of rankings and what a change in position might actually mean. In conducting this work, we discovered not only that the two major domestic rankings in the UK are subject to similarly profound “evolutions” in methodology, but also that they seem comparatively unforthcoming with their historical results.

On the first point, the introduction and further development of the National Student Survey has had a dramatic influence on results in 2008 and 2009. On the second, the only way we were able to track results over more than the last two editions was to purchase second-hand copies of the associated books from Amazon and re-key all the results manually. Similarly, the US News rankings seem not to clearly reveal results before the current year. In contrast, both the THE-QS and Shanghai Jiao Tong rankings provide results over a number of years.

Whilst, given the ongoing changes in methodology, it might be misleading to conduct detailed trend analysis over time, the Berlin Principles suggest that transparency is a key expectation for a responsibly conducted ranking. Surely that should include the complete history of a ranking and not simply the most recent edition.

Lies, damn lies and statistics…

by Ben Sowter


It was Benjamin Disraeli who is credited with coining the phrase, “There are three kinds of lies: lies, damn lies and statistics”, and many a commentator would have us accept that any ranking of universities falls under at least one of his three headings. Ranking anything is certainly a complex operation: the selection of indicators is delicate and subjective, and the assignment of weightings similarly so. One thing is for sure – ranking universities (or almost anything else of importance) is an inexact science – but, whilst each has its own drawbacks, the majority of exercises out there have some basis in sound reason.
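To make the delicacy of indicator and weighting choices concrete, here is a minimal sketch of how an aggregate ranking combines pre-normalised indicator scores into a composite. The indicator names, weights and scores are all invented for illustration – they do not correspond to any published methodology:

```python
# Hypothetical composite-score sketch. Indicators, weights and scores
# are invented; every real ranking makes its own (contested) choices.
WEIGHTS = {"peer_review": 0.4, "citations": 0.3,
           "faculty_student": 0.2, "international": 0.1}

def composite(indicator_scores):
    """Weighted sum of pre-normalised (0-100) indicator scores."""
    return sum(WEIGHTS[k] * indicator_scores[k] for k in WEIGHTS)

uni_a = {"peer_review": 90, "citations": 60,
         "faculty_student": 80, "international": 40}
uni_b = {"peer_review": 70, "citations": 95,
         "faculty_student": 60, "international": 90}

# Shifting weight between, say, surveys and citations can reverse
# the order of two institutions without any change in the data.
print(composite(uni_a), composite(uni_b))
```

The point is not that weighted sums are wrong, but that the “right answer” depends entirely on weights that someone chose – which is why the same pair of universities can swap places between ranking systems.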

When looking at any ranking of institutions, it is crucial that the reader takes the time to figure out exactly what is being measured. This month’s unveiling of another global ranking system of universities perhaps reveals the consequences of setting forth on a ranking design with a clear and specific agenda. In addition to the THE-QS, Shanghai Jiao Tong, HEEACT (Taiwan), Webometrics and Leiden (CWTS) exercises, a non-profit organisation in Russia, known simply as RatER, has recently released an additional ranking, which was briefly available online.

At first glance the exercise looks compelling – the website is well laid out and simple to get around… but two crucial red flags are raised…

1. The Methodology

The methodology looks exceptionally detailed, using a far larger number of indicators than any other ranking operating on a global scale. From our experience operating the THE-QS World University Rankings for five years, this level of detail is exceptionally difficult, perhaps impossible, to collect at that scale – especially given the provided list of institutions with which the ranking body claims to have been in contact. The only conclusion can be that many blanks have been left, or arbitrary default values inserted.

2. The Results

Like most rankings, the RatER exercise reveals some reasonably intuitive results – MIT tops their list, Harvard is at number 6… one place behind Moscow State University! It is not unlikely that other systems have been designed in such a way that leading Russian institutions may not feature highly – but in each of the four other systems publishing an aggregate ranking, Harvard is in the top two, and in most it has been for some time. Moscow State? Not so much.

There are a very large number of universities in the world and even a considerable number of good universities. For its particular strengths I have little doubt that Moscow State is an excellent institution, but it is clear that this is a ranking that has been designed to dramatically inflate the performance of Russian institutions.

Whilst there may not be any such obvious evidence of a strong agenda in other rankings, this may serve as a caution to anyone placing too much stock in the results of any ranking system – even our own. Virtually anything can be achieved with the manipulation of statistics, either deliberately or accidentally. Different things are important to each individual stakeholder in a university – take what you need from rankings… and then back it up with your own research to make sure you’re choosing the right institution for you.