
Evaluating rankings – Perception is everything… or is it?

by Ben Sowter

 

In a recent article in Inside Higher Ed, Philip Altbach, commenting on the latest set of rankings from THE, said: “Why do Bilkent University in Turkey and the Hong Kong Baptist University rank ahead of Michigan State University, the University of Stockholm, or Leiden University in Holland? Why is Alexandria University ranked at all in the top 200? These anomalies, and others, simply do not pass the ‘smell test’. Let it be hoped that these, and no doubt other, problems can be worked out.”

I would like to explore this notion of a “smell test” a little further since, in reality, it seems to be the single factor that determines the global credibility of any of these evaluations in the eyes of their many observers worldwide.


Simplicity is a valuable asset

by Ben Sowter

 

Rankings of anything seem very good at attracting attention, and the simpler they are, the more easily and effectively they draw it. If you have ever told a clever joke and then been called upon to explain it, you will understand what I am referring to: by the time your audience has understood the joke, it has ceased to fulfil its primary purpose – to make people laugh.

There is a great deal of chatter online at the moment – speculation about what newly released rankings might look like, and what will and won’t be included. The new THE/Thomson exercise and the CHERPA project through the European Commission are generating particular speculation. The premise on which both of these projects are being discussed is that existing rankings do not fairly measure every aspect of university quality, nor do they recognise the differing nature and structure of different institutions.

Any ranking operated at a global level will be constrained by the quality and quantity of data available and by the opinions of its designers and contributors. The worrying trend at the moment is that two underlying assumptions are beginning to resonate throughout this discussion:

  1. There is a “perfect solution” – or at least one that will meet with dramatically higher acceptance than those already put forward; and
  2. The stakeholders in rankings are like lemmings and will automatically accept the conclusions of one, or the average of all, rankings they consider respectable.

The CHE is at the opposite end of the scale from the Shanghai and QS methodologies – it gathers masses of data from Germany and surrounding countries but doesn’t actually rank institutions or aggregate indicators. Their argument, and perhaps it is a valid one, is that it is not for them to decide what represents quality in the mind of the average stakeholder – particularly students. Fair enough, but, broadly speaking, the more prescriptive rankings are not making this assertion either. To my knowledge, neither Shanghai Jiao Tong nor QS has ever asserted that their results should be used as the only input to important decisions – the responsibility for such decisions remains with the individual making them.

All is not quiet…

by Ben Sowter

 

Dear readers,

I must apologize for the silence over the last few months.

Most frequent readers of this blog will know that the publishing arrangements for our rankings will be changing. In October, THE notified us – and, on the same day, the world – that they would no longer publish our rankings and would be doing something different.

We felt it best to let the initial news sink in before putting forward our position.

QS owns the intellectual property in the previous methodology and in all previous data relating to the rankings published in THE for the past six years. The QS World University Rankings will continue to be published in 2010, albeit through a number of new channels which we are working on. At present there are no plans to alter the methodology; in fact, it seems important to maintain some comparability at a time when a number of new and different interpretations are going to emerge. So in 2010 we are focused on improving our engagement with institutions, redesigning some of our data collection systems to be more user-friendly and intuitive, and extending our work in specific regional and discipline-oriented contexts.

It has been extremely busy of late, and keeping the blog up to date has been a clear challenge. I would welcome any contributions, but we will try to keep things going a little more consistently in 2010.

2009 THE – QS World University Rankings Complete

by Ben Sowter

 

Apologies for being silent for so long. Not only have we been exceptionally busy compiling the latest version of the World University Rankings, but I am also pleased to announce that I have become a father for the first time – further disrupting my plans to update frequently.

We have finished our final checking and analysis for the 2009 rankings and submitted the necessary data to Times Higher Education for publication on 8th October. The Top 200 list will emerge on www.topuniversities.com on the 8th, with the complete tables to follow on the 9th. What’s more, if all goes to plan, this year’s tables will be interactive, enabling users to add and remove columns, sort by different factors and compare institutions. Busy busy.

This year’s results will be the most stable yet, with the average change in position amongst the top 100 down to 7.4 places from last year’s 11.6, and an average shift across the top 500 of 25 places, down from 31. Good news in general terms, then, but there are still some surprises, some interesting new entries, some regional shifts in influence and even changes in the top 10.
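For the curious, that stability figure is nothing more exotic than the mean absolute change in position for institutions appearing in both years’ tables. A minimal sketch in Python, using invented institutions and ranks rather than real results:

```python
# Illustrative only: the institutions and ranks below are invented.

def average_position_change(ranks_prev, ranks_curr, top_n=100):
    """Mean absolute rank shift for institutions in this year's top_n
    that were also ranked last year."""
    shifts = [
        abs(rank - ranks_prev[name])
        for name, rank in ranks_curr.items()
        if rank <= top_n and name in ranks_prev
    ]
    return sum(shifts) / len(shifts) if shifts else 0.0

prev = {"University A": 1, "University B": 5, "University C": 12}
curr = {"University A": 2, "University B": 3, "University C": 12}
print(average_position_change(prev, curr))  # (1 + 2 + 0) / 3 = 1.0
```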

Technical challenges with tracking publications and citations for certain institutions

by Ben Sowter

 

Tracking all the papers and citations data we need from the Scopus database to fuel our evaluations is quite a challenge, and our process has always resulted in some discrepancies between the results we are using and the results you can actually retrieve from Scopus at any given moment. Scopus is an ever-changing database: not only are Elsevier working very hard to add more journals, in more languages, and to backfill older records, they are also working hard to consolidate affiliations and make it easier to retrieve all the data for a given author or institution. The database is vast, however, and the variants are many – MIT, for example, at one point in time had 1,741 name variants. Additionally, as time goes by, more papers get published and more citations get filed.
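To give a flavour of what consolidation involves, here is a minimal sketch of the normalisation step any name-matching has to perform before variants can be collapsed – the alias map and examples are invented for illustration, not drawn from Scopus:

```python
# Illustrative only: the alias map and names are invented, not Scopus data.
import re

ALIASES = {
    "mit": "Massachusetts Institute of Technology",
    "massachusetts inst technol": "Massachusetts Institute of Technology",
}

def normalise(name):
    """Lowercase, strip punctuation and collapse whitespace."""
    name = re.sub(r"[^\w\s]", " ", name.lower())
    return re.sub(r"\s+", " ", name).strip()

def canonical(name):
    """Map a raw affiliation string to a canonical institution name,
    falling back to the string itself when no alias is known."""
    return ALIASES.get(normalise(name), name)

print(canonical("MIT"))                           # Massachusetts Institute of Technology
print(canonical("Massachusetts Inst. Technol."))  # same result, via the alias map
```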

Our analysis is based on “custom data” exported from Scopus at a fixed point in time, defined within fixed limits. We use the last five complete years for both papers and citations – that is to say, we take a count of all papers published in the five years leading up to December 31st of the previous year and the total of any citations received during the same period. By the time the Times Higher Education – QS World University Rankings are published in October, there will be ten more months of papers and citations appearing in the online version of Scopus.
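In code terms the window is simple enough. A minimal sketch, assuming a list of paper records carrying a publication year and a pre-computed in-window citation count (both field names are my own invention for illustration):

```python
# Illustrative only: field names and records are invented for this sketch.

def five_year_window(papers, rankings_year):
    """Count papers published in the five complete years before the
    rankings year, plus the citations those papers received in the
    same window."""
    end = rankings_year - 1   # last complete year: 2008 for the 2009 exercise
    start = end - 4           # five complete years: 2004-2008
    in_window = [p for p in papers if start <= p["year"] <= end]
    return len(in_window), sum(p["citations_in_window"] for p in in_window)

papers = [
    {"year": 2004, "citations_in_window": 10},
    {"year": 2008, "citations_in_window": 3},
    {"year": 2009, "citations_in_window": 1},  # published after the cut-off
]
print(five_year_window(papers, 2009))  # (2, 13)
```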

The custom data for the forthcoming 2009 analysis amounts to 18GB of raw XML data. Alongside this, Elsevier provide an affiliation table – an ever-improving lens we can use to identify the mappings required to retrieve the aggregate data we need. We search this affiliation table for strings that match the universities (or their alternate names) in our database, which returns a list of eight-digit affiliation ID numbers that we can then use to retrieve and aggregate data from the main dataset. If key names are missing from the affiliation table, it is very difficult to identify any content that may exist in the main dataset.
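Conceptually, the lookup works something like the sketch below – purely illustrative, with invented field names and records rather than the real Scopus schema:

```python
# Illustrative only: invented field names and records, not the Scopus schema.

def affiliation_ids(affiliation_table, search_names):
    """Return the IDs of affiliation entries whose name contains any of
    the search strings (case-insensitive substring match)."""
    needles = [n.lower() for n in search_names]
    return {
        row["id"]
        for row in affiliation_table
        if any(needle in row["name"].lower() for needle in needles)
    }

def aggregate_counts(main_dataset, ids):
    """Sum paper and citation counts over the matched affiliation IDs."""
    papers = sum(r["papers"] for r in main_dataset if r["affiliation_id"] in ids)
    cites = sum(r["citations"] for r in main_dataset if r["affiliation_id"] in ids)
    return papers, cites

table = [
    {"id": "60000001", "name": "Example University"},
    {"id": "60000002", "name": "Example Univ School of Medicine"},
]
data = [
    {"affiliation_id": "60000001", "papers": 1200, "citations": 5400},
    {"affiliation_id": "60000002", "papers": 300, "citations": 900},
]
ids = affiliation_ids(table, ["Example University", "Example Univ"])
print(aggregate_counts(data, ids))  # (1500, 6300)
```

The weakness described above falls straight out of this: if an institution never appears in the affiliation table under any of the names we search for, its records in the main dataset are effectively invisible to us.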

Since the publication of the QS.com Asian University Rankings, a couple of institutions have come forward to point out that, to some degree or another, data is missing for their institution. This was discovered thanks to our practice of sharing a “fact file” with institutions prior to publication. Each of them is now working with QS to ensure that any shortfall is rectified in the future.

In future we will be splitting our fact file distribution in two, with one part coming out long in advance of publication and a media briefing, including the ranking results, following two days prior to the publication date.