2010 QS World University Rankings results

Here is a fuller look – positions 300 to 320 – at our 2010 results; for more rankings (including the full list of the top 500 universities) go to www.topuniversities.com.

2010 Rank   2009 Rank   Institution Name
300= 322 Università di Pisa
300= Tokyo Medical and Dental University
302 299 Massey University
303 401-450 University of Jyväskylä
304 302= Jagiellonian University
305 273= University of Essex
306 259= University of Utah
307 234= Ateneo de Manila University
308 328= University of Eastern Finland
309 314= Universiti Sains Malaysia (USM)
310 346= Ruhr-Universität Bochum
311 335 Indian Institute of Technology Kharagpur (IITKGP)
312 346= Universität Konstanz
313 331= University of Oulu
314 262= University of the Philippines
315 357= Universität des Saarlandes
316= 342 Universität Bielefeld
316= 314= University of Waikato
318 307= Chiba University
319 345 Universiti Putra Malaysia (UPM)
320 326= University of Tasmania

2010 QS World University Rankings® Video – US and Canada focused

Want a deeper insight into the 2010 QS World University Ranking results?

Nunzio Quacquarelli, Managing Director of QS Quacquarelli Symonds, gives a brief description of the QS World University Rankings®.

Ben Sowter, Head of the QS Intelligence Unit, talks about the overall trends in this year’s Rankings and discusses the movements of Canadian universities.

John O’Leary, executive member of the QS Academic Advisory Board, talks about the performance of US universities.

Latin America: an underexplored territory for global education

Latin America may not be considered a first choice by international students for academic exchange, and global universities do not seem to treat the region as a priority when developing exchange partnerships. Asking why leads to the following questions: is there a global understanding of Latin American educational systems, the quality of their programs and their administration processes, or is it merely a matter of location? Perhaps Latin America is seen more as a holiday hotspot than as a strategic choice for strengthening career prospects.

There are some interesting facts about the region. Public expenditure on education is significant in Cuba and Bolivia, where it makes up 9.1% and 6.1% of the national budget respectively – higher proportions than in the USA (5.3%), the UK (5.6%) and France (5.7%) in the same reference year. Furthermore, Mexico, Costa Rica, Colombia, Brazil and Paraguay all invest at least 4% in education. Mexico in particular has made major and consistent investments in education in recent years; its expenditure reached 5.5% of GDP in 2005.

Most of the region’s universities that appear in the QS World University Rankings™ Top 400 are based in Brazil, Mexico, Argentina and Chile – for example UNAM, Tecnologico de Monterrey, Universidad Austral, Universidad de Sao Paulo, UNICAMP and Universidad de Chile.

As the Chilean journal America Economia indicates in its annual ranking of business schools in the region, almost every country has highly qualified and recognised business schools (see table below) that foster exchange programs with well-known universities, particularly in Europe and the USA – for instance ESADE in Spain, HEC in France, HHL in Germany, and, in the USA, Arizona State University, Tulane University and the University of Texas at Austin, among others.

Country University
Colombia Universidad de los Andes
Costa Rica INCAE
Chile Pontificia Universidad Católica de Chile (PUC)
Brazil Fundação Getulio Vargas
México Instituto Tecnológico Autónomo de México (ITAM)
Venezuela Instituto de Estudios Superiores de Administracion
Argentina IAE

Latin America’s population is concentrated mainly in Brazil, Mexico, Colombia and Argentina, which together account for 396.5 million inhabitants – around 70% of the region’s total. Despite the world economic crisis of the past year, the region has achieved average GDP growth of 4%, with Peru, Panama and Argentina growing at 9.9%, 9.2% and 6.8% respectively.

Spanish is the world’s second most spoken mother tongue after Chinese, with 329 million native speakers in 44 countries, and these figures are likely to grow: by 2008 there were already around 14 million students worldwide learning Spanish as a second language**. The number should rise rapidly since, in 2010, Brazil – one of the most populous and market-oriented countries in the region – made Spanish a compulsory classroom subject from the age of 7. It is expected that within a few years an additional 41 million Brazilians under 17 will be able to read and speak Spanish. In the United States, Spanish is the primary language spoken at home by over 34 million people aged 5 or older, representing over 12% of the population. In states such as New Mexico, California and Texas, more than 30% of the population speaks Spanish***. Read more

Simplicity is a valuable asset

by Ben Sowter


Rankings of anything seem very good at attracting attention, and the simpler they are, the more easily and effectively they draw it. If you have ever told a clever joke and then been called upon to explain it, you will understand what I am referring to: by the time your audience has understood the joke, it has ceased to fulfil its primary purpose – to make people laugh.

There is a great deal of chatter online at the moment – speculation about what newly released rankings might look like, and what will and won’t be included. The new THE/Thomson exercise and the European Commission’s CHERPA project are generating particular speculation. The premise on which both of these projects are being discussed is that existing rankings do not fairly measure every aspect of university quality, nor do they recognise the differing nature and structure of different institutions.

Any ranking operated on a global level will be constrained by the quality and quantity of the data available and by the opinions of its designers and contributors. The worrying trend at the moment is that two underlying assumptions are beginning to resonate throughout this discussion:

  1. There is a “perfect solution” – or at least one that will meet with dramatically higher acceptance than those already put forward; and
  2. The stakeholders in rankings are like lemmings and will automatically accept the conclusions of one ranking, or the average of all the rankings they consider respectable.

The CHE sits at the opposite end of the scale from the Shanghai and QS methodologies – it gathers masses of data from Germany and surrounding countries but doesn’t actually rank institutions or aggregate indicators. Their argument, and perhaps it is a valid one, is that it is not for them to decide what represents quality in the mind of the average stakeholder – particularly students. Fair enough, but, broadly speaking, the more prescriptive rankings are not making this assertion either. To my knowledge, neither Shanghai Jiao Tong nor QS has ever asserted that their results should be used as the only input to important decisions – such decisions remain the responsibility of the individual making them. Read more

QS Classifications

by Ben Sowter


The THE – QS World University Rankings attract a great deal of interest and scrutiny each year, and one frequent piece of feedback is the “comparing apples with oranges” observation. The simple fact is that the London School of Economics bears little resemblance to Harvard University in terms of funding, scale, location, mission, output or virtually any other aspect one might be called upon to consider – so how is it valid to include them both in the same ranking? They do, however, both aim to teach students and produce research, and it has always been the assertion of QS and Times Higher Education that this ought to provide a sufficient basis for comparison.

In essence, it is a little like comparing sportspeople from different disciplines in one of the “World’s greatest sportsperson” or “World’s greatest Olympian” rankings that so frequently emerge. How is it possible to compare a swimmer with a rower, a boxer or a football player? Yet such comparisons have fuelled passionate conversation all over the world. The difference, perhaps, is that in that context those talking are aware of who represents which sport. That is where the classifications come in – they are a component, appearing in the tables from 2009, that helps the user distinguish the boxers from the footballers, so to speak.

The Berlin Principles (a set of recommendations for the delivery of university rankings) assert that any comparative exercise ought to take into account the different typologies of its subject institutions. Whilst an aggregate list will continue to be produced, it will now feature labels so that institutions (and their stakeholders) of different types can easily understand their performance not only overall but also relative to institutions of a similar nature.

Based very loosely on the Carnegie Classification of Institutions of Higher Education in the US, but operated on a much simpler basis, these classifications take into account three key aspects of each university to assign its label.

  1. Size – based on the (full-time equivalent) size of the degree-seeking student body. Where an FTE number is not provided or available, one is estimated based on common characteristics of other institutions in the country or region in question.
  2. Subject Range – four categories based on the institution’s provision of programs in the five broad faculty areas used in the university rankings. Due to radically different publication habits and patterns in medicine, an additional category is added based on whether the subject institution has a medical school.
  3. Research Activity Level – four levels of research activity, evaluated on the number of documents retrievable from Scopus in the five-year period preceding the application of the classification. The thresholds required to reach the different levels depend on the institution’s pre-classification on aspects 1 and 2.

This results in each subject institution being grouped under a simple alphanumeric classification code (e.g. A1 or H3). Table 1 lays out the thresholds for the application of the classifications. Read more
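To make the scheme above concrete, here is a minimal sketch (in Python) of how three such inputs could be folded into a single label. This is not QS’s actual method: the size bands, subject-range categories, research thresholds and letter/digit mapping below are all invented for illustration – the real cut-offs are those laid out in Table 1 – and only the three-step structure comes from the description above.

    def classify(fte_students, faculty_areas, has_medical_school, scopus_docs_5y):
        # 1. Size: four bands based on the degree-seeking student body (FTE).
        if fte_students >= 30000:
            size = 0
        elif fte_students >= 12000:
            size = 1
        elif fte_students >= 5000:
            size = 2
        else:
            size = 3

        # 2. Subject range: four categories based on how many of the five broad
        #    faculty areas are covered, with medicine flagged separately because
        #    its publication patterns differ so radically.
        if faculty_areas >= 5:
            coverage = 0
        elif faculty_areas >= 4:
            coverage = 1
        elif faculty_areas >= 2:
            coverage = 2
        else:
            coverage = 3
        letter = "ABCDEFGH"[size * 2 + (0 if has_medical_school else 1)]  # toy mapping

        # 3. Research activity: four levels of Scopus output over the preceding
        #    five years, with thresholds that depend on the pre-classification
        #    above (bigger, broader, medical institutions need more documents
        #    to reach the same level).
        base = [5000, 3000, 1500, 500][min(size, coverage)]
        if not has_medical_school:
            base //= 2
        if scopus_docs_5y >= base * 4:
            level = 1  # very high research activity
        elif scopus_docs_5y >= base * 2:
            level = 2  # high
        elif scopus_docs_5y >= base:
            level = 3  # medium
        else:
            level = 4  # low
        return letter + str(level)

    # A very large, fully comprehensive university with a medical school and
    # 12,000 Scopus documents over five years comes out as "A2" under these
    # invented thresholds.
    print(classify(fte_students=35000, faculty_areas=5,
                   has_medical_school=True, scopus_docs_5y=12000))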

Technical challenges with tracking publications and citations for certain institutions

by Ben Sowter


Tracking all the papers and citations data we need from the Scopus database to fuel our evaluations is quite a challenge, and our process has always resulted in some discrepancies between the results we use and the results you can actually retrieve from Scopus at a given moment. Scopus is an ever-changing database: not only is Elsevier working very hard to add more journals, in more languages, and to backfill earlier records, but they are also working hard to consolidate affiliations and make it easier to retrieve all the data for a given author or institution. The database is vast, however, and the variants are many – apparently MIT, for example, at one point in time had 1,741 name variants. Additionally, as time goes by, more papers are published and more citations are recorded.

Our analysis is based on “custom data” exported from Scopus at a fixed point in time and defined within fixed limits. We use the last five complete years for both papers and citations – that is to say, we take a count of all papers published in the five years leading up to December 31st of the previous year, and the total of any citations received during the same period. By the time the Times Higher Education – QS World University Rankings are published in October, there will be ten more months of papers and citations appearing in the online version of Scopus.

The custom data for the forthcoming 2009 analysis amounts to 18 GB of raw XML – and along with this, Elsevier provides an affiliation table. This table is an ever-improving lens that we can use to identify the mappings required to retrieve the aggregate data we need. We search the affiliation table for strings that match the universities (or their alternate names) in our database, which returns a list of 8-digit affiliation ID numbers that we can then use to retrieve and aggregate data from the main dataset. If key names are missing from the affiliation table, it is very difficult to identify any content that may exist in the main dataset.
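As a rough illustration of that lookup-and-aggregate step, the toy Python below sketches the idea. The record fields, the placeholder affiliation ID and the naive substring matching are assumptions made purely for illustration – the real Scopus export and QS tooling are far richer – but it shows why a name variant missing from the affiliation table causes an institution’s output to drop silently out of the aggregate.

    def match_affiliation_ids(affiliation_table, name_variants):
        """Collect the affiliation IDs whose names contain any known name variant."""
        variants = [v.lower() for v in name_variants]
        matched = set()
        for row in affiliation_table:  # each row pairs an 8-digit ID with one name string
            name = row["name"].lower()
            if any(v in name for v in variants):
                matched.add(row["affiliation_id"])
        return matched

    def aggregate_counts(paper_records, affiliation_ids, end_year):
        """Sum papers and citations over the five complete years ending 31 Dec of end_year."""
        window = range(end_year - 4, end_year + 1)
        papers = citations = 0
        for rec in paper_records:  # one record per paper in the custom export
            if rec["affiliation_id"] in affiliation_ids and rec["year"] in window:
                papers += 1
                citations += rec["citations"]
        return papers, citations

    # Toy data: "60000001" is a made-up affiliation ID, not a real Scopus one.
    table = [{"affiliation_id": "60000001", "name": "Massachusetts Institute of Technology"}]
    papers_db = [{"affiliation_id": "60000001", "year": 2008, "citations": 12},
                 {"affiliation_id": "60000001", "year": 2001, "citations": 40}]  # outside the window
    ids = match_affiliation_ids(table, ["Massachusetts Institute of Technology"])
    print(aggregate_counts(papers_db, ids, end_year=2008))  # -> (1, 12)
    # If the affiliation table lacked every variant we search for, `ids` would be
    # empty and the institution's output would silently vanish from the aggregate.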

Since the publication of the QS.com Asian University Rankings, a couple of institutions have come forward to point out that, to some degree or another, data is missing for their institution. This was discovered thanks to our practice of sharing a “fact file” with institutions prior to publication. Each of them is now working with QS to ensure that any shortfall is rectified in the future.

In future we will split our fact-file distribution in two: a fact file issued long in advance of publication, followed by a media briefing, including the ranking results, two days prior to the publication date.

University Rankings: There can be no “right answer”.

by Ben Sowter


Part of the excitement of university and business school rankings is that there is no “ultimate solution”. At a symposium at Griffith University in 2007, Nian Cai Liu – who leads Shanghai Jiao Tong’s Academic Ranking of World Universities (www.arwu.org) – was posed the question, “Many rankings use surveys as a component of their methodology; why do you choose not to?”. His matter-of-fact response was “I’m an Engineer”.

But his team’s selection of Nobel Prizes or highly cited authors as indicators is not intrinsically less questionable as a measure of university quality in the round – which, regardless of stated purpose, is what the results are often used for. Three days ago, at a comparable event in Madrid organised by Isidro Aguillo and his Cybermetrics team, similar aspersions were cast on surveys in contrast with more “statistically robust” measures such as link analysis, as used for the Webometrics exercise (www.webometrics.info). The supposition was that simply because the THE-QS exercise is the most “geographically generous” of the four global aggregate rankings, it must somehow be wrong – and that survey bias may be to blame.

Well I have news for you. THEY ARE ALL WRONG.

The higher profile of universities in China and Hong Kong in THE-QS was cited as evidence of survey bias – yet it is well documented on our website that the survey response from China, in particular, is disproportionately low. We are working to remedy this, but it is clearly unlikely to strongly favour Chinese institutions – these universities are performing well due to the profile they are building outside China.

Despite the fact that these surveys are currently conducted only in English and Spanish, the survey components carry a much smaller language bias than seems to be implied by Nobel Prizes, citations (in any index), cybermetrics, publications in Nature and Science, highly cited authors and many of the other factors selected by other international evaluations. Respondents, even those responding in English, are cognisant of the performance of other institutions in their own language – and this seems to be coming through in the results.

Sure, there are biases in the surveys and in the system overall – some are partially corrected for and some are not – but these exist in every other system too, even if they may not be quite as immediately evident.

The THE-QS work is presented prolifically around the world – by myself, my colleagues, the THE and third parties. We present it alongside the other exercises and are always careful to acknowledge that each has its value and each, including our own, has its pitfalls. NONE should be taken too seriously, and to date ALL bear some interest if viewed objectively.

The most entertaining input I have received since conducting this work came from an academic who systematically discredited all of the indicators we use but then concluded that, overall, he “liked what we were doing”. It is possible to do that with any of the systems out there – domestic, regional or global. The most savvy universities are using the rankings phenomenon to catalyse and establish keener performance evaluation internally at the faculty, department and individual staff member level. Driving it down to this level can help build actionable metrics as opposed to abstract statistics, and this can lead to a university revolutionising its performance in education and research and, in time, as a side-effect rather than an objective, improving its performance in rankings.

Domestic rankings slow to reveal their past

by Ben Sowter


The THE-QS World University Rankings have now been in existence for five editions; the 2009 release will be the sixth. In some way, major or minor, the methodology has changed with each release…

2004
Launch

2005
Add employer review component
Collect full data on 200 additional universities

2006
Switch from 10 years to 5 years for citations measure

2007
Switch from ESI to Scopus for citations
Adopt new normalisation methodology
Insist upon FTE for all personnel metrics
Peer reviewers unable to select own institution

2008
Separate int’l and domestic responses to surveys

Ongoing
Increased response levels to surveys
Tweaks to definitions
New institutions added to study

A ranking is a complex operation, and the data available has evolved over time, as has our understanding of it. As we receive feedback and additional metrics become available, the responsible thing to do is to integrate new developments with a view to improving the evaluation, making it more meaningful and insightful. The effects of these developments are visible and reasonably well documented – on our website www.topuniversities.com you can find results going back to 2005.

Recently we have been doing some work on a communication project for a British university. Its leadership is concerned about the conclusions its governing body may draw from ranking results and has asked us to make a presentation explaining, in simple terms, some of the shortfalls of rankings and what a change in position might actually mean. In conducting this work, we discovered not only that the two major domestic rankings in the UK are subject to similarly profound “evolutions” in methodology, but also that they seem to be comparatively unforthcoming with their historical results.

On the first point, the introduction and further development of the National Student Survey has had a dramatic influence on results in 2008 and 2009. On the second, the only way we were able to track results beyond the last two editions was to purchase second-hand copies of the associated books from Amazon and re-key all the results manually. Similarly, the US News rankings do not seem to clearly reveal results from before the current year. In contrast, both the THE-QS and Shanghai Jiao Tong rankings provide results over a number of years.

Whilst, given the ongoing changes in methodology, it might be misleading to conduct detailed trend analysis over time, the Berlin Principles suggest that transparency is a key expectation of a responsibly conducted ranking. Surely that should include the complete history of a ranking, not simply the most recent edition.