Inclusiveness vs. Stability: Can we achieve both?

In 2013 we added 106 new universities to the QS World University Rankings®. Since we have also extended the number of monitored universities that do not qualify for an overall ranking (i.e. specialist subject and graduate schools), the total number added to the study is over 130. This is nothing new: as our resources have increased, our survey responses have grown, and institutions have become more forthcoming in contacting us to engage with our evaluations and provide data, it has been an ongoing and deliberate policy to increase the number of institutions featured.

The table below shows the number of institutions featured in each of the ten editions of the QS World University Rankings®, along with the percentage increase each year. With the exception of 2010 there has always been an increase. The 2013 increase is proportionally smaller than that of 2011.

Institutions included in the QS World University Rankings®
Year    Institutions    % increase
2004    300             –
2005    513             71.0%
2006    516             0.6%
2007    543             5.2%
2008    561             3.3%
2009    583             3.9%
2010    581             -0.3%
2011    674             16.0%
2012    729             8.2%
2013    834             14.4%
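
For clarity, the "% increase" column is simply the year-on-year relative change in the institution count. A minimal Python sketch of that arithmetic, using the figures above (this is just the maths behind the table, not QS code):

```python
# Reproducing the "% increase" column from the institution counts above.
counts = {2004: 300, 2005: 513, 2006: 516, 2007: 543, 2008: 561,
          2009: 583, 2010: 581, 2011: 674, 2012: 729, 2013: 834}

years = sorted(counts)
for prev, curr in zip(years, years[1:]):
    change = (counts[curr] - counts[prev]) / counts[prev] * 100
    print(f"{curr}: {counts[curr]} institutions ({change:+.1f}%)")
```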

Whether or not universities qualify for inclusion is often not clear until the late stages of the analysis, so it is difficult to give substantial advance warning of the increase in the size of our exercise. Naturally, however, given that performance in the ranking is relative, the addition of substantial numbers of universities has an inevitable effect on the apparent performance of universities already included. What we have tried to do is notify those universities most affected at the time their fact file is delivered – two weeks prior to publication.

Clearly it is important, where possible and reasonable, to provide insight to stakeholders on a larger proportion of the world’s 20,000+ universities, and as our data strengthens, the system overall seems sufficiently robust and resilient to facilitate this. Indeed, despite the addition of these universities, the average change in position year on year among the top 600 has improved from 21.2 places in 2012 to 19.4 places in 2013. This trend can be seen all the way down the table, with the average change in position among the top 100 now standing at just 4.4 places.
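
For readers who want that metric made concrete, here is a rough sketch of one plausible reading of "average change in position": the mean absolute rank movement for institutions appearing in both editions. How new entrants and drop-outs are handled is an assumption on our part, and the data below is invented:

```python
# One plausible reading of the stability figure quoted above: the mean
# absolute change in rank for institutions present in both editions.
def average_position_change(ranks_prev, ranks_curr, top_n=600):
    """ranks_prev / ranks_curr map institution name -> rank in each edition."""
    common = [u for u, r in ranks_curr.items() if r <= top_n and u in ranks_prev]
    moves = [abs(ranks_curr[u] - ranks_prev[u]) for u in common]
    return sum(moves) / len(moves)

# Invented toy data, for illustration only:
ranks_2012 = {"Uni A": 10, "Uni B": 55, "Uni C": 300}
ranks_2013 = {"Uni A": 12, "Uni B": 50, "Uni C": 330}
print(average_position_change(ranks_2012, ranks_2013))  # (2 + 5 + 30) / 3
```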


However, the fortunes of a minority of universities – particularly those that are weak in a given indicator (which increases the likelihood of displacement when new institutions are included) – have been affected more severely, and not necessarily through an objective decline in their own performance. So what is the surest course forward? Do we continue to press on towards 1,000 universities over the next couple of years, so that more universities and more countries can be featured, at moderate cost to overall stability? Or do we prioritize further stability for the institutions already featured, at the cost of greater inclusiveness?

What do you think?


Global geographies of higher education: The perspective of world university rankings

Over the past decade, annually published world university rankings have captured the attention of university managers, policy makers, employers, academics and the wider public. Many national governments have implemented neoliberal reforms in higher education and increased the autonomy of their universities to enhance international competitiveness. Several universities have adjusted their strategic plans to climb up the ranks, while fee-paying international students often consult such league tables as a guide to where they can expect to receive ‘value for money’.


QS Classifications refined to embrace age dimension for 2011 and beyond

First seen alongside the 2009 edition of the QS World University Rankings®, the QS Classifications have become part of the annual fabric. In 2010 we evolved the notation to be a little more versatile and (we hope) intuitive, and in 2011 we have added a fourth dimension – that of age.

These classifications were initially inspired by the far more sophisticated Carnegie Classification system familiar to those with knowledge of higher education in the US. The inclusion of the age dimension was inspired by some fascinating work we have conducted this year on the influence of institutional age on development trajectory and performance in international evaluations.

The age dimension breaks down like this:

Code  Category     Institution age
5     Historic     100 years or more
4     Mature       50-99 years
3     Established  25-49 years
2     Young        10-24 years
1     New          under 10 years
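
As a minimal illustration (not our production implementation), an institution's foundation year might be mapped to these bands as follows, assuming each band's lower bound is the threshold of the band beneath it:

```python
# Hypothetical sketch of the age bands above; "Mature" is read as 50-99
# years, "Established" as 25-49, and so on.
def age_band(foundation_year, current_year=2011):
    age = current_year - foundation_year
    if age >= 100:
        return 5, "Historic"
    if age >= 50:
        return 4, "Mature"
    if age >= 25:
        return 3, "Established"
    if age >= 10:
        return 2, "Young"
    return 1, "New"

print(age_band(1209))  # e.g. Cambridge -> (5, 'Historic')
```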

Age is currently determined largely by self-submitted data on foundation year. We are aware that there can be some ambiguity here – is foundation year the year a foundation stone was laid for the first university building; the year the institution got university status; the date of the last merger that led to the institution taking its present shape; the year of first student intake; or the year the first cohort graduated? We will be working with our system and institutions themselves to refine this measure over the coming months.

The complete methodology for the classifications is available here.

QS Classifications

by Ben Sowter


The THE-QS World University Rankings attract a great deal of interest and scrutiny each year, and one frequent piece of feedback is the “comparing apples with oranges” observation. The simple fact is that the London School of Economics bears little resemblance to Harvard University in terms of funding, scale, location, mission, output or virtually any other aspect one may be called upon to consider – so how is it valid to include them both in the same ranking? They do, however, both aim to teach students and produce research, and it has always been the assertion of QS and Times Higher Education that this ought to provide a sufficient basis for comparison.

In essence, it is a little like comparing sportspeople from different disciplines in the “World’s greatest sportsperson” or “World’s greatest Olympian” rankings that so frequently emerge. How is it possible to compare a swimmer with a rower with a boxer with a football player? Yet such comparisons have fuelled passionate conversation all over the world. The difference, perhaps, is that in that context those talking are aware of who represents what sport. That is where the classifications come in – a component appearing in the tables from 2009 that helps the user distinguish the boxers from the footballers, so to speak.

The Berlin Principles (a set of recommendations for the delivery of university rankings) assert that any comparative exercise ought to take into account the different typologies of its subject institutions. Whilst an aggregate list will continue to be produced, it will now feature labels so that institutions of different types (and their stakeholders) can easily understand their performance not only overall but also with respect to institutions of a similar nature.

Based very loosely on the Carnegie Classification of Institutions of Higher Education in the US, but operated on a much simpler basis, these classifications take into account three key aspects of each university to assign their label.

  1. Size – based on the (full-time equivalent) size of the degree-seeking student body. Where an FTE number is not provided or available, one will be estimated based on common characteristics of other institutions in the country or region in question.
  2. Subject Range – four categories based on the institution’s provision of programs in the five broad faculty areas used in the university rankings. Due to radically different publication habits and patterns in medicine, an additional category is added based on whether the subject institution has a medical school.
  3. Research Activity Level – four levels of research activity, evaluated based on the number of documents retrievable from Scopus in the five-year period preceding the application of the classification. The thresholds required to reach the different levels differ depending on the institution’s pre-classification on aspects 1 and 2.

This will result in each subject institution being grouped under a simple alpha-numeric classification code (e.g. A1 or H3). Table 1 lays out the thresholds for the application of the classifications.
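
To make aspect 3 concrete, here is a purely illustrative sketch of how the research activity level might be derived, with thresholds that vary by pre-classification. Every number below is an invented placeholder; the real values are those laid out in Table 1:

```python
# Illustrative only: research activity level (aspect 3), whose cutoffs
# depend on the pre-classification from aspects 1 and 2. All numbers
# here are placeholders, not the published Table 1 values.
def research_level(scopus_docs_5yr, pre_classification):
    cutoffs = {
        "A": (5000, 1000, 100),  # e.g. large, fully comprehensive
        "H": (1300, 260, 26),    # e.g. a smaller, focused institution
    }
    # Levels 1-3 require meeting the corresponding cutoff; anything
    # below the lowest cutoff falls into level 4.
    for level, cutoff in enumerate(cutoffs[pre_classification], start=1):
        if scopus_docs_5yr >= cutoff:
            return level
    return 4

print(research_level(2400, "A"))  # -> 2, i.e. a combined code of "A2"
```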

World University Classifications?

by Ben Sowter


I imagine this is too simple an idea to be particularly practical but would welcome feedback either way.

The THE-QS World University Rankings, amongst others, are frequently criticized in all sorts of ways, some fair and some not.

One of the most common observations is the failure of most aggregate ranking systems, whether international or domestic, to acknowledge the different missions and typologies of institutions.

In the case of the THE-QS exercise, large institutions are likely to be advantaged in terms of recognition, whilst smaller ones may have greater ability to perform in some of the ratio-based indicators.

In the US we frequently refer to the Carnegie classification system to better understand the nature of institutions that are featured in the rankings. What if we were to apply a similar, albeit simpler, concept to universities at a world level and include a classification alongside all ranking results?

Classifications might include:

Type A: Large, fully comprehensive

More than 10,000 students. Offers programs in all 5 of our broad faculty areas. Has a medical school.

(i) High Research – Over 5,000 papers in the 5-year Scopus extract.
(ii) Moderate Research – 1,000-4,999 papers in the 5-year Scopus extract.
(iii) Low Research – 100-999 papers in the 5-year Scopus extract.
(iv) Negligible Research – Fewer than 100 papers in the 5-year Scopus extract.

Type B: Large, comprehensive

More than 10,000 students. Operates programs in all 5 of our broad faculty areas. Has no medical school.

(i-iv) Reduced thresholds

Type C: Large, focused

More than 10,000 students. Operates programs in 3 or 4 of our broad faculty areas.

(i-iv) Reduced thresholds

Type D: Large, specialist

More than 10,000 students. Operates programs in 1 or 2 of our broad faculty areas.

(i-iv) Research thresholds set against mean or median for stated specialist faculty areas

Types E-H: Same as above but for medium-sized institutions (4,000-10,000 students).

Types I-L: Same as above but for small institutions (fewer than 4,000 students).

A (u) or (p) could be added to denote institutions that only offer programs at either undergraduate or postgraduate level.
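
To show how mechanical such a scheme could be, here is a minimal sketch of the proposal, hypothetical rather than official. The paper thresholds are the Type A figures given above; the size boundaries follow the 10,000 and 4,000 student cut-offs, and reading the small block as Types I-L is our assumption:

```python
# Hypothetical sketch of the proposed typology above, not an official
# implementation. Size blocks: A-D large, E-H medium, I-L small
# (reading the small block as I-L is an assumption).
def proposed_type(students, faculty_areas, has_medical_school):
    base = "A" if students > 10000 else "E" if students >= 4000 else "I"
    # Offset within the block, from fully comprehensive to specialist.
    if faculty_areas == 5 and has_medical_school:
        offset = 0  # fully comprehensive
    elif faculty_areas == 5:
        offset = 1  # comprehensive, no medical school
    elif faculty_areas >= 3:
        offset = 2  # focused
    else:
        offset = 3  # specialist
    return chr(ord(base) + offset)

def research_band(papers_5yr):
    # Type A thresholds as given above; other types would use reduced ones.
    if papers_5yr >= 5000:
        return "i"    # High Research
    if papers_5yr >= 1000:
        return "ii"   # Moderate Research
    if papers_5yr >= 100:
        return "iii"  # Low Research
    return "iv"       # Negligible Research

# e.g. a 15,000-student university covering all 5 areas, with a medical
# school and 6,200 papers in the 5-year extract -> Type A(i)
print(proposed_type(15000, 5, True), research_band(6200))
```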

This is unlikely yet to be exhaustive, but a system such as this may help readers put the ranking results in context. Thoughts and suggestions welcome.