
QS publishes the first QS Graduate Employability Rankings!

The pilot edition of the Rankings applies QS’s innovative new approach, aiming to take the discussion of employability rankings to the next level. Stanford leads this first edition; more than 20 new institutions place in the top 50.

Employability has been a hot topic for the Higher Education industry for years. With far easier access to a far broader selection of universities, it has become an even more relevant aspect of students’ decision-making. QS has been measuring employability in all of its rankings, with our Employer Reputation Survey running for over 20 years. But given the public’s special interest in this topic, it was time to expand the analysis, step out of the comfort zone, and create a new, specific ranking.

The primary aim of the QS Graduate Employability Rankings is to help students make informed choices for their educational futures based specifically on the ability of their chosen university to help them succeed in the employment market. Thorough research conducted over the course of 13 months saw consultation with, and input from, academics, university representatives, companies, students and alumni. This year’s experimental methodology was extensively refined throughout the year, and we are delighted to have introduced – for the first time ever in our rankings – unique metrics such as graduate employment rate and university partnerships with employers.

Read more

Overview

HEW Newsletter – Rankings Results 2015/16: An Overview

Despite the improved methodology described elsewhere in this issue of Higher Education World, the 2015/16 QS World University Rankings agree with last year’s on one thing: the Massachusetts Institute of Technology is the world’s top university. It has near-perfect scores on five of our six measures, and comes 62nd in the world on the other, its percentage of international students.

The stability of these rankings is also evident from the fact that the same institutions fill the top eight places as last year, although MIT is the only one in the same position. The most spectacular move affects Imperial College London, down from second to eighth place, largely because of a 59-place fall in its citations per faculty member count. This is likely to be due mainly to the reduced emphasis that we now place upon excellence in biomedicine.

Read more

Methodology

Methodology refinements explained

The methodology behind the World University Rankings uses six robust measures which encapsulate the principal activities of global higher education. These measures are unchanged for the new 2015/16 Rankings. But as we explain here, the use we make of the data we collect has been improved markedly this year.

The first two of these measures involve asking informed people to identify the high points of the world university system. We do this by means of two annual surveys, one of active academics around the world, and one of recruiters. The academics are asked what their subject is and where the top 30 universities are in that field, although they tend to vote for a median of about 20. They cannot vote for their own institution. The employers are asked to name the subject or subjects in which they recruit graduates, and where they like to recruit them. These two measures account for 40 per cent and 10 per cent respectively of each institution’s possible score in this ranking.
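
As an illustration of the survey rules just described, here is a minimal sketch of how one respondent’s nominations might be cleaned: self-votes are discarded, duplicates removed, and the list capped at 30. The function name and cap parameter are illustrative assumptions, not QS’s actual implementation.

```python
def clean_votes(respondent_institution, nominations, cap=30):
    """Filter one survey respondent's nominations: drop self-votes and
    duplicates, and keep at most `cap` distinct institutions."""
    seen, votes = set(), []
    for uni in nominations:
        if uni == respondent_institution or uni in seen:
            continue  # respondents cannot vote for their own institution
        seen.add(uni)
        votes.append(uni)
        if len(votes) == cap:
            break
    return votes
```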

Read more


New Rankings: QS Graduate Employability Rankings

The QS Intelligence Unit is proud to announce the forthcoming release of a new ranking focusing on employability outcomes for graduates.

Leonardo Silveira, in charge of the project in London, told Tamara Sredojevic about the QS Graduate Employability Rankings:

What is the QS Graduate Employability Rankings?

The QS Graduate Employability Rankings comes from an extensive research project which has been running since October 2014. This project has aimed to design a new approach and methodology for measuring employability in university rankings.

Following the research project, we are going to launch the first edition this November at the 11th QS-APPLE in Melbourne. As a pilot initiative, this new ranking will not at first alter the other QS University Rankings results.

What initiated the QS Graduate Employability Rankings?

So far, employability has been approached in the most prominent rankings solely through employer reputation data. Yet it has also always been one of the main differentiators of the QS University Rankings.

Thus, after dealing with this subject for many years, we realized there was a huge demand, both from students and universities, for in-depth information on employability outcomes after graduation. This is why we decided to create a whole new ranking dedicated to employability.

Read more

Changes ahead

Potential refinements in the QS World University Rankings 2015

Anyone who has seen me present will know that one of my most frequently used quotes is from the US statistician George Box: “Essentially, all models are wrong, but some are useful”. Rankings are controversial as much because they are imperfect and incomplete as for any other reason. Were there a perfect answer, and had someone found it, there would be no space for debate, discussion and disagreement.

The QS World University Rankings were one of the first, and remain one of the most popular, international rankings of universities. Part of this popularity has been in their simplicity and part in their consistency – six weighted indicators drawn together to present a simple table representing a global hierarchy of world universities.

Despite the basic framework remaining the same since 2005, QS has not been afraid to listen and make refinements. Switching to Elsevier’s Scopus database in 2007 was one such change. One of the well-known challenges in developing metrics from a bibliometric database like Scopus is taking into account the different patterns of publication and citation across discipline areas. Various efforts have been made to address this problem, perhaps with the Leiden Ranking being the leading protagonist. Read more


QS World University Rankings by Subject 2015 – challenges and developments

From a certain perspective, the work we do at a discipline level ought to be easy. After all, we don’t seek data directly from institutions to compile our rankings by subject, which removes a major data collection and validation overhead. However, the scale of the output, in our terms, is vast. Our main ranking aggregates performance on 6 indicators for just over 800 institutions and thus comprises around 5,000 individual processed data points; by contrast, our rankings by subject use up to four indicators in 36 subjects for up to 400 published results. All in all, the full analysis involves well over 40,000 processed data points.
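
The back-of-envelope arithmetic behind those figures, using only the numbers quoted above:

```python
# Main ranking: 6 indicators for just over 800 institutions.
main_ranking_points = 6 * 800              # 4,800, i.e. around 5,000
# Rankings by subject: up to 4 indicators in 36 subjects, up to 400 results each.
subject_points_upper_bound = 4 * 36 * 400  # 57,600, comfortably over 40,000
```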

Picking out trends, calibrating the approach, and identifying issues is a major effort. An effort which, I must confess, we underestimated in 2015.

In the coming days we will be releasing fact file information for the new version of the results prior to publication on April 29, and we expect to be similarly beset by questions as to how the results have been formed, what has changed since the previous fact files we distributed, what can be inferred from year-on-year performance, and so forth. We are aiming to give ourselves a little more time to get back to institutions with answers to their specific questions, but the most frequently asked question is likely to be: what has changed since the previous version?

A substantial majority of institutions have been remarkably constructive and supportive, despite previous results, in some cases, appearing to be a dramatic downward departure from the previous year. The feedback has been precise, intelligent and constructive, with many very specific observations which have been invaluable in our process rebuild. The international forum we ran in Sydney last month was one of the most engaging events I have had the pleasure to attend. I personally experienced a surprising degree of empathy: there seemed to be a genuine understanding that this is and has been pioneering work, and that it is deeply complex. It also provided us with an invaluable opportunity to listen to genuine experts in their field about what we are doing and how it could be improved, above and beyond any observed concerns about this edition.

We are committed to maintaining an active dialogue with as many stakeholders as possible and deeply appreciate the volume and nature of feedback we have received around this. We have listened, and we have taken the opportunity not only to identify and address some issues with this year’s edition but also to introduce some further refinements based on feedback, which I feel genuinely improves the work.

Our advisory board have also been supportive of the refinements.

The five key changes since the previously distributed, but unpublished, version have been:

  1. The reintroduction of a regional weighting component in our survey analysis which had been inadvertently omitted
  2. The refinement of our analysis of the Scopus bibliometric database to address an issue where, in some instances, we had been counting articles only in the first subject to which they were categorized
  3. The adjustment of weightings in a further six subjects – making a total of nine subjects with modified weightings in 2015 – typically in favour of the citations and H measures – these changes are supported by the higher volumes of content from Scopus we have been able to retrieve in 2015
  4. The reinstatement of a paper threshold of 10 papers for English, and elevation of paper thresholds in Politics and History reflecting the higher volumes of research we are now taking into account
  5. The extension of our academic and employer survey samples to five years, with the earlier years weighted at 25% and 50% respectively. This stabilizes some of the subjects with lower levels of response and increases our total survey samples for this exercise to 85,062 academics and 41,910 employers
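
Point 5 can be sketched as a simple down-weighting of older survey responses. Only the 25% and 50% weights come from the text; the exact year-to-weight mapping below is an assumption for illustration (the three most recent years counted in full, the two earliest at 50% and 25%).

```python
# Assumed mapping of survey year to weight (illustrative, not QS's actual scheme).
YEAR_WEIGHTS = {2011: 0.25, 2012: 0.50, 2013: 1.0, 2014: 1.0, 2015: 1.0}

def weighted_mentions(responses):
    """responses: iterable of (year, institution) survey nominations.
    Returns each institution's weighted mention count."""
    totals = {}
    for year, institution in responses:
        totals[institution] = totals.get(institution, 0.0) + YEAR_WEIGHTS.get(year, 0.0)
    return totals
```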

Once the fact files are distributed we will make ourselves available to answer specific enquiries and are currently in the process of scheduling some dedicated webinars to explain the developments in more detail – these will be announced soon. We have already made some changes to our methodology pages and updated response levels, weightings and paper thresholds as well as publishing our map of the ASJC codes used to allocate Scopus content to subjects. Read more here.


Chinese cities are among world’s best student cities

London, 20th November 2013: The interesting thing about the 2nd QS Best Student Cities, for me, someone who was “Made in China”, is that Chinese cities are named among the top 50 cities in the world for students.

The results, released today, see Hong Kong ranked 7th, the highest-placed Chinese city and the second highest-ranked Asian city. Beijing is named Mainland China’s top-ranked city at 18th, while Shanghai ranks 35th.

Read more


Changes to the ranges…

Another FAQ we’ve been receiving since the new Fact Files were distributed relates to the planned publishing ranges. In previous years we have published the indicator lists down to 300, with any institution falling outside this range carrying a 301+. With more institutions than ever involved this year, we have taken the step of providing further detail and will this year publish results down to 400, with a 401+ thereafter. An institution ranked at 562 in 2012 would have been presented as 301+; despite improving to 515 in 2013, its result will be presented as 401+, which could be perceived as lower, but isn’t really.

Furthermore, in the overall table we previously published to 400, followed by ranges of 50 to 600 and a 601+ catch-all for the rest. This year we will still publish to 400, followed by ranges of 10 to 500, ranges of 50 to 700, and a 701+ category.
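
As a sketch, the new overall-table bands described above could be computed from an exact rank like this; the function name and banding arithmetic are illustrative assumptions.

```python
def published_range(rank):
    """Map an exact overall rank to the band published under the new scheme:
    exact positions to 400, bands of 10 to 500, bands of 50 to 700, then 701+."""
    if rank <= 400:
        return str(rank)
    if rank <= 500:
        lo = (rank - 1) // 10 * 10 + 1
        return f"{lo}-{lo + 9}"   # e.g. 455 falls in 451-460
    if rank <= 700:
        lo = (rank - 1) // 50 * 50 + 1
        return f"{lo}-{lo + 49}"  # e.g. 562 falls in 551-600
    return "701+"
```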

Enjoy the extra detail.

Weightings

A new approach to faculty areas…

So our fact files for the QS World University Rankings® 2013/2014 went out last week. We’ve been trying to improve our distribution lists, but may not have got all the way there, so if you haven’t received your results for this year and you need them, please do get in touch.

The questions have started flowing, and one of the most frequently asked so far has been to do with some substantial movements in our faculty area tables. The tables we produce in five broad faculty areas represent our first attempt to compare the quality of global universities on a subject basis, and they have been published alongside the main results since 2004. From then until 2012 they were based purely on the results from an isolated subset of our Global Academic Survey. Since 2011, however, we have been producing rankings at a narrower discipline level drawing on a broader range of indicators, and these faculty area tables have become somewhat incongruous.

So, this year we’ve fixed that.

The faculty areas have now been updated to operate on the same principles and methodology as for the QS World University Rankings by Subject.

Like the narrower subject fields, the indicators are weighted according to their completeness and appropriateness to the area, with research metrics carrying less weight in Arts & Humanities:

Faculty Area                 | Academic | Employer | Citations | H
Arts & Humanities            |      60% |      20% |       10% | 10%
Engineering & Technology     |      40% |      30% |       15% | 15%
Life Sciences & Medicine     |      40% |      10% |       25% | 25%
Natural Sciences             |      40% |      20% |       20% | 20%
Social Sciences & Management |      50% |      30% |       10% | 10%
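
As a sketch, the weightings above can be applied to normalized indicator scores (0 to 100) like so; the dictionary keys and function name are illustrative, not QS’s actual code.

```python
# Faculty-area indicator weights, transcribed from the table above.
WEIGHTS = {
    "Arts & Humanities":            {"academic": 0.60, "employer": 0.20, "citations": 0.10, "h": 0.10},
    "Engineering & Technology":     {"academic": 0.40, "employer": 0.30, "citations": 0.15, "h": 0.15},
    "Life Sciences & Medicine":     {"academic": 0.40, "employer": 0.10, "citations": 0.25, "h": 0.25},
    "Natural Sciences":             {"academic": 0.40, "employer": 0.20, "citations": 0.20, "h": 0.20},
    "Social Sciences & Management": {"academic": 0.50, "employer": 0.30, "citations": 0.10, "h": 0.10},
}

def faculty_area_score(area, scores):
    """Weighted sum of an institution's normalized indicator scores for one area."""
    weights = WEIGHTS[area]
    return sum(weights[k] * scores[k] for k in weights)
```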