The Employer Reputation component is unique amongst current international evaluations in taking employability into account. The majority of undergraduate students leave university in search of employment after their first degree, making their university's reputation amongst employers a crucial consideration.


Ranking                                    Employer Reputation weight
QS World University Rankings               10%
QS University Rankings: Latin America      20%
QS University Rankings: Asia               10%
QS University Rankings: Arab Region        20%
QS University Rankings: BRICS              20%
QS University Rankings: EECA               20%
QS Graduate Employability Rankings         30%


A common approach to evaluating employability in domestic rankings is the graduate employment rate, but there are two reasons why this indicator does not work at an international level. The first is that this evaluation looks at the top universities in the world – all of which have very high employment rates – so it does not provide much discernment. The second is that, since we are comparing different countries, the results would react to local economic conditions and not necessarily reflect the quality of the institution. So, instead, we survey employers and ask their opinion of the quality of graduates.

Source of Respondents

The results are based on the responses to a survey distributed worldwide to employers from a number of different sources:

Previous Respondents

QS has been conducting this work since 2004 – all previous respondents to our survey are invited to respond again to provide us with an updated viewpoint on the quality of universities in their broad field.

QS Databases

In twenty years of operation, QS has developed an extensive database of employers in key markets worldwide.

QS Partners

QS has an extensive network of partners, including international media organisations and job portals, a number of which support our employer research by distributing survey invitations.

Institution Supplied Lists

Since 2007, institutions have been invited to submit lists of employers for us to invite to participate in the Employer Survey.

In 2010, this invitation was extended to academics for them to take part in the Academic Survey which feeds into the Academic Reputation indicator.

Since employers are encouraged to list a number of institutions, the risk of bias towards the submitting institution is minimal. Nonetheless, submissions are screened, and sampling is applied where any institution submits more than 400 records.
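The sampling cap described above can be sketched as follows. This is an illustrative sketch only, not QS's actual code; the function name and data structures are invented for the example.

```python
import random

# Cap stated in the text: institutions submitting more than 400 employer
# records have their submission sampled down to 400.
MAX_RECORDS_PER_INSTITUTION = 400

def sample_submissions(records_by_institution, seed=0):
    """records_by_institution: dict mapping institution name -> list of
    employer records. Returns a copy with oversized lists sampled down."""
    rng = random.Random(seed)  # fixed seed for reproducibility in this sketch
    sampled = {}
    for institution, records in records_by_institution.items():
        if len(records) > MAX_RECORDS_PER_INSTITUTION:
            sampled[institution] = rng.sample(records, MAX_RECORDS_PER_INSTITUTION)
        else:
            sampled[institution] = list(records)
    return sampled

# Hypothetical submissions: one institution over the cap, one under it.
submissions = {
    "Uni A": [f"employer_{i}" for i in range(1000)],
    "Uni B": [f"employer_{i}" for i in range(50)],
}
capped = sample_submissions(submissions)
```

After sampling, "Uni A" is reduced to exactly 400 records while "Uni B" is left untouched.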

The Survey

The QS Employer Survey has been running since 1990 and contributes to a number of key research initiatives operated by the QS Intelligence Unit, including the QS TopMBA Salary & Recruitment Trends Report and the TopMBA Global 200 Business Schools. Like the academic survey, the questionnaire is adaptive, using responses to the early questions to take respondents through the MBA, Masters or First Degree tracks as appropriate.

The key sections for the Rankings work as follows:

Response Processing

The work is not done once the survey is designed and delivered. Once the responses are received a number of steps are taken to ensure the validity of the sample.

Five Year Aggregation

To boost the size and stability of the sample, QS combines responses from the last five years. Where any respondent has responded more than once in the five-year period, previous responses are discarded in favour of the latest numbers.

The survey samples contributing to this work have grown substantially over the lifetime of the project, resulting in inherently more robust reputation measures. The decision has been taken to extend the window for both reputation measures from the previous three years to five, with responses from the earliest two years carrying relative weights of 25% and 50% respectively.
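The aggregation rules above – keep only each respondent's latest response, then down-weight the two oldest survey years – can be sketched as follows. The function name, tuple layout, and weights dictionary are invented for this example; only the five-year window, the dedupe rule, and the 25%/50% relative weights come from the text.

```python
def aggregate_responses(responses, current_year):
    """responses: list of (respondent_id, year, response) tuples.
    Returns (respondent_id, response, weight) tuples for the five-year window."""
    # Keep only the latest response from each respondent.
    latest = {}
    for respondent_id, year, response in responses:
        if respondent_id not in latest or year > latest[respondent_id][0]:
            latest[respondent_id] = (year, response)

    window = []
    for respondent_id, (year, response) in latest.items():
        age = current_year - year
        if age >= 5:
            continue  # outside the five-year window
        # Oldest two years carry 25% and 50% relative weight; the most
        # recent three years count in full.
        weight = {4: 0.25, 3: 0.50}.get(age, 1.0)
        window.append((respondent_id, response, weight))
    return window

# Hypothetical responses: r1 responded twice (only 2016 should survive),
# r5 falls outside the window entirely.
sample = [("r1", 2013, "a"), ("r1", 2016, "b"), ("r2", 2013, "c"),
          ("r3", 2014, "d"), ("r4", 2017, "e"), ("r5", 2010, "f")]
window = aggregate_responses(sample, current_year=2017)
```

In this run, r2's 2013 response carries weight 0.25, r3's 2014 response 0.50, and the rest count in full.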

Junk Filtering

Any online survey will receive a volume of test or speculative responses. QS runs an extensive filtering process to identify and discard responses of this nature.

Anomaly Testing

It is well documented, on the basis of other high-profile surveys in higher education, that universities are not above attempting to get respondents to answer in a certain fashion. QS runs a number of processes to screen for any manipulation of survey responses. If evidence is found to suggest that an institution has attempted to overtly influence its performance, any responses acquired through sources 4 and 5 (above) are discarded.

Results Analysis

Once the responses have all been processed, the fun really begins. For each of our five subject areas, it works as follows:

  1. Devise weightings based on the regions with which respondents consider themselves familiar – weightings are (now) based only on completed responses for the given question. This is slightly complicated by the fact that respondents are able to relate to more than one region.
  2. Derive a weighted count of international respondents in favour of each institution ensuring any self-references are excluded.
  3. Derive a count of domestic respondents in favour of each institution adjusted against the number of institutions available for selection in that country and the total response from that country ensuring any self-references are excluded.
  4. Apply a straight scaling to each of these to achieve a score out of 100.
  5. Combine the two scores with a weighting of 70% international, 30% domestic – these numbers were based on analysis of responses received before we separated the domestic and international responses three years ago, but the low weighting for domestic also reflects the fact that this is a world university ranking. We use 85:15 for the academic review.
  6. Take the square root of the result – we do this to draw in the outliers, but to a lesser degree than other methods might achieve – our intention is that excellence in one of our five areas should have an influence, but not too much influence.
  7. Scale the rooted score to present a score out of 100 for the given faculty area.
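Steps 2–7 above can be sketched in code. This is a minimal illustration with invented function names and toy counts, not QS's actual implementation: the familiarity weighting of step 1 and the domestic adjustment of step 3 are simplified to plain counts to keep the example short, and only the straight scaling, the 70:30 combination, and the square-root compression come from the text.

```python
import math

def scale_to_100(values):
    """Straight scaling (steps 4 and 7): the top institution scores 100."""
    top = max(values.values())
    return {k: 100.0 * v / top for k, v in values.items()}

def employer_reputation(intl_counts, dom_counts):
    """intl_counts / dom_counts: dicts mapping institution -> weighted
    respondent count (self-references assumed already excluded)."""
    intl = scale_to_100(intl_counts)                      # step 4, international
    dom = scale_to_100(dom_counts)                        # step 4, domestic
    combined = {k: 0.7 * intl[k] + 0.3 * dom.get(k, 0.0)  # step 5: 70:30 split
                for k in intl}
    rooted = {k: math.sqrt(v) for k, v in combined.items()}  # step 6
    return scale_to_100(rooted)                           # step 7

# Hypothetical counts for two institutions.
scores = employer_reputation(
    intl_counts={"Uni A": 500, "Uni B": 250},
    dom_counts={"Uni A": 100, "Uni B": 300},
)
```

The square root in step 6 narrows the gap between the leader and the rest: here "Uni A" leads "Uni B" 80 to 65 before rooting, but only 100 to roughly 90 after rooting and rescaling.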