Data Sources & Policies
If you are looking for the latest results, please visit www.topuniversities.com.
The QS World University Rankings® have been in existence since 2004, but since 2011 the study has been extended to encompass a range of popular individual subjects.
The majority of prospective international students begin their search knowing what they want to study before considering where, so understanding the comparative quality of institutions by subject is fundamental to the QS mission of supporting students in their decision making. QS aims to add more depth, more detail and more subjects to this work year by year, and anticipates that these tables will, in time, become more important than the overall result.
Any international ranking faces various challenges relating to the equal availability and applicability of data within different countries and university systems. Many indicators of university quality commonly used in domestic rankings, such as the average entry tariff of admitted students, are not yet applicable on an international level and are thus not included in any of these exercises. And in areas where universities themselves can provide data, the efficiency with which it is collected varies by region. While the depth of data available from the UK, Australia and the US may be exemplary it is yet to be matched by that in India, Greece or Brazil, for example.
These challenges become more pronounced when the focus of a ranking is narrowed to a particular aspect of university performance. While it may be reasonable to expect a university to have a decent understanding of its average faculty-student ratio, to break that down by faculty or department is difficult in even the most advanced cultures of data provision.
For this reason the methodology for QS World University Rankings by Subject has been narrowed to include only those indicators that bypass the direct involvement of institutions and can reliably be stratified by subject discipline. This page outlines the QS approach for doing so, and how it has been used to produce the new QS World University Rankings® by Subject.
The chart below shows the level of survey responses from employers and academics contributing to the 2013 QS World University Rankings® by Subject.
Weightings in the QS World University Rankings® by Subject are set variably, depending on the pertinence of the indicator and the validity of the data. For 2013, the weightings are as follows.
QS has three extensive datasets that enable us to drill down by subject area: our academic and employer reputation surveys, and the Scopus data we use for our citations per faculty indicator in the global rankings. These have been combined to produce our subject results.
There are, of course, innumerable subject disciplines and sub-disciplines. Through analysis of academic survey results over a protracted period and publication data from SciVerse Scopus, the QS Intelligence Unit has identified 52 subject areas which may, at some stage in the next few years, reach the necessary data levels to facilitate a ranking. These are listed in Figure 4 and have been selected because they meet all of the following criteria:
Inclusion of specialists
QS has ensured that surveys have included all key specialist institutions operating within the discipline, regardless of whether they may have been expected to feature in the overall QS World University Rankings®.
Academic Response Level
The subject attracts sufficient academic responses.
Overall Appropriateness of Indicators
Indicators and approach prove appropriate and effective in highlighting excellence in the discipline.
In order to feature in any discipline table, an institution must meet three simple prerequisites:
Not all disciplines can be considered equal. Publication and citation rates are far higher in the life sciences and natural sciences than in the social sciences or arts & humanities, so there is simply more data. It would not make sense to place the same emphasis on citations in medicine as in English language and literature.
Similarly the popularity of particular disciplines amongst employers varies greatly, and placing the same emphasis on employer opinion in economics and philosophy therefore makes little sense. Taking these factors into account leads to a variable approach to the weightings for the different subjects, which can be seen in Figure 1 above.
In principle, in the future additional indicators may be introduced that could contribute to as few as one single subject area.
Academic reputation has been the centrepiece of the QS World University Rankings® since their inception in 2004. In 2012 we drew on over 46,000 respondents to compile our results. The survey is structured in the following way:
Section 1: Personal Information
Respondents provide their name, contact details, job title and the institution where they are based.
Section 2: Knowledge Specification
Respondents identify the countries, regions and faculty areas with which they are most familiar, and up to two narrower subject disciplines in which they consider themselves expert.
Section 3: Top Universities
For each of the (up to five) faculty areas they identify, respondents are asked to list up to ten domestic and thirty international institutions that they consider excellent for research in the given area. They are not able to select their own institution.
Section 4: Additional Information
Additional questions relating to general feedback and recommendations.
A thorough breakdown of respondents by geography, discipline and seniority is available in the methodology section of our main rankings.
As part of QS Global Academic Survey, respondents are asked to identify universities they consider excellent within one of five areas:
The results of the academic reputation component of the new subject rankings have been produced by filtering responses according to the narrow area of expertise identified by respondents. While academics can select up to two narrow areas of expertise, greater emphasis is placed on respondents who have identified with only one.
The threshold for academic respondents that any discipline must reach for us to consider publication has been set at 150. As responses build over time, new subjects from the above list may qualify.
The number of academic respondents considered for each qualifying discipline can be seen, along with employer responses, in Figure 2 above. As with the overall tables, our analysis places an emphasis on international reputation over domestic. Domestic responses are individually weighted at half the influence of an international response. This is a global exercise and will recognize institutions that have an international influence in these disciplines. As in the main QS World University Rankings®, weightings are also applied to balance the representation by region.
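The half-weighting of domestic responses described above can be illustrated with a minimal sketch. This is not QS's actual code, and the function and field names are hypothetical; it simply applies the stated rule that a domestic response carries half the influence of an international one.

```python
# Illustrative sketch of the stated survey weighting: domestic responses
# count for half an international response. Names are hypothetical.

def weighted_response_count(responses):
    """Sum survey responses, weighting domestic ones at 0.5.

    `responses` is a list of dicts with a boolean 'international' flag.
    """
    total = 0.0
    for r in responses:
        total += 1.0 if r["international"] else 0.5
    return total

responses = [
    {"international": True},   # counts 1.0
    {"international": False},  # counts 0.5
    {"international": False},  # counts 0.5
]
print(weighted_response_count(responses))  # 2.0
```

The regional balancing weightings mentioned above would be applied on top of this, but their values are not published.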
NEW FOR 2012 – Direct Subject Responses
Until 2010, the survey could only infer specific opinion on subject strength by aggregating the broad faculty-area opinions of academics from a specific discipline. Since the 2011 survey, additional questions have been asked to gather specific opinion in the respondent’s own narrow field of expertise. From 2012, these responses are given greater emphasis.
QS World University Rankings® are unique in incorporating employability as a key factor in the evaluation of international universities, and in 2012 drew on over 25,500 responses to compile the results for the overall rankings. The employer survey works on a similar basis to the academic one only without the channelling for different faculty areas. Employers are asked to identify up to ten domestic and thirty international institutions they consider excellent for the recruitment of graduates. They are also asked to identify from which disciplines they prefer to recruit. From examining where these two questions intersect we can infer a measure of excellence in a given discipline.
A full breakdown of respondents by geography and sector is available in the methodology section of our main rankings.
Of course, employability is a slightly wider concern than this alone would imply. Many students’ career paths are only indirectly related to their degree discipline: many engineers become accountants, and few history students wind up pursuing careers closely related to their program. On this basis, employers citing a preference for hiring students from ‘any discipline’ or from broader category areas are also included in the subject scores, but at a considerably lower individual weighting. From 2012, greater emphasis is placed on the opinions of employers that are specifically interested in only the given discipline.
It is our view, based on focus groups and feedback from students, that employment prospects are a key consideration for prospective students when choosing a program and a university, regardless of whether or not they envisage a career directly linked to the discipline they choose to study.
Relative to a weighting of 1 for a direct response naming the subject area, employers seeking graduates from any discipline are weighted at 0.1, those naming a parent category (e.g. social sciences) are weighted at 0.25, and responses from employers exclusively targeting the specific subject carry a weighting of 2.
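The relative weights above can be sketched as a simple lookup-and-sum. The weight values (0.1, 0.25, 1, 2) come from the text; the function, dictionary and category labels are illustrative assumptions, not QS's implementation.

```python
# Sketch of the relative employer-response weights described in the text.
# Weight values are from the methodology; names are hypothetical.

EMPLOYER_WEIGHTS = {
    "any_discipline": 0.1,    # employers hiring from any discipline
    "parent_category": 0.25,  # e.g. 'social sciences' for economics
    "direct": 1.0,            # baseline: names the subject directly
    "exclusive": 2.0,         # targets only this subject
}

def employer_score(responses):
    """Aggregate employer responses for one subject using the stated weights."""
    return sum(EMPLOYER_WEIGHTS[r] for r in responses)

print(employer_score(["exclusive", "direct", "any_discipline"]))  # 3.1
```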
Figure 2 shows the total number of employers, alongside the number of academics, contributing to our indices in each of the corresponding disciplines. The similarity between the numbers recorded in each of the engineering sub-disciplines is down to the fact that employers were asked to comment on engineering in general rather than on the specific sub-disciplines. A small number of respondents specified their preference through the ‘other’ option provided in the survey, leading to a slightly different total for mechanical engineering. The threshold for including the employer component in any discipline is 300 responses.
As with the overall tables, our analysis places an emphasis on international reputation over domestic, with domestic responses carrying half the individual weighting of international responses. This is a global exercise and recognizes institutions that have an international influence in these disciplines. A weighting is also applied to balance representation by region.
In the overall QS World University Rankings® we use a measure of citations per faculty. This has the advantage of taking the size of an institution into account while still allowing us to penetrate deeply into the global research landscape. Because reliably gathering faculty numbers broken down by discipline is impractical, for the purposes of this exercise we have measured citations per paper. A minimum publication threshold has been set for each subject to avoid potential anomalies stemming from small numbers of highly cited papers.
Journals in Scopus are tagged with a number of ASJC (All Science Journal Classification) codes, which identify their principal foci; each paper inherits the codes of the journal in which it was published (multidisciplinary journals are excluded). When aggregated, these paper totals and their associated citations provide an indicator of the volume and quality of output within a given discipline.
One of the advantages of the “per faculty” measure used in the overall rankings is that a small number of highly cited papers has limited impact, thanks to the divisor. In a conventional citations-per-paper analysis, a paper threshold is instead required to eliminate such anomalies. Of course, publication patterns differ greatly between subjects, and this needs to be taken into account both in the thresholds used and in the weights applied to the citations indicator.
Figure 3 above lists the subjects we will be working with, identified on the strength of response to the academic and employer surveys. It shows the number of paper affiliations indexed – the total of all distinct paper affiliations in the discipline that we have been able to attribute to one of the 1,000+ universities mapped into the Scopus database – which serves as a proxy for the scale of global research in the discipline. The resulting paper threshold for each discipline is also shown, representing the minimum number of papers an institution must have published in the last five years in order to qualify for our tables in a given subject.
There are certain subjects in which academic publications are not a feasible or appropriate measure of academic output. These subjects have either zero or a low number of papers in Scopus, and are denoted in the table above by a paper threshold of 0. A discipline must have at least 6,000 identifiable papers for us to include the citations indicator in its table.
There are a few other clarifying points regarding our treatment of Scopus data, most, if not all, of which also apply to our analysis for the forthcoming cycle of regional and global rankings:
The h-index is an index that attempts to measure both the productivity and impact of the published work of a scientist or scholar. The index is based on the set of the scientist’s most cited papers and the number of citations that they have received in other publications. The index can also be applied to the productivity and impact of a group of scientists, such as a department or university or country, as well as a scholarly journal. The index was suggested by Jorge E. Hirsch, a physicist at UCSD, as a tool for determining theoretical physicists’ relative quality and is sometimes called the Hirsch index or Hirsch number.
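The definition above translates directly into code: the h-index is the largest h such that the author (or institution) has h papers each cited at least h times. This is a standard implementation of the published definition, not QS-specific code.

```python
# Direct implementation of the h-index as defined above: the largest h
# such that h papers each have at least h citations.

def h_index(citations):
    """Compute the h-index from a list of per-paper citation counts."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank  # the rank-th paper has at least `rank` citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4 (four papers with >= 4 citations)
print(h_index([25, 8, 5, 3, 3]))  # 3 (only three papers with >= 4)
```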
A relatively thorough definition and description of h-index, including advantages and drawbacks, can be found on Wikipedia.
Despite being built on the same underlying data as the citations measure, the h-index returns some different results, and these differences are central to its value. In a large institution producing a lot of research, a cutting-edge research group can be lost in a citations-per-paper approach, whereas in an h-index analysis it is the unimportant research that gets overlooked. A small, focused institution is unlikely to compete with a world-leading large institution, but can still hold its own. Another approach would have been to replace the citations measure altogether, but that measure provides consistency, rewarding institutions whose performance is solid across the discipline regardless of whether they also have stellar research groups in the mix. On balance, our advisors felt that both indices brought something of value to these observations.
Publication and citation patterns vary dramatically by discipline, which limits their usefulness in overall rankings, and the h-index is no different: a typical h-index for an academic in physics will be far higher than that of someone in sociology, for example. When working within a single discipline, however, these differing characteristics are eliminated and the bias is broadly removed.
The h-index analysis is still based on a dataset that can only be classified by discipline at the journal, rather than the article, level. To balance the effects of this and focus on specialists, two h-indices are calculated: one for all the papers attributable to the given subject (h1), and one for the papers attributable only to that subject (h2). These are aggregated with double weight given to h2, and the results are then scaled and normalized using the same methods applied to the other indicators.
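The aggregation step above can be sketched as a weighted mean. The text states only that h2 carries double weight; the normalizing divisor and function name here are illustrative assumptions, and the subsequent scaling/normalization step is omitted.

```python
# Sketch of combining the two h-indices described above: h1 over all
# papers attributable to the subject, h2 over papers attributable only
# to that subject, with h2 at double weight. The divisor (a weighted
# mean) is an assumption; QS's exact aggregation is not published.

def combined_h(h1, h2):
    """Weighted combination of the two h-indices, h2 at double weight."""
    return (h1 + 2 * h2) / 3

# A specialist institution with a strong subject-only h2 gains ground:
print(combined_h(30, 18))  # 22.0
print(combined_h(30, 30))  # 30.0
```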
2011 was the first year in which an attempt was made to break our study down to a specific discipline level. While in general the results were welcomed around the world, many discipline-level experts came forward with specific feedback, much of which has been folded back into our methodology. Many observations concerned the fact that in some areas, institutions with well-known strengths in a given discipline were undervalued relative to comprehensive institutions with a strong overall reputation and research profile. This has led to a number of enhancements designed to better identify institutions with key strengths in a particular area, and to more effectively filter out the influence exerted by overall reputation on the discipline results. Some of these enhancements have been described in the previous sections (denoted by a tick), but one, which affects both the academic and employer surveys, has not:
The above analysis returns a result for the specific discipline, but due to the way the surveys work there remains a strong influence from overall reputation and from strength in adjoining disciplines unless additional weightings are employed.
We also have the overall reputational results from the employer survey and in each of the faculty areas of the academic survey. This provides an opportunity to apply a further adjustment to the survey measures based on the divergence between performance in a specific discipline, and overall or faculty area performance. This means that the scores of institutions that fare better in the specific discipline than overall are given a proportional boost, while those that fare worse have those shortfalls proportionally amplified. The result is that the key strengths of institutions shine brighter and less credit is attributed to overall reputation and strength in adjoining disciplines.
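The divergence adjustment described above could take many forms; a simple ratio-based factor captures the stated behaviour (a proportional boost where the subject score exceeds the overall score, a proportional penalty where it falls short). The actual QS formula is not published, so the function below is a hypothetical illustration only.

```python
# Hypothetical sketch of the divergence adjustment described above.
# The ratio-based factor is an assumption, not QS's published formula:
# institutions scoring better in the specific discipline than overall
# are boosted proportionally; those scoring worse are penalized.

def adjusted_score(subject_score, overall_score):
    """Scale the subject score by its divergence from overall reputation."""
    factor = subject_score / overall_score  # >1 boosts, <1 amplifies a shortfall
    return subject_score * factor

print(round(adjusted_score(80.0, 60.0), 1))  # 106.7 (boosted above 80)
print(round(adjusted_score(50.0, 70.0), 1))  # 35.7 (pushed below 50)
```

In practice such raw adjusted scores would then be rescaled, since the boost can push values above the original scale.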