Number crunching

We have had a number of requests recently enquiring how we get from A to B. These fall into two categories:


University A ranks only 270th in the 2011/2012 university rankings, while the faculty rankings place it 48th in Social Sciences, 51st in Arts & Humanities, 59th in Life Sciences, 89th in Natural Sciences and 115th in Engineering & IT. Shouldn't it rank better in the overall 2011/2012 university rankings? By contrast, a university such as University B ranks 96th in the 2011/2012 university rankings while its faculty areas rank only 123rd in Social Sciences, 66th in Arts & Humanities, 150th in Life Sciences, 210th in Natural Sciences and 244th in Engineering & IT. So I would suppose that University A should be ranked better.

This case is simple. The faculty area rankings we produce are based on academic reputation only. When aggregated, academic reputation contributes only 40% of the overall score, with another five indicators making up the difference. In the case above, University A will undoubtedly rank better for academic reputation, but will be let down by indicators such as faculty/student ratio, employer reputation and citations per faculty, which are only considered at an overall level.
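The effect is easy to see with a toy calculation. The sketch below combines per-indicator scores into an overall score; only the 40% academic reputation weight comes from the text above, and the remaining five weights and all the scores are hypothetical placeholders for illustration:

```python
# Illustrative weighted aggregation. Only the 40% academic-reputation
# weight is stated in the post; the other weights are assumptions.
weights = {
    "academic_reputation": 0.40,
    "employer_reputation": 0.10,
    "faculty_student": 0.20,
    "citations_per_faculty": 0.20,
    "international_faculty": 0.05,
    "international_students": 0.05,
}

def overall_score(scores):
    """Combine per-indicator scores (0-100) into a single overall score."""
    return sum(weights[k] * scores[k] for k in weights)

# A university strong in reputation but weak elsewhere can still trail
# a consistent all-rounder once the other indicators are weighted in.
strong_reputation = {
    "academic_reputation": 95, "employer_reputation": 60,
    "faculty_student": 40, "citations_per_faculty": 45,
    "international_faculty": 50, "international_students": 50,
}
all_rounder = {k: 75 for k in weights}

print(overall_score(strong_reputation))  # 66.0
print(overall_score(all_rounder))        # 75.0
```

With these made-up numbers, the reputation-heavy institution scores 66 overall against the all-rounder's 75, even though it dominates the reputation-only faculty rankings.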


Some institutions have questioned why their overall rank does not fall intuitively within the range of their indicator ranks. This, again, is relatively simple: the indicator ranks are not perfectly correlated. Institutions that are comparatively consistent across the board are therefore likely to do better than institutions that are dynamite in some areas and weak in others. Consider the following example:

Institution A is ranked 250th in all indicators and achieves an overall rank of 225th.

Institution B is ranked 230th in all indicators except faculty/student ratio, where it ranks 610th; it has an overall rank of 270th.

This is a simplified, hypothetical example to demonstrate the point. Institution B displaces Institution A in every indicator but one, yet when this is combined with additional comparable examples that affect all indicators evenly (in this model), the overall rank of Institution A ends up higher than intuition might expect.

Ultimately, overall ranks are based on aggregated scores, and the differing characteristics of the indicators can easily produce circumstances where the overall rank appears not to be a straightforward aggregation of the separate indicator ranks.
