Once the data is collected and the weightings decided upon, the next step is to calculate standard scores for each column of data, making the columns compatible with each other so that the data can be combined reliably and the weightings applied fairly in the calculation of the overall score. Before 2007, the approach taken here was an over-simplistic one: find the top-scoring institution, award it 100 notional points, and scale the remaining entries proportionally to that top performer. This approach had several disadvantages:
- Anomalous application of weightings
- Lack of control for “outliers”
- The smallest error in the assessment of the top-performing institution could have dramatic effects
From 2007, a more complicated but widely used standardization (or normalization) method has been adopted, based on z-scores. There are numerous online sources explaining how this works:
Wikipedia – http://en.wikipedia.org/wiki/Standard_score
In order to calculate z-scores, the mean and the standard deviation of the sample are required. These are shown in the table below.
Once the z-scores are calculated, each institution's position on the normal curve is plotted, giving its score for that indicator. The resulting scores are then scaled between 1 and 100 for each indicator, producing a set of results compatible with those for the other indicators.
|Indicator|Mean|Standard deviation|
|---|---|---|
|Citations per Faculty|37.55|29.70|
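The calculation described above can be sketched as follows. This is an illustrative sketch, not QS's actual implementation: it assumes the "position on the normal curve" is the standard normal CDF of the z-score, and that the 1–100 scaling is a simple linear mapping of that percentile. The mean and standard deviation are the Citations per Faculty figures from the table; the raw value of 67.25 is a made-up input.

```python
from statistics import NormalDist

def indicator_score(raw, mean, sd):
    """Standardize a raw indicator value and scale it onto a 1-100 band.

    Sketch only: assumes score = 1 + 99 * CDF(z), where z is the
    standard score (z-score) of the raw value.
    """
    z = (raw - mean) / sd             # z-score: distance from the mean in SDs
    percentile = NormalDist().cdf(z)  # position on the standard normal curve (0-1)
    return 1 + 99 * percentile        # linear rescale onto 1-100

# Citations per Faculty parameters from the table: mean 37.55, SD 29.70
score = indicator_score(67.25, 37.55, 29.70)  # raw value one SD above the mean
```

An institution sitting exactly on the mean lands at the midpoint of the band (50.5 under this sketch), and values further above the mean asymptotically approach 100 rather than racing past it, which is the outlier control the pre-2007 method lacked.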
As the number of institutions in a given ranking grows, the standardization can affect different indicators inconsistently. Newly added institutions typically perform more weakly on the reputation and research indicators, but may have strengths in faculty-student ratio or the international measures. The effect has been to pull down the means of the indicators more strongly correlated with overall performance faster than the means of the less strongly correlated indicators. From 2016, we have locked the mean and standard deviation used in the standardization calculations for the QS World University Rankings to the top 700 institutions in any given indicator. One impact of this is that institutions above the mean are spaced out a little more; another is that an institution in the same rank position will typically have a lower score in any given indicator than previously.
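The effect of locking the parameters can be illustrated with a toy comparison. This is a hedged sketch with made-up numbers, not QS data: a small stand-in sample plays the role of the top 700, and two weaker values play the role of newly added institutions.

```python
from statistics import mean, stdev

def zscores(values, mu=None, sigma=None):
    """Z-scores using either the sample's own mean/SD or locked parameters."""
    mu = mean(values) if mu is None else mu
    sigma = stdev(values) if sigma is None else sigma
    return [(v - mu) / sigma for v in values]

top_sample = [90.0, 75.0, 60.0, 45.0, 30.0]       # stand-in for the top 700
locked_mu, locked_sigma = mean(top_sample), stdev(top_sample)

# Weaker new entrants join the ranking, dragging the recomputed mean down
expanded = top_sample + [12.0, 8.0]

floating = zscores(expanded)                          # mean/SD recomputed each time
locked = zscores(expanded, locked_mu, locked_sigma)   # parameters locked to top sample

# Locked parameters keep a higher mean and smaller SD than the floating ones,
# so institutions above the mean get lower z-scores but are spaced further apart.
```

In this toy run the top institution's z-score is lower under the locked parameters than under the floating ones, while the gap between the first and second institutions is wider, matching the two impacts described above.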