Use of “web metrics” to evaluate universities

There are a number of different approaches to evaluating universities out there. The most inclusive of these seems to be the use of web metrics, most notably applied through the Webometrics Ranking Web of Universities and through the more one-dimensional 4icu.

Due to their inclusiveness – Webometrics looks at over 20,000 institutions – we frequently refer to these sources when we consider the validity of including new institutions in our lists and tables, amongst other considerations. I have some misgivings about these measures at the top of the table: asserting that MIT is stronger than Harvard because more rich files can be found on its website, or because more sites link there this year, seems a little academic. At a global level there is also an overwhelming emphasis on English – in Webometrics there are only 4 institutions in the top 50 from countries where the prevalent language is something other than English.

However, were we looking at a region where no country has an overt advantage in English – such as Latin America or the Arab world – the results may be more discerning. In this context, the volume of content in English could be seen as a measure of openness and international influence, and since all subject institutions would have an equal footing, this may have merit – particularly in parts of the world where databases and institutional systems may not be as sophisticated as elsewhere and comprehensive data may not be available.

Next month, QS is due to publish its inaugural QS University Rankings – Latin America and we have been given unique access to Webometrics results for the region to examine whether or not they could work as an effective indicator in that context.

The current thinking is that if we strip out the Google Scholar component, as it arguably overlaps with our analysis of Scopus, we may have an interesting input.
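Mechanically, stripping one component from a weighted composite just means dropping that indicator and renormalising the remaining weights. A minimal sketch of that step, using illustrative component names and weights (not the official Webometrics methodology):

```python
# Sketch of recomputing a Webometrics-style composite score with one
# component (e.g. Google Scholar) removed. The indicator names, scores,
# and weights below are hypothetical, purely for illustration.

def composite(scores, weights, drop=None):
    """Weighted average of indicator scores; optionally drop one
    component and renormalise the remaining weights to sum to 1."""
    kept = {k: w for k, w in weights.items() if k != drop}
    total = sum(kept.values())
    return sum(scores[k] * w / total for k, w in kept.items())

# Illustrative indicator scores (0-100) for a single institution.
scores  = {"visibility": 80, "size": 70, "rich_files": 60, "scholar": 90}
weights = {"visibility": 0.50, "size": 0.20, "rich_files": 0.15, "scholar": 0.15}

full       = composite(scores, weights)                   # all components
no_scholar = composite(scores, weights, drop="scholar")   # Scholar stripped out
```

The renormalisation matters: simply zeroing the Scholar score would penalise every institution uniformly, whereas redistributing its weight keeps the remaining indicators comparable across institutions.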

Comments and input welcome.

  • http://home.medewerker.uva.nl/m.p.wolsink/ Wolsink Maarten

    Good idea to remove Google Scholar from the analysis. The serious part of its coverage indeed overlaps with Scopus (peer-reviewed publications), while the other part, also large, is suspect web content from the non-peer-reviewed sector, including many reports of contract research produced mainly to satisfy the paying institutions. Such content scores low on the factor of ‘speaking truth to power’. I usually don’t allow my students to include such references, so why should universities be ranked highly for hosting them?