Technical challenges with tracking publications and citations for certain institutions.

by Ben Sowter

 

Tracking all the papers and citations data we need from the Scopus database to fuel our evaluations is quite a challenge, and our process has always resulted in some discrepancies between the results we use and the results that you can actually retrieve from Scopus at any given moment. Scopus is an ever-changing database: not only are Elsevier working very hard to add more journals, in more languages, and backfilling records, but they are also working hard to consolidate affiliations and make it easier to retrieve all the data for a given author or institution. The database is vast, however, and the variants are many – MIT, for example, at one point in time had 1,741 name variants. Additionally, as time goes by, more papers get published and more citations are filed.

Our analysis is based on “custom data” exported from Scopus at a fixed point in time, defined within fixed limits. We use the last five complete years for both papers and citations – that is to say, we take a count of all papers published in the five years leading up to December 31st of the previous year, and the total of any citations received during the same period. By the time the Times Higher Education – QS World University Rankings are published in October, there will be ten more months of papers and citations appearing in the online version of Scopus.
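To make the window definition unambiguous, here is a minimal sketch in Python; the function name and shape are illustrative rather than part of our actual pipeline:

```python
from datetime import date

def five_year_window(ranking_year: int) -> tuple[date, date]:
    """Fixed five-year window ending December 31st of the year before
    the ranking is compiled; papers published and citations received
    are both counted within this span."""
    start = date(ranking_year - 5, 1, 1)
    end = date(ranking_year - 1, 12, 31)
    return start, end

# For the 2009 analysis: 2004-01-01 through 2008-12-31.
print(five_year_window(2009))
```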

The custom data for the forthcoming 2009 analysis amounts to 18GB of raw XML data – along with this, Elsevier provide an affiliation table. This table is an ever-improving lens that we can use to identify the mappings required to retrieve the aggregate data we need. We search the affiliation table for strings that match the universities (or their alternate names) in our database, which returns a list of eight-digit affiliation IDs that we can then use to retrieve and aggregate data from the main dataset. If key names are missing from the affiliation table, it is very difficult to identify any content that may exist in the main dataset.
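As a rough sketch of that lookup-and-aggregate step – assuming an affiliation table of (ID, name) pairs and a mapping from IDs to paper and citation counts, neither of which reflects Elsevier's actual schema:

```python
def find_affiliation_ids(affiliation_table, name_variants):
    """Return the affiliation IDs whose recorded name contains any of
    the institution's known name variants (case-insensitive substring
    match; real matching would need to be far more careful)."""
    variants = [v.lower() for v in name_variants]
    return {aff_id for aff_id, aff_name in affiliation_table
            if any(v in aff_name.lower() for v in variants)}

def aggregate_counts(counts_by_id, affiliation_ids):
    """Sum papers and citations over every matched affiliation ID.
    `counts_by_id` maps ID -> (paper_count, citation_count)."""
    papers = sum(counts_by_id[i][0] for i in affiliation_ids if i in counts_by_id)
    citations = sum(counts_by_id[i][1] for i in affiliation_ids if i in counts_by_id)
    return papers, citations

# Invented eight-digit IDs and counts, purely for illustration.
table = [(10000001, "Massachusetts Institute of Technology"),
         (10000002, "MIT Media Lab")]
counts = {10000001: (5200, 61000), 10000002: (300, 2100)}
ids = find_affiliation_ids(table, ["Massachusetts Institute of Technology", "MIT"])
print(aggregate_counts(counts, ids))  # (5500, 63100)
```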

Since the publication of the QS.com Asian University Rankings, a couple of institutions have come forward to point out that, to some degree or another, data is missing for their institution. This was discovered thanks to our practice of sharing a “fact file” with institutions prior to publication. Each of them is now working with QS to ensure that any shortfall is rectified in the future.

In future we will be splitting our fact file distribution in two, with one part coming out long in advance of publication, followed by a media briefing, including the ranking results, two days prior to the publication date.

QS.com Asian University Rankings: Beyond the obvious…

by Ben Sowter

 

I have just returned from a trip to South Korea and Japan, where I was presenting the methodology and results of the QS.com Asian University Rankings (AUR) and speaking to a number of universities about the implications of the results in both general and specific terms. Inevitably, as with any ranking, upon publication some institutions are pleased and some are disappointed. The University of Hong Kong, at number one, seem very pleased, and their vice-chancellor has been very hospitable and forthcoming; the University of Tokyo… not so much.

These results are not necessarily an omen for the next THE-QS World University Rankings (WUR), however, where Tokyo is reasonably likely to assert itself once again as number one amongst Asian institutions in the global context. There seems to be a little confusion about that: how can two evaluations from the same organisation yield different results? Well, it’s all about the context. Firstly, the methodology for AUR is different from that of WUR; in the narrower context we have been able to gather more data – most notably in the area of exchange programs – and we have altered the way we look at publications and citations with a view to being more generous to institutions not operating principally in English. Even without the methodological alterations, however, the results would not have been the same as WUR, because the normalisation of each data point involves the mean and standard deviation of a wildly different dataset.
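A toy example of that last point, with invented citation counts: standardising the same raw score against two different cohorts produces very different standings.

```python
from statistics import mean, stdev

def z_score(value, cohort):
    """Standardise a raw indicator value against the mean and standard
    deviation of the cohort it is compared within."""
    return (value - mean(cohort)) / stdev(cohort)

# The same raw score of 70 looks unremarkable against a strong world
# cohort but very strong against a narrower regional cohort.
world_cohort = [70, 40, 95, 120, 60, 85]
asian_cohort = [70, 40, 35, 55, 60, 25]
print(z_score(70, world_cohort))  # below the cohort mean: negative
print(z_score(70, asian_cohort))  # well above the cohort mean: positive
```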

“Flagship” thinking seems to dominate for most people in the region – they look at the performance of the top institution in their country and, if it has dropped, assume either that the higher education system in their country is failing and requires reform or, more commonly, that the ranking has no value and need not be considered any further.

Digging a little deeper into the performance of Japanese universities in AUR, however, reveals a healthy counterpoint to any observations about the performance of the University of Tokyo. Japanese universities as a cohort have done better in AUR than in WUR, with 10 institutions in the top 20 as opposed to only 6 in an Asian extraction from WUR. The implication is that the AUR methodology is indeed, as intended, more accepting of institutions with limited output and teaching in English than WUR. The trend for Japanese universities continues all the way through the results.

Number of Japanese institutions featured within various strata of the AUR and WUR

As the chart shows, the performance of Japanese institutions collectively is considerably stronger in AUR than in WUR. The same is true of institutions in South Korea and Hong Kong, while institutions in Singapore, Mainland China and Taiwan fare less well.

The Impact of Rankings

by Ben Sowter

 

Since long before the global university ranking movement emerged in 2003, academics, university leadership and government officials have debated the impact of university rankings. Marguerite Clarke, formerly of Flinders University in South Australia, did some good work focused on the US News ranking, looking at matters such as access – this is summarised in an article she supplied for our website here.

The point is that, independently of anyone’s opinion on their individual characteristics, rankings are having an impact on higher education policy and decision making around the world and, according to a study recently completed by the Institute for Higher Education Policy (IHEP), much of this impact is (cautiously) positive.

Their press release provides an independent viewpoint, so I have included the full text:

GLOBAL RANKING SYSTEMS MAY DRIVE NEW DECISION MAKING AT U.S. HIGHER EDUCATION INSTITUTIONS
Examination of Four Countries with High-Profile College Rankings Suggests Institutional Practices May Improve Through New Approaches

Washington, D.C., May 21, 2009—The ranking of higher education institutions is steadily growing into a global phenomenon—currently more than 40 countries have ranking systems, in addition to several international rankings that compare institutions across national lines. With this proliferation, many campus stakeholders question the goals, uses, and outcomes of these systems. However, it is also important to understand the ways institutions are using rankings to inform their work and to consider how the institutional use of ranking systems in other countries can inform practices in the United States.

Based on interviews with key institutional stakeholders in four countries—Australia, Canada, Germany, and Japan— the Institute for Higher Education Policy (IHEP) examines ranking systems and their impact on the policies and practices at colleges and universities. In its new issue brief, Impact of College Rankings on Institutional Decision Making: Four Country Case Studies, IHEP explores the nuances and unique approaches in which rankings could prompt institutions to work in innovative ways.

“At a time when institutional accountability, assessment, and data-driven decision making pressures are at a high both in the United States and abroad, this report provides a useful framework for considering how rankings add to and distract from institutional improvement efforts,” said IHEP President Michelle Asha Cooper, Ph.D. “It is our hope that institutions will consider the strategies used in other countries to reexamine the positive and negative ways rankings are influencing their own work.”

Although valid criticisms of rankings were offered in this issue brief, it highlighted a number of findings on how institutions have explored new ways of doing work. In many cases, rankings can trigger a shift of institutional resources for such productive uses as faculty profile, research collection and analysis, and student learning outcomes. These changes can also be integrated into broader strategic planning initiatives to change national and international higher education policy contexts.

World University Classifications?

by Ben Sowter

 

I imagine this is too simple an idea to be particularly practical but would welcome feedback either way.

The THE-QS World University Rankings, amongst others, are frequently criticized in all sorts of ways, some fair and some not.

One of the most common observations is the failure of most aggregate ranking systems, whether international or domestic, to acknowledge the different missions and typologies of institutions.

In the case of the THE-QS exercise, large institutions are likely to be advantaged in terms of recognition, whilst smaller ones may have greater ability to perform in some of the ratio-based indicators.

In the US we frequently refer to the Carnegie classification system to better understand the nature of institutions that are featured in the rankings. What if we were to apply a similar, albeit simpler, concept to universities at a world level and include a classification alongside all ranking results?

Classifications might include:

Type A: Large, fully comprehensive

More than 10,000 students. Offers programs in all 5 of our broad faculty areas. Has a medical school.

(i) High Research – over 5,000 papers in the 5-year Scopus extract.
(ii) Moderate Research – 1,000-4,999 papers in the 5-year Scopus extract.
(iii) Low Research – 100-999 papers in the 5-year Scopus extract.
(iv) Negligible Research – fewer than 100 papers in the 5-year Scopus extract.

Type B: Large, comprehensive

More than 10,000 students. Operates programs in ALL of our 5 broad faculty areas. Has no medical school.

(i-iv) Reduced thresholds

Type C: Large, focused

More than 10,000 students. Operates programs in 3 or 4 of our broad faculty areas.

(i-iv) Reduced thresholds

Type D: Large, specialist

More than 10,000 students. Operates programs in 1 or 2 of our broad faculty areas.

(i-iv) Research thresholds set against mean or median for stated specialist faculty areas

Types E-H: Same as above but for medium-sized institutions – 4,000-10,000 students.

Types I-L: Same as above but for small institutions – fewer than 4,000 students.

A (u) or (p) could be added to denote institutions that only offer programs at either undergraduate or postgraduate level.
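A minimal sketch of how the scheme might be applied mechanically appears below. Since the “reduced thresholds” for Types B-D and the medium and small size groups are not pinned down above, the sketch simply reuses the Type A research bands throughout, and treats exactly 5,000 papers as band (i):

```python
def classify(students: int, faculty_areas: int,
             has_medical_school: bool, papers_5yr: int) -> str:
    """Assign a classification letter plus research band under the
    scheme proposed above (illustrative thresholds only)."""
    # Size group: A-D large, E-H medium, I-L small.
    if students > 10_000:
        offset = 0
    elif students >= 4_000:
        offset = 4
    else:
        offset = 8

    # Breadth within the size group.
    if faculty_areas == 5:
        breadth = 0 if has_medical_school else 1  # fully comprehensive / comprehensive
    elif faculty_areas >= 3:
        breadth = 2  # focused
    else:
        breadth = 3  # specialist
    letter = "ABCDEFGHIJKL"[offset + breadth]

    # Research band, reusing the Type A thresholds for every type.
    if papers_5yr >= 5_000:
        band = "i"
    elif papers_5yr >= 1_000:
        band = "ii"
    elif papers_5yr >= 100:
        band = "iii"
    else:
        band = "iv"
    return f"{letter}({band})"

# A large, fully comprehensive, high-research institution:
print(classify(25_000, 5, True, 12_000))  # A(i)
```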

This is unlikely yet to be exhaustive, but a system such as this may help readers put the ranking results in context. Thoughts and suggestions are welcome.

QS.com Asian University Rankings: The Top 100

The results of the QS.com Asian University Rankings are finally here. You can view the full results and more detail on the methodology at http://www.topuniversities.com/university-rankings/asian-university-rankings, but here are the Top 100 to get you started…

2009 Rank | School Name | Country
Source: QS Quacquarelli Symonds (www.qs.com)
Copyright © 2004-2009 QS Quacquarelli Symonds Ltd.
1 University of HONG KONG Hong Kong
2 The CHINESE University of Hong Kong Hong Kong
3 University of TOKYO Japan
4 HONG KONG University of Science and Tech… Hong Kong
5 KYOTO University Japan
6 OSAKA University Japan
7 KAIST – Korea Advanced Institute of Scie… Korea, South
8 SEOUL National University Korea, South
9 TOKYO Institute of Technology Japan
10= National University of Singapore (NUS) Singapore
10= PEKING University China
12 NAGOYA University Japan
13 TOHOKU University Japan
14 Nanyang Technological University (NTU) Singapore
15= KYUSHU University Japan
15= TSINGHUA University China
17 Pohang University of Science and Technol… Korea, South
18 CITY University of Hong Kong Hong Kong
19 University of TSUKUBA Japan
20= HOKKAIDO University Japan
20= KEIO University Japan
22 National TAIWAN University Taiwan
23 KOBE University Japan
24 University of Science and Technology of … China
25 YONSEI University Korea, South
26 FUDAN University China
27 NANJING University China
28 HIROSHIMA University Japan
29 SHANGHAI JIAO TONG University China
30= Indian Institute of Technology Bombay (I… India
30= MAHIDOL University Thailand
32 ZHEJIANG University China
33 KOREA University Korea, South
34 Indian Institute of Technology Kanpur (I… India
35 CHULALONGKORN University Thailand
36 Indian Institute of Technology Delhi (II… India
37 WASEDA University Japan
38 The HONG KONG Polytechnic University Hong Kong
39 Universiti Malaya (UM) Malaysia
40 National TSING HUA University Taiwan
41 CHIBA University Japan
42 EWHA WOMANS University Korea, South
43 National CHENG KUNG University Taiwan
44 SUNGKYUNKWAN University Korea, South
45 NAGASAKI University Japan
46 HANYANG University Korea, South
47 National YANG MING University Taiwan
48 TOKYO Metropolitan University Japan
49 Indian Institute of Technology Madras (I… India
50 University of INDONESIA Indonesia
51 Universiti Kebangsaan Malaysia (UKM) Malaysia
52 SHOWA University Japan
53 KUMAMOTO University Japan
54 YOKOHAMA NATIONAL University Japan
55 YOKOHAMA CITY University Japan
56 OKAYAMA University Japan
57 KYUNG HEE University Korea, South
58 PUSAN National University Korea, South
59 GIFU University Japan
60 University of DELHI India
61 SOGANG University Korea, South
62 KANAZAWA University Japan
63= Indian Institute of Technology Roorkee (… India
63= OSAKA CITY University Japan
63= Universitas GADJAH MADA Indonesia
63= University of the PHILIPPINES Philippines
67 TOKYO University of Science (TUS) Japan
68 GUNMA University Japan
69 Universiti Sains Malaysia (USM) Malaysia
70 TIANJIN University China
71 National SUN YAT-SEN University Taiwan
72 National TAIWAN University of Science an… Taiwan
73 Hong Kong BAPTIST University Hong Kong
74 National CHIAO TUNG University Taiwan
75 XI’AN JIAOTONG University China
76 DE LA SALLE University Philippines
77 National CENTRAL University Taiwan
78 NIIGATA University Japan
79 OCHANOMIZU University Japan
80 BANDUNG Institute of Technology (ITB) Indonesia
81 CHIANG MAI University Thailand
82= KYUNGPOOK National University Korea, South
82= Universiti Teknologi Malaysia (UTM) Malaysia
84 Ateneo de MANILA University Philippines
85 THAMMASAT University Thailand
86 TOKAI University Japan
87 MIE University Japan
88 CHONNAM National University Korea, South
89 KAGOSHIMA University Japan
90 Universiti Putra Malaysia (UPM) Malaysia
91 CHANG GUNG University Taiwan
92 INHA University Korea, South
93 TOKYO University of Agriculture and Tech… Japan
94 TONGJI University China
95 SOUTHEAST University China
96 HITOTSUBASHI University Japan
97 CHONBUK National University Korea, South
98 AJOU University Korea, South
99 CHUNGNAM National University Korea, South
100 University of PUNE India

QS.com Asian University Rankings due to provide insight on a larger number of indicators

by Ben Sowter

 

It has been encouraging to see traffic on this fledgling blog spike today in anticipation of the QS.com Asian University Rankings, due for publication tomorrow. It has been a very busy time responding to individual institutions and preparing our press campaign. The methodology is somewhat different from that of the THE-QS World University Rankings: with a smaller number of countries, we have been able to gather adequate data on a couple of additional indicators – the internationalisation area now features inbound and outbound exchange numbers, whilst the citations per faculty indicator has been split into papers per faculty (productivity) and citations per paper (quality).
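For clarity, the split looks like this – a minimal sketch with invented figures:

```python
def research_indicators(papers: int, citations: int, faculty: int):
    """Split citations per faculty into a productivity measure
    (papers per faculty) and a quality measure (citations per paper)."""
    return papers / faculty, citations / papers

# Invented figures: 4,000 papers, 30,000 citations, 2,500 faculty.
print(research_indicators(4_000, 30_000, 2_500))  # (1.6, 7.5)
```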

Additionally, the regional exercise emphasises the performance differences between institutions in the region – particularly in research measures, where the presence of US institutions significantly compresses the scale.

All this means there may be a few small surprises tomorrow when the results are published. Results and more detail on the methodology will emerge initially through Chosun Ilbo (www.chosun.com), our partner in South Korea, and will follow at 6.00AM GMT on our website – www.topuniversities.com.

I will try and find time later in the week to put together a more complete post looking at some of the results and some of the interesting contrasts between the results of this exercise and those of the world rankings. I also look forward to reading and responding to any comments about the methodology or results – we’re always interested in feedback and providing a balanced view.

The geography of rankings

Some helpful fellow in Germany has plotted the locations of the Top 100 universities in both the THE-QS World University Rankings and the Shanghai Jiao Tong exercise on a friendly, interactive Google map, to be found at www.university-rankings.net.

Geographic distribution of top 100 universities in Shanghai Jiao Tong’s Academic Ranking of World Universities

Geographic distribution of top 100 universities in THE-QS World University Rankings

There are some interesting contrasts between the two maps, even looking only at the Top 100. THE-QS includes institutions in China and Singapore and is more generous towards Australasia, and whilst the picture looks similar in the coastal US states, SJTU shows greater favour towards institutions in the Mid-West. Sadly, the exercise is currently limited to the top 100 – it would be interesting to see the greater contrast further down the lists and, perhaps, how these compare with the results of other ranking exercises, both international and domestic.

Financial factors can be a dangerous measure

by Ben Sowter

 

Many people have suggested that financial indicators ought to be considered in rankings of institutions, and the logic is clear – candidate measures include:

  • Average Fees
  • Return on Investment
  • Average Salary upon Graduation
  • Investment per Student
  • Library Spending
  • Total Research Funding
  • Share of Government Research Funding

Whilst this might make a lot of sense in an individual domestic context, that may not necessarily be the case when the ranking exercise in question has a broader scope. The fundamental objective of almost any ranking (and there appear to be some exceptions) is to evaluate the performance of the institution. Sadly, most if not all financial indicators are subject to external economic influences which are very difficult to adjust for. This has led the THE-QS team to conclude that financial indicators are unlikely ever to be practical for the global exercise.

Business schools are amongst the most familiar with rankings – the Financial Times and Business Week rankings, amongst others, have been running for some time and are well established. In contrast to many domestic exercises, the FT is very open with its historical results. The chart below shows the number of UK business schools appearing in the top 50 against the interbank Dollar-Sterling exchange rate on January 1st of each year. Whilst there isn’t a perfect match, the trend certainly seems to be that the strength of UK schools is strongly linked to the strength of the pound.

Comparison of UK business school performance in FT rankings against January exchange rates
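For anyone who wants to reproduce this kind of check, a minimal sketch follows; the figures are invented for illustration and are not the data behind the chart:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical January USD/GBP rates and the number of UK schools in
# the FT top 50 for the same years (illustrative values only).
usd_per_gbp = [1.45, 1.61, 1.79, 1.92, 1.97, 1.44]
uk_schools_in_top_50 = [5, 6, 8, 9, 9, 5]

# A coefficient close to +1 would support the claimed link between
# the strength of the pound and the apparent strength of UK schools.
print(correlation(usd_per_gbp, uk_schools_in_top_50))
```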

Sadly, no business school has yet been able to establish a great deal of influence over comparative currency strength or the global economy as a whole. This effect is completely outside their control, which invalidates the ranking as a whole as a serious performance measure.

The steep decline in the strength of the pound is likely to make 2010 a very difficult year for UK universities in ranking terms, and any investment they make in improving teaching or research is unlikely to significantly assist their situation. Graduates from UK schools – although highly international – are likely to accept lower salaries on graduation, even outside the UK, with exchange rates as they stand, and UK salaries are worth less.

No ranking is perfect – even the most established and accepted. Any set of results needs to be put in context.

University Rankings: There can be no “right answer”.

by Ben Sowter

 

Part of the excitement of university and business school rankings is that there is no “ultimate solution”. At a symposium at Griffith University in 2007, Nian Cai Liu – who leads Shanghai Jiao Tong’s Academic Ranking of World Universities (www.arwu.org) – was posed the question, “Many rankings use surveys as a component of their methodology; why do you choose not to?”. His matter-of-fact response was “I’m an Engineer”.

But his team’s selection of Nobel Prizes and Highly Cited Authors as indicators is not intrinsically less questionable as a measure of university quality in the round – which, regardless of stated purpose, is what the results are often used for. Three days ago, at a comparable event in Madrid organised by Isidro Aguillo and his Cybermetrics team, similar aspersions were cast on surveys in contrast with more “statistically robust” measures such as link analysis – as used for the Webometrics exercise (www.webometrics.info). The supposition was made that simply because the THE-QS exercise is the most “geographically generous” of the four global aggregate rankings, it must somehow be wrong, and that maybe survey bias is to blame.

Well I have news for you. THEY ARE ALL WRONG.

The higher profile of universities in China and Hong Kong in THE-QS was cited as evidence of survey bias – whilst it is well documented on our website that the survey response from China, in particular, is disproportionately low. We are working to remedy this, but the survey is clearly unlikely to strongly favour Chinese institutions – these universities are performing well due to the profile they are building outside China.

Despite the fact that these surveys are currently conducted only in English and Spanish, the survey components carry a much smaller language bias than Nobel Prizes, citations (in any index), cybermetrics, publications in Nature & Science, highly cited authors and many other factors selected by other international evaluations. Respondents, even those responding in English, are cognisant of the performance of other institutions in their own language – and this seems to be coming through in the results.

Sure, there are biases in the surveys and in the system overall – some are partially corrected for and some are not – but these exist in every other system too, even if they may not be quite as immediately evident.

The THE-QS work is presented prolifically around the world – by myself, my colleagues, the THE and third parties. We present it alongside the other exercises and are always careful to acknowledge that each has its value and each, including our own, has its pitfalls. NONE should be taken too seriously, and to date ALL bear some interest if viewed objectively.

The most entertaining input I have received since conducting this work came from an academic who systematically discredited all of the indicators we have been using but then concluded that, overall, he “liked what we were doing”. It is possible to do that with any of the systems out there – domestic, regional or global. The most savvy universities are using the rankings phenomenon to catalyse and establish keener performance evaluation internally at a faculty, department and individual staff member level. Driving it down to this level can help build actionable metrics as opposed to abstract statistics, and this can lead to a university being able to revolutionise its performance in education and research and, in time, as a side-effect rather than an objective, improve its performance in rankings.