Friday, December 02, 2011

European Universities and Rankings

Research Trends, the newsletter from Scopus, reports on a conference of European universities that discussed international rankings. The participants found positive aspects to rankings but also had criticisms:

Going through the comparison of the various methodologies, the report details what is actually measured, how the scores for indicators are measured, and how the final scores are calculated — and therefore what the results actually mean.
The first criticism of university rankings is that they tend to principally measure research activities and not teaching. Moreover, the ‘unintended consequences’ of the rankings are clear, with more and more institutions tending to modify their strategy in order to improve their position in the rankings instead of focusing on their main missions.
For some ranking systems, lack of transparency is a major concern, and the QS World University Ranking in particular was criticized for not being sufficiently transparent.
The report also reveals the subjectivity in the proxies chosen and in the weight attached to each, which leads to composite scores that reflect the ranking provider’s concept of quality (for example, it may be decided that a given indicator may count for 25% or 50% of overall assessment score, yet this choice reflects a subjective assessment of what is important for a high-quality institute). In addition, indicator scores are not absolute but relative measures, which can complicate comparisons of indicator scores. For example, if the indicator is number of students per faculty, what does a score of, say, 23 mean? That there are 23 students per faculty member? Or does it mean that this institute has 23% of the students per faculty compared with institutes with the highest number of students/faculty? Moreover, considering simple counts or relative values is not neutral. As an example, the Academic Ranking of World Universities ranking does not take into consideration the size of the institutions.
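The report's point about weights and relative scores is easy to illustrate with a small sketch. All numbers, indicator choices and university names below are invented for illustration; the point is simply that two equally defensible weighting schemes applied to the same raw data can crown different "winners".

```python
# Hypothetical illustration: a composite ranking score built from
# min-max normalized (i.e. relative, not absolute) indicators,
# combined with subjectively chosen weights.

def normalize(values):
    """Rescale raw indicator values to 0-100 relative to the cohort."""
    lo, hi = min(values), max(values)
    return [100 * (v - lo) / (hi - lo) for v in values]

def composite(scores, weights):
    """Weighted sum of normalized indicator scores (weights sum to 1)."""
    return sum(s * w for s, w in zip(scores, weights))

# Invented raw indicators for three fictional universities:
# (citations per paper, students per faculty member)
raw = {
    "Univ A": (12.0, 10),
    "Univ B": (8.0, 23),
    "Univ C": (15.0, 30),
}

citations = normalize([v[0] for v in raw.values()])
# Fewer students per faculty member is treated as better, so invert
# before normalizing.
staff_ratio = normalize([-v[1] for v in raw.values()])

# Two plausible weighting schemes for (citations, staff ratio):
winners = {}
for weights in [(0.5, 0.5), (0.8, 0.2)]:
    totals = {name: composite((c, s), weights)
              for name, c, s in zip(raw, citations, staff_ratio)}
    winners[weights] = max(totals, key=totals.get)

print(winners)
```

With a 50/50 split the fictional Univ A comes first; shifting the citation weight to 80% hands first place to Univ C. Nothing about the underlying data changed, only the provider's judgement of what matters, which is exactly the subjectivity the report describes.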

I am not sure these criticisms are entirely fair. It seems that the weighting of the various indicators in the Times Higher Education rankings emerged from a lot of to-ing and fro-ing between various stakeholders and advisers. In the end, far too much weighting was given to citations, but that is not quite the same as assigning arbitrary or subjective values.

The Shanghai rankings do have an indicator, productivity per capita, that takes faculty size into account, although it is only ten per cent of the total ranking. The problem here is that faculty in the humanities are counted but not their publications.

I am not sure why QS is being singled out with regard to transparency. The THE rankings are also, perhaps in a different way, quite opaque. Aggregate scores are given for teaching environment, research and international orientation without indicating the scores that make up these criteria.

So what is to be done?


The EUA report makes several recommendations for ranking-makers, including the need to mention what the ranking is for, and for whom it is intended. Among the suggestions to improve the rankings, the following received the greatest attention from the audience:
  1. Include non-journal publications properly, including books, which are especially important for social sciences and the arts and humanities;
  2. Address language issues (is an abstract available in English, as local language versions are often less visible?);
  3. Include more universities: currently the rankings assess only 1–3% of the 17,000 existing universities worldwide;
  4. Take into consideration the teaching mission with relevant indicators.

The first of these may become feasible now that Thomson Reuters has a book citation index. The second and third are uncontroversial. The fourth is very problematical in many ways.

The missing indicator here is student quality. To be very blunt, universities can educate and instruct students but they can do very little to make them brighter. A big contribution to any university ranking would be a comparison of the relative cognitive ability of its students. That, however, is a goal that requires passing through many minefields.

4 comments:

jaylen watkins said...

European universities ranking watch is a very nice topic related to the methodology. The best ins and outs have been listed.

Job requirements

Anonymous said...

99% of that report is simply "cut & paste" of the methodological sections of each ranking. They do not make any real attempt at in-depth analysis, and the conclusions are mostly useless.

Anonymous said...

The Book Citation Index is by design going to be terribly biased. Another Jonathan Adams fiasco, like the bibliometric section of the THE ranking.

Jeremy Alder said...

I'm curious why you think #4 is so problematical. Sure, teachers can't make students brighter, but they can instruct them in more and less effective ways that should be (in principle) measurable, don't you think? This actually seems to me like a very fruitful avenue of further thought and research for those interested in improving the accuracy and usefulness of college rankings.