Saturday, June 19, 2010

A Bit More About the THE Survey

Thomson Reuters have released a bit more information about the reputational survey they recently conducted for the 2010 Times Higher Education World University Rankings.

They managed to get 13,388 responses. This is well short of the original target of 25,000, although it is higher than the 9,000-plus respondents to the 2009 THE-QS rankings. This means that QS, who are preparing their own rankings, now have an opportunity to boost the number of their respondents by using the usual devices -- reminders, extended deadlines, a chance to win an iPad instead of a Blackberry and so on. Thomson Reuters may have made a mistake by closing their survey so early.

Still, numbers are not everything. Thomson Reuters can claim that their survey, which uses the ISI database of authors published in reputable academic journals, targets people who know something about research. The QS survey, on the other hand, consists merely of those who have managed to get on the mailing list of World Scientific.

Thomson Reuters have also provided some information about the regional and disciplinary distributions of their respondents. The largest group is from the Americas. While most disciplinary clusters are well represented, there is a very small number from the arts and humanities. Respondents spend slightly more than half their time doing research and slightly less than a third teaching.

Is this really enough? It would be interesting to know how many forms were sent out and what the response rate was. Also, how far back in time did Thomson Reuters go in collecting respondents? If they went back five or ten years many respondents might have retired or lost interest in research since publishing.

It also would be helpful if more information were given about the geographical distribution of the survey. One notable absurdity of the THE-QS surveys of 2004-2009 was the marked bias in favor of particular countries – more respondents from Indonesia than from Germany, more from the UK plus Australia than from the US, more from Ireland (just the Republic?) than from Russia. Thomson Reuters have probably overcome these biases but have new ones emerged? Has there been an adequate response from Southeast Asia outside Singapore? Have Russia and Central Asia and the Middle East outside Israel been affected by the omission of Russian and Arabic from the list of languages in which the forms can be completed?

It is good that Thomson Reuters have released some information but if they are to fulfill their promise of greater transparency more is needed.

Friday, June 04, 2010

The New THE Ranking Methodology

Times Higher Education has given some information about the proposed structure and methodology of their forthcoming World University Rankings. At first sight, the new rankings look as though they might be an improvement on the THE-QS rankings of 2004-2009, but there are still unanswered questions and it is possible that the new rankings might have some defects of their own.

The proposed methodology will feature 13 indicators, possibly rising to 16 next year. Here we have the first problem. Frequent changes of method bedevilled the THE-QS rankings, producing, along with a series of errors, implausible rises and falls. If the new rankings are going to see further changes not just in the fine detail of data collection but in the actual indicators themselves, then we are going to see more spurious celebration or lamentation as universities bounce up and down the rankings. Still, if THE are going to standardise the indicator scores from the beginning, it is unlikely that their rankings will ever be as interesting as the THE-QS rankings used to be.
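To see why standardisation damps the dramatic swings, here is a minimal sketch of converting raw indicator values to z-scores. This is purely illustrative, not THE's published procedure; the numbers are hypothetical.

```python
# Hypothetical illustration: standardising raw indicator values as z-scores.
# Each score is re-expressed as its distance from the mean in standard
# deviations, which compresses outliers and makes year-on-year comparisons
# less volatile than raw totals.
from statistics import mean, pstdev

def standardise(raw_scores):
    """Convert raw indicator values to z-scores: (x - mean) / std dev."""
    mu = mean(raw_scores)
    sigma = pstdev(raw_scores)
    return [(x - mu) / sigma for x in raw_scores]

# One extreme value no longer dominates after standardisation.
print([round(z, 2) for z in standardise([10, 20, 30, 40, 100])])
```

By construction, the standardised scores have mean 0 and standard deviation 1, so a single outlier institution cannot stretch the whole scale the way it could with raw values.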

The largest component of the proposed ranking is "research indicators" which accounts for 55% of the weighting. These include academic papers, citation impact, research income, research income from public sources and industry and a reputational survey of research.

Another category is "institutional indicators", which together get 25%: number of undergraduate entrants, number of PhDs awarded, a reputation survey of teaching and institutional income.

Ten per cent will go to "international diversity", divided equally, as in the THE-QS rankings, into international students and international faculty.

Another ten per cent goes to economic activity/ innovation. At the moment this consists entirely of research income from industry although there are apparently plans to add two other measures next year.
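The four categories above can be summarised as a weighted sum. The sketch below is an illustration of how such a composite might be computed, with hypothetical category scores on a 0-100 scale; it is not THE's actual formula.

```python
# Illustrative sketch (not THE's actual formula): an overall score as a
# weighted sum of the four indicator categories described in the text.
WEIGHTS = {
    "research": 0.55,                 # research indicators
    "institutional": 0.25,            # institutional indicators
    "international_diversity": 0.10,  # international students and faculty
    "economic_activity": 0.10,        # economic activity / innovation
}

def composite_score(category_scores):
    """Weighted sum of category scores, each assumed to be on a 0-100 scale."""
    return sum(WEIGHTS[cat] * score for cat, score in category_scores.items())

# Hypothetical university: strong on research, weaker elsewhere.
example = {
    "research": 80.0,
    "institutional": 60.0,
    "international_diversity": 50.0,
    "economic_activity": 40.0,
}
print(composite_score(example))  # 0.55*80 + 0.25*60 + 0.10*50 + 0.10*40 = 68.0
```

The weights make the point of the paragraphs that follow: with 55% on research alone, a university's composite score is dominated by its research category before any other indicator is counted.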

There are some obvious rough edges in the proposals. The economic activity/innovation category consists entirely of research income from industry, yet research income from public sources and industry also appears under the research indicators. In the institutional indicators, universities will get credit for admitting undergraduates and for awarding PhDs but nothing for anyone in between. I doubt that this will go unchanged. If undergraduate entrants and PhDs awarded are to be institutional indicators, then we will see seriously negative backwash effects, with master's programs being phased out and marginal students being herded into doctoral programs.

The new methodology is less diverse than appears from a simple count of the number of indicators. It is heavily research orientated. As noted, more than half of the weighting goes to a bundle of research indicators. However, economic activity/innovation is for this year nothing more than research income.

Adding to the emphasis on research, the institutional indicators include the number of doctorates awarded and the ratio of doctorates to bachelor degrees awarded. Under institutional indicators there is a survey of teaching, but the respondents are largely selected on the basis of their being authors of academic articles published in ISI indexed journals. There seems to be no evidence that the respondents do very much teaching, and if Thomson Reuters include researchers with non-university affiliations, of whom there are many in medicine and engineering, then it is likely that many of those called upon to evaluate teaching have never done any teaching at all. Meanwhile student-faculty ratio, a crude measure of teaching quality, has been removed.

It is regrettable that THE has apparently decided to keep the international students indicator. This has caused demonstrable harm to universities in several countries by encouraging the recruitment of students with inadequate linguistic and cognitive skills. One modification that THE should consider, if they want to keep this measure, is declaring the EU a single entity. That was supposed to be the point of the Bologna process.


The proposed rankings include several indicators related to university income, including research income. This is not a bad idea. After all, the provision of adequate funds is a necessary although far from sufficient condition for the attainment of a reasonable level of quality. The inclusion of research income will, however, be detrimental to the interests of institutions like LSE that focus on the humanities and social sciences.


There are still unanswered questions. Some of these indicators will be scaled by dividing by the number of faculty. There will be many raised eyebrows if universities are required to include teaching staff who do no research in the measures of research output or research only staff in the other indicators. Whatever decision is made there is bound to be acrimonious wrangling.

Also unstated is the period from which the data for publications and citations are drawn. The further back the data collectors go, the better for traditional elite universities. It is also not stated whether they will count self-citations or publications in conference proceedings that are not rigorously reviewed.

So, if you want rankings that emphasise research and funding then THE and Thomson Reuters may be heading, somewhat uncertainly, in the right direction but perhaps at the price of neglecting other aspects of university quality.