Tuesday, September 21, 2010

From the Students' Room

There are some interesting comments about the latest THE rankings in their "Students' Room".

Here is one from Martin:

I appreciate that measuring the research impact of an institution is difficult. Unfortunately, the THE seems to have got it quite badly wrong this time. The best evidence for this is not that Warwick doesn't make it into the top 200 (even though this hurts; yes, I am faculty at Warwick), but the fact that the University of Alexandria makes it to number 4, based on the output of a single person.


Some suggestions follow:


1. Do not count self-citations. Even better, it is common sense that a citation by someone further from your field should count more than a citation by your former PhD student. Of course, these things are difficult to figure out in an automated way. But one could, for example, use the collaboration distance (coauthors have distance one, coauthors of coauthors have distance two, etc.) to weight citations (with a cap at 4, say).

2. Quality of research is paramount. As already pointed out, it is easy to get many citations for mediocre research if there are sufficiently many other mediocre researchers working in the same area. Again, this is vastly more common than you may think, for the simple reason that it is easier to perform mediocre research than world-class research. Furthermore, it earns you more recognition as measured by a raw citation count, so why not do it?

One way of taking this into account is to give higher weight to citations coming from articles published in highly respected journals. (Similarly, when measuring "research output", higher weight should be given to articles published in journals with high reputation.)

However, please *DO NOT* use the impact factor as a measure of the quality of a journal, as it can be (and is!) very easily manipulated, as the story of the journal "Chaos, Solitons and Fractals" shows. Instead, the only reliable way of assessing the quality of a journal within a given field is to ask researchers in that field to provide their own rankings. Yes, this seems subjective, but unfortunately that's all you are ever going to get, and I can assure you that you will get a very consistent picture within each area. The fact that the "Annals of Mathematics" is the most respected journal in mathematics simply cannot be measured in terms of impact factor.

3. Count current citations to older work. If someone's article turns out to spawn an entire new field of research five years later, it will not show up at all in the current metric. This simply makes no sense. Of course, this does not happen all that often, but the reason top institutions have their reputation is precisely those instances in which it does. Furthermore, there are areas of research (like mathematics) in which the "lifespan" of a good article is measured in decades, going way beyond the two to five years that you use as a rule. Counting current citations to older articles would be one small but absolutely essential step towards correcting this.

4. Measure the total impact of the institution in a field, not its "average" impact. The only way I can see that the output of a single person could count for so much is that this person somehow carries an abnormally high weight, probably because there is very little other research output from the U. of Alexandria. If this suspicion is correct (I hope I am wrong on this one), then universities are effectively penalised for having large (and influential!) departments and should instead strive to have only a very few, but quite prolific, researchers on their payroll.

There is probably more, but I am getting hungry now ;-) I very much hope that you will take these comments to heart. Best wishes,

Martin
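
Martin's first suggestion is concrete enough to sketch in code. Below is a minimal, hypothetical illustration of collaboration-distance weighting in Python: a toy coauthorship graph, a breadth-first search for the distance, and a citation weight that is zero for self-citations and capped at distance 4, roughly as he proposes. The graph, the author labels and the weighting formula are assumptions invented for illustration, not anything THE or its data supplier actually computes.

from collections import deque

# Toy coauthorship graph: author -> set of coauthors (symmetric).
# All names and links here are made up for illustration.
COAUTHORS = {
    "A": {"B"},
    "B": {"A", "C"},
    "C": {"B"},
    "D": set(),          # D has never written with A, B or C
}

def collaboration_distance(a, b, cap=4):
    """Shortest path between two authors in the coauthorship graph, capped at `cap`.
    Distance 0 means the same person, i.e. a self-citation."""
    if a == b:
        return 0
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist >= cap:
            break
        for coauthor in COAUTHORS.get(node, ()):
            if coauthor == b:
                return dist + 1
            if coauthor not in seen:
                seen.add(coauthor)
                frontier.append((coauthor, dist + 1))
    return cap  # unrelated (or very distant) authors count with full weight

def citation_weight(citing, cited, cap=4):
    """0 for self-citations, otherwise scaled by collaboration distance."""
    return collaboration_distance(citing, cited, cap) / cap

# A citation from a direct collaborator counts less than one from an
# unconnected researcher; self-citations are discarded entirely.
print(citation_weight("A", "A"))   # 0.0  self-citation, ignored
print(citation_weight("B", "A"))   # 0.25 direct coauthor
print(citation_weight("C", "A"))   # 0.5  coauthor of a coauthor
print(citation_weight("D", "A"))   # 1.0  no collaboration link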
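
His fourth point can be illustrated with equally invented numbers: once citations are averaged per paper, one heavily cited article from an otherwise tiny institution will swamp a large, productive department. A short sketch of the arithmetic, again with purely hypothetical figures:

from collections import defaultdict

# Hypothetical records: (institution, citations received by one paper).
# All figures are invented purely to show the averaging problem.
papers = [("Small U", 3000)]                      # one blockbuster paper
papers += [("Big U", c) for c in (40, 35, 60, 25, 80, 55, 30, 45, 70, 50)]

totals = defaultdict(int)
counts = defaultdict(int)
for institution, cites in papers:
    totals[institution] += cites
    counts[institution] += 1

for institution in totals:
    average = totals[institution] / counts[institution]
    print(institution, totals[institution], counts[institution], round(average))

# Small U: total 3000 over 1 paper   -> average 3000
# Big U:   total 490  over 10 papers -> average 49
# An "average impact" score ranks Small U far above Big U, even though
# Big U contributes far more to its field overall; a total (or carefully
# size-adjusted) measure would not reward publishing almost nothing.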
