Monday, December 24, 2007

Cambridge and Harvard

The THES-QS rankings can be viewed as a collection of complex interweaving narratives. There is the rise of China and its diaspora, the successful response of Australian universities to financial crisis, the brave attempts of Africa, spearheaded by the University of Cape Town, to break into the top 200.

The most interesting narrative is that of British universities -- Oxford, Cambridge and Imperial and University Colleges, London -- steadily coming closer to Harvard and pulling ahead of Princeton, Caltech and the rest.

This particular narrative requires rather more suspension of disbelief than most. By all accounts, including the Shanghai rankings and THES’s own count of citations per faculty, the research record of Cambridge and Oxford has been less than spectacular for several years.

Until this year Cambridge's apparent near equality with Harvard was largely the result of its performance on QS's survey of academic opinion, the so-called peer review. Since this has such an astonishingly low response rate, since it is noticeably biased against the US, and since its relationship with research proficiency as measured by citations per faculty or per paper is very limited, it should not be taken seriously.

This year methodological changes mean that the differences between Cambridge and Harvard on most measures are virtually obliterated. Both universities get 100 or 99 for the “peer review”, employer review and student faculty ratio. Both get 91 for international students.

Harvard stays ahead of Cambridge because of a much better performance on citations per faculty. I thought it might be interesting to see how this margin was achieved.

QS is now using the Scopus database, for which a 30-day free trial is available. THES states that the consultants counted the number of citations of papers published between 2002 and 2006 and then divided the total by the number of faculty. I have tried to reproduce QS's scores for Cambridge and Harvard.

First, here is the number of papers published by authors with an affiliation to “Cambridge University” between 2002 and 2006 and the number of citations of those papers. The number of documents in the Scopus database is increasing all the time so a count done today would yield different results. These numbers are from two weeks ago.

CAMBRIDGE (“Cambridge University”) 2002-2006


Life sciences

Documents 7,614

Citations 116,875

Health sciences

Documents 4,406

Citations 65,211

Physical sciences

Documents 11,514

Citations 100,225

Social sciences

Documents 2,636

Citations 24,292


Total

Documents 26,170

Citations 306,603

Using the FTE faculty figure of 3,765 provided by QS on their website, we have 83 citations per faculty.

I noticed that a number of authors gave their affiliation as “University of Cambridge”. This added 26,710 citations to make a total of 333,313 citations and 89 citations per faculty.
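
As a quick check on the arithmetic, here is a minimal sketch in Python that simply sums the category totals above and divides by QS's FTE faculty figure. The result comes out a point or two below the 83 quoted above, presumably because of rounding and because the database totals shift between searches.

cambridge_citations = 116875 + 65211 + 100225 + 24292  # "Cambridge University", 2002-2006
fte_faculty = 3765                                      # FTE faculty figure given by QS

print(cambridge_citations)                        # 306,603 citations in total
print(round(cambridge_citations / fte_faculty))   # about 81 citations per faculty

with_variant = cambridge_citations + 26710        # adding the "University of Cambridge" affiliations
print(round(with_variant / fte_faculty))          # about 89 citations per faculty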

Now for Harvard. Searching the Scopus database reveals the following totals of papers and citations for “Harvard University”.

HARVARD ("Harvard University") 2002-2006

Life sciences

Documents 4,003

Citations 79,663


Health sciences

Documents 2,577

Citations 47,486

Physical sciences

Documents 6,429

Citations 91,154

Social sciences

Documents 3,686

Citations 48,844

Total

Documents 16,695

Citations 267,147

I suspect that most observers would consider Cambridge's apparent superiority to Harvard in numbers of publications and citations more indicative of bias in the database than of anything else.


If we use QS's faculty headcount figure for Harvard of 3,369 and assume that 8 per cent of these are part-timers with a quarter-time teaching load, then we have 3,167 FTE faculty. This would give us 84 citations per faculty, slightly better than Cambridge if citations of "University of Cambridge" publications are excluded and somewhat worse if they are included.


The problem, though, is that QS gives Harvard a score of 96 for citations per faculty and Cambridge a score of 83. The only plausible way I can think of for Harvard to do so much better when they have fewer citations is that a smaller faculty figure was used to calculate the citations per faculty number for Harvard than was used to calculate the student faculty ratio. The Harvard web site refers to "about [sic] 2,497 non-medical faculty" and in QS's school profile of Harvard there is a reference to "more than 2,000 faculty". I suspect that this number was used to calculate the citations per faculty score while the larger number was used to calculate the student faculty ratio. Had the former been used for both criteria, then Cambridge and Harvard would have been virtually equal for citations and Cambridge would have moved into the lead by virtue of a better international faculty score.
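
To see how sensitive the citations per faculty figure is to the choice of denominator, here is a rough sketch using the Harvard totals above. The 8 per cent part-time share and the quarter-time load are the same assumptions made in the text; the 2,497 figure is the "non-medical faculty" number from the Harvard site.

harvard_citations = 267147   # "Harvard University", 2002-2006, from the Scopus totals above
headcount = 3369             # QS's faculty headcount for Harvard
part_time_share = 0.08       # USNWR: 8% of Harvard faculty are part-time
part_time_load = 0.25        # assumed quarter-time teaching load for part-timers

fte_faculty = headcount * (1 - part_time_share) + headcount * part_time_share * part_time_load
print(round(fte_faculty))                      # about 3,167 FTE faculty
print(round(harvard_citations / fte_faculty))  # about 84 citations per FTE faculty

# Using the smaller "non-medical faculty" figure instead makes Harvard look much stronger:
print(round(harvard_citations / 2497))         # about 107 citations per faculty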

There may be some other explanation. If so, I would be glad to hear it.

If this is what happened, then it would be interesting to know whether there was simply another run-of-the-mill error, with that ubiquitous junior staff member using two different faculty figures to calculate the two components, or a cynical ploy to prevent Cambridge moving into the lead too early.


Sunday, December 16, 2007

What Happened to Cardiff?

The Malaysian Star (print edition 16/12/07, E11) has a feature on Brian Smith, Vice-Chancellor of Cardiff University from 1993 to 2001. (The Star reports that he was appointed in 2001)

Professor Smith is reputed to have revitalised the research capability of Cardiff. According to the Star:

Said Prof Smith: “Cardiff offered a fantastic opportunity.

“Here was a university that had been through very difficult times; it was the perfect opportunity to try out my theories.

“And they worked because the people at Cardiff were ready for change and ready to change dramatically.”

The main problem faced by the university at that time was that it had not yet re-established itself as a research university.

According to Prof Smith, there are a number of factors involved in the move to regain a university's research strength.

“A very big factor is research staff.

“Because British universities have a great deal of autonomy and flexibility, we were able to go out and recruit.”

And that was how Prof Sir Martin Evans, one of this year's Nobel Prize in Medicine recipients, came to join the university.

“He came to a department that was not strong but actually managed to increase its number of publications in top journals 11-fold,” said Prof Smith.

...........................................................................

Asked how he managed to attract top people like Prof Evans to join him at Cardiff, Prof Smith said he believed what counted was not just a lucrative contract but the whole package.

“I don't think it's entirely about money. I feel that Prof Evans was equally attracted by the opportunity to unify the entire biology department and direct its vision,” he observed.

To encourage productivity, Prof Smith switched the promotion system from a quota-based system (where the total number of professorial positions in a faculty were pre-determined) to a performance-based one.

He even offered an attractive retirement package to faculty members who were not producing much research.

However, in order for universities to be able to do that, Prof Smith said they need autonomy.

“The university has to be free to offer different contracts (to academics and scientists).

“And within the university, a lot of power needs to be devolved to the young people.

“It's all about having decisions taken at the lowest level practicable.

“That’s a major change,” he said.


The article proceeds:

Due in large part to these strategies, Cardiff has risen from a ranking of 241 in the THES-QS World University Rankings in 2005 to 99 this year.

It may well be true that Cardiff researchers became more productive because of Professor Smith's policies. A quick look at the Scopus database indicates that from 1997 to 2007 the total output of research papers rose threefold.

It is also undeniable that Cardiff rose to 99th place in the THES-QS rankings this year.


It does not, however, follow that those two facts had anything to do with each other. For a start, one wonders why the rankings should detect the improvement in research only in 2007 and not in 2005 or 2006.

What really happened?

In 2006 Cardiff scored reasonably well on the "peer review" (151st out of the overall top 400 universities), the employer review (91st), student faculty ratio (111th), international faculty (116th) and international students (111th), but miserably on citations per faculty (253rd).

In 2007 Cardiff did better on the "peer review", rising to 129th, but worse on the employer review, falling to 250th. The other criteria were pretty much the same: 138th for student faculty ratio, 106th for international faculty, 110th for international students and 269th for citations per faculty.

It seems that Cardiff's remarkable improvement between 2006 and 2007 resulted from getting many more points for citations, 65 in 2007 as against 6 in 2006. This is far greater than any improvement resulting from a new database and is almost certainly caused by the introduction of Z scores this year.

What happened was that in 2006 Cardiff was doing OK on most measures but badly on research. In 2007 it was still doing OK on most measures, except for the employer review, and still doing badly for research. But in 2007, because of the smoothing of the curve, it got a lot more points for the limited amount of research that it did.
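
QS has not published exactly how the Z scores are rescaled onto a 0-100 range, so the cumulative-normal mapping in the sketch below is only an assumption, and the citations figures are invented. It is meant simply to illustrate the general effect: when the raw data are heavily skewed by a few universities with enormous citation counts, scoring everyone as a percentage of the leader leaves the middle of the field with tiny scores, whereas standardising lifts them considerably.

import math
import statistics

def score_by_top(values):
    # old-style scaling: each value as a percentage of the leader's
    top = max(values)
    return [round(100 * v / top, 1) for v in values]

def score_by_z(values):
    # z-scores mapped onto 0-100 via the cumulative normal (an assumed rescaling)
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return [round(100 * phi((v - mean) / sd), 1) for v in values]

cites = [3, 5, 6, 8, 10, 12, 15, 20, 30, 95]  # invented citations-per-faculty figures, one outlier at the top
print(score_by_top(cites))  # the middle of the field scores in single digits or teens
print(score_by_z(cites))    # mid-field universities now score in the 30s to 60s; the outlier stays near 100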

The rise of Cardiff is largely an illusion created by a change in method.

Thursday, December 13, 2007

Comment on the THES-QS Rankings

There is an excellent article by Andrew Oswald of Warwick University in yesterday's Independent. It is worth quoting a large chunk of it here.

First, 2007 saw the release, by a UK commercial organisation, of an unpersuasive world university ranking. This put Oxford and Cambridge at equal second in the world. Lower down, at around the bottom of the world top-10, came University College London, above MIT. A university with the name of Stanford appeared at number 19 in the world. The University of California at Berkeley was equal to Edinburgh at 22 in the world.

Such claims do us a disservice. The organisations who promote such ideas should be unhappy themselves, and so should any supine UK universities who endorse results they view as untruthful. Using these league table results on your websites, universities, if in private you deride the quality of the findings, is unprincipled and will ultimately be destructive of yourselves, because if you are not in the truth business what business are you in, exactly?

Worse, this kind of material incorrectly reassures the UK government that our universities are international powerhouses.

Let us instead, a bit more coolly, do what people in universities are paid to do. Let us use reliable data to try to discern the truth. In the last 20 years, Oxford has won no Nobel Prizes. (Nor has Warwick.) Cambridge has done only slightly better. Stanford University in the United States, purportedly number 19 in the world, garnered three times as many Nobel Prizes over the past two decades as the universities of Oxford and Cambridge did combined. Worryingly, this period since the mid 1980s coincides precisely with the span over which UK universities have had to go through government Research Assessment Exercises (RAEs). To hide away from such inconvenient data is not going to do our nation any good. If John Denham, the Secretary of State for Innovation, Universities and Skills, is reading this, perhaps, as well as doing his best to question the newspapers that print erroneous world league tables, he might want to cut out these last sentences, blow them up to 100 point font, and paste them horizontally in a red frame on his bedroom ceiling, so that he sees them every time he wakes up or gets distracted from other duties. In his shoes, or out of them, this decline would be my biggest concern.

Since the 1980s the UK's Nobel-Prize performance has fallen off. Over the last 20 years, the US has been awarded 126 Nobel Prizes compared to Britain's nine.


The THES-QS rankings have done great damage to university education in Asia and Australia where they have distorted national education policies, promoted an emphasis on research at the expense of teaching and induced panic about a non-existent decline in some countries while encouraging false complacency about quality in others.

In the United Kingdom they have generally been taken as proof that British universities are the equals of the Ivy League and Californian universities, a claim that is plausible only if the rankings' numerous errors, biases and fluctuations are ignored.

I hope that Chris Patten and others who are in denial about the comparative decline of British universities will read Professor Oswald's article.

Wednesday, December 12, 2007

Student Faculty Ratios

Something especially striking about the THES-QS rankings this year is that British universities have done spectacularly well overall while getting miserable scores, comparatively speaking, on the citations section. We have to remember that this component does not measure the absolute number of citations but the number per faculty. It is then worth investigating whether the high scores for student faculty ratio are the result of inflated faculty numbers which have also led to reduced scores for citations per faculty. First, I want to look at the faculty data for the top British and American universities.

Cambridge

Looking at the QS website we find that they claim that Cambridge has a total of 3,765 Full Time Equivalent (FTE) faculty. The data was entered on 23/8/07 by Saad Shabbir, presumably an employee of QS.

Going to the Cambridge site we find that as of July, 2005, Cambridge had 1,558 academic staff, 1,167 academic-related staff (presumably in computers, administration, libraries and so on and probably also research) and 2,497 contract research staff. Adding the first and third categories and leaving out the second, gives us 4,055, close to QS’s figure for total faculty.

It seems reasonable then to conclude that QS added academic staff to research contract staff and made an adjustment to arrive at a Full Time Equivalent (FTE) number to come up with the total faculty. No doubt they got more up to date information than is available on the university website.

With 18,309 FTE students this gives us a student faculty ratio of 4.9. This is much better than the data from third party sources. The Higher Education Statistics Agency (HESA) provides a figure of 11.9.

It looks like QS have counted both teaching staff and contract research staff who do little or no teaching as faculty members.
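
A short sketch of the arithmetic, assuming the QS ratio is simply FTE students divided by FTE faculty. The last line is my own back-of-envelope check using academic staff alone, which comes out close to the HESA figure.

qs_fte_faculty = 3765    # QS's FTE faculty figure for Cambridge
fte_students = 18309     # FTE students from the QS profile
academic_staff = 1558    # Cambridge's own figure for academic staff, July 2005
contract_research = 2497 # contract research staff

print(academic_staff + contract_research)       # 4,055 -- close to QS's FTE faculty figure
print(round(fte_students / qs_fte_faculty, 1))  # about 4.9 students per faculty, as QS has it
print(round(fte_students / academic_staff, 1))  # about 11.8 if only academic staff are counted -- near HESA's 11.9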

Oxford

According to QS Oxford has 3,942 FTE faculty (data entered by Saad Shabbir 21/08/07) and 18,667 FTE students, a ratio of 4.7 students per faculty.

According to Oxford there were (July 2006) 1,407 academic staff, 612 in administration and computing, 169 library and museum staff, 753 in university funded research, 2,138 in externally funded research and 15 in self-funded research (all FTE). All this adds up to 4,094, very close to QS’s figure. It seems that for Oxford, QS has included research and other staff in the total faculty.

According to HESA Oxford has 13 students per faculty.


Imperial College London

The QS site indicates that Imperial has 2,963 FTE faculty and 12,025 FTE students (data entered by Saad Shabbir 21/08/07), a ratio of 4.1.

The Imperial site indicates 1,114 academic staff and 1,856 research staff (FTE 2006-7), a total of 2,970 academic and research staff combined. It would seem that QS have again counted research staff as faculty. This site refers to a 12,509 student load and a student staff ratio of 11.2. The HESA ratio is 9.4.

Harvard

According to QS, the Harvard faculty headcount is 3,369 (data entered by Baerbel Eckelmann 8/07/07). There were 29,000 students by headcount (FTE 16,520). The headcount student faculty ratio is 8.6.

According to the United States News and World Report (USNWR), 8% of Harvard's faculty are part-time. If part-time means doing a quarter of a full-time teaching load, this means that Harvard's FTE faculty would be about 3,167. The FTE student faculty ratio would then be about 5.2.

The Harvard site, however, refers to a much smaller number of faculty, 2,497 non-medical faculty, and to 20,042 students, making a ratio of 8.0. The USNWR indicates a ratio of 7 for Harvard (2005).


Something strange about QS’s data is that it refers to a headcount of 13,078 and 3,593 FTE undergraduates. This is something that definitely needs explaining.


Yale

According to QS, the number of faculty by headcount is 3,248. The number of students is 11,851 by headcount and 10,845 FTE. The headcount student faculty ratio is then 3.6.

According to the Yale site, there are 3,384 faculty and 11,358 students, a ratio of 3.4. (All figures from the 2006-7 academic year.)

For the fall of 2006 the faculty headcount included:

Tenured faculty 906

Term 966

Nonladder 903

Research 609

The USNWR ratio for Yale is 6.

Princeton

According to QS, the faculty headcount was 1,263 (entered by Baerbel Eckelmann 09/07/07). The number of students was 6,708 by headcount and 6,795 FTE. The headcount ratio is 5.3.

According to the Princeton site, there are more than 850 FTE faculty and 7,055 students, a ratio of 8.3. USNWR has a ratio of 5.

Conclusion

It seems that QS’s policy is to include any sort of research staff, whether or not they do any teaching, in the category of faculty. In some cases, other professional non-teaching staff are also included. This produces student faculty ratios that are noticeably better than those that can be calculated from, and sometimes specifically stated in, the universities’ web sites or that are provided by other sources. It looks as though British universities have benefited from this more than their American counterparts.

This means, very ironically, that this measure, which is supposed to be a proxy for teaching quality, is to a large extent a reflection of a university’s commitment to research since the employment of large numbers of researchers, or even librarians and computer programmers, would lead to an improvement in this ratio.
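
To make the trade-off concrete, here is a hypothetical sketch; the figures are invented and belong to no particular university. Adding non-teaching researchers to the faculty count improves the student faculty ratio and dilutes citations per faculty by exactly the same factor.

def ratios(students, citations, faculty):
    # (students per faculty, citations per faculty)
    return round(students / faculty, 1), round(citations / faculty, 1)

students, citations = 18000, 300000  # invented totals for illustration
teaching_staff = 1500
research_staff = 2300                # researchers who do little or no teaching

print(ratios(students, citations, teaching_staff))                   # (12.0, 200.0)
print(ratios(students, citations, teaching_staff + research_staff))  # (4.7, 78.9)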


It also looks as though leading British universities are favoured disproportionately by this procedure, although a definite conclusion would have to await more extensive analysis.


I think that we can put forward a working hypothesis that British universities have been ascribed inflated faculty numbers and that this contributes to high scores for teaching quality as measured by student faculty ratio and to low scores for research as measured by citations per faculty.

Sunday, December 09, 2007

Macquarie Update

Here is a little bit about Steven Schwartz, Vice-Chancellor of Macquarie University, that I just came across at Wikipedia.

Schwartz was named one of the 100 highest cited researchers in his field and he received many recognitions including a World Health Organisation Fellowship, a NATO fellowship and the Australian Academy of Science-Royal Society (London) Exchange Fellowship. He was elected by his peers to the Academy of Social Sciences and he was elected Morris Leibovitz Fellow at the University of Southern California. Schwartz is a Fellow of the Royal Society of the Arts, the Australian Institute of Company Directors, and the Australian Institute of Management. He was a visiting Fellow of Wolfson College, Oxford and he won the Brain Research Award of the British Red Cross Society. He was elected the first President of Sigma Xi, The Scientific Research Society in Australia and was awarded the distinguished Career Scientist Award by the National Institutes of Health. He served on the editorial boards of many scientific journals and was a fellow of many learned societies.

I assume that Wikipedia is not in error and that Dr. Schwartz does in fact have a highly distinguished research and academic record.

It is therefore very surprising that Dr. Schwartz has apparently shown an extreme degree of carelessness. He has stated in the Sydney Morning Herald (SMH) that Macquarie University's fall in this year's THES-QS rankings was because there was a change in the weighting that the rankings gave to the international students section. He also said that this was the reason for LSE's fall.

At the risk of being repetitive, let me point out again that the weighting of the international students section has nothing to do with Macquarie's fall. It was five per cent in 2004, 2005, 2006 and 2007. LSE fell because the consultants began using Z scores this year. This is a common statistical technique that has the effect of smoothing out scores. LSE fell not because of any change in the weighting but because other universities lagging behind on this measure got more points this year and therefore overtook LSE in the overall ranking.

I will repeat again that Macquarie fell in the rankings firstly because of a poor showing, like several other Australian universities, on the "peer review". This might have resulted from fewer responses from Australian universities this year or from respondents not being allowed to vote for their own universities or a combination of the two.

There was also a fall in its placing for international faculty. The overall effect of this was limited by the small weighting given to this criterion.

There was a fall in the citations per faculty section matched by a similar rise in the student faculty ratio. These two changes, which effectively cancelled each other out, might have been caused by an increase in the reported number of faculty, which would have a bad effect on citations per faculty and a good effect on the student faculty ratio.

It is also possible that the high score for international faculty in 2006 resulted from a low reported figure for total faculty.

I would like to ask a few questions.

Did Dr Schwartz read the THES's description of its methodology?

If he did, did he really misunderstand the description?

Dr Schwartz is reported to get a bonus of A$100,000 when Macquarie rises in the rankings. Why did this not encourage him to read about the methodology of the rankings carefully?

Why did SMH allow Dr Schwartz to publish an article in which he criticised the newspaper for not referring to this change in the rankings, when there is in fact no such change?

Will SMH point out to Dr Schwartz that there was no change in the weighting and request an apology from him?

Will Dr Schwartz investigate how QS gave Macquarie such a high and presumably incorrect score for international faculty in 2006?

Thursday, December 06, 2007

The Politics of Rankings: The Case of Macquarie

The Sydney Morning Herald has an article by Steven Schwartz, Vice-Chancellor of Macquarie University. He begins by arguing that university rankings cannot capture the full complexity of a large modern university. A good point, although it would have been more convincing had it been made before rather than after Macquarie's spectacular fall in the THES-QS rankings.

Schwartz goes on to say that:

Although those who work in universities know the pace of change is glacial, university rankings can change dramatically. For example, the Times Higher Education Supplement in Britain dropped Macquarie more than 80 places down the ranks in one year - front-page news in this newspaper. Was the previous ranking incorrect? Is the present one more accurate? The answer in both cases is no.

The changed ranking resulted from a decision by the publication to reduce the weight given to international students, so that many universities with large international enrolments dropped down the rankings. The prestigious London School of Economics dropped from 17 to 59. By omitting mention of this change in method, the Herald's report on November 9 produced more heat than light.

This is an extraordinary claim. There has been no change whatsoever in the weighting given to international students in the THES-QS rankings. It is five per cent this year, just as it has been since 2004.

Macquarie has fallen in the rankings for two reasons. First it fell from 93rd position in the survey of academic opinion to 142nd (among the overall top 400 universities). This could be because QS, the consultants who collect the data for the rankings, did not allow respondents to vote for their own institutions this year or because the number of respondents from Australia was lower.

Second, in 2006 Macquarie was in first place for international faculty, meaning that QS must have thought that at least half of Macquarie's faculty were international. This year the rankings have Macquarie in 55th place for international faculty. This represents, according to the QS website (registration required), a figure of 25% for international faculty.

Dr Schwartz would be well advised to find out how QS received incorrect information about international faculty in 2006.

The international students section had nothing at all to do with Macquarie's fall.

Dr Schwartz would probably claim that he has better things to do than read about the methodology of the rankings. I would entirely sympathise with him, although perhaps he should be more careful when writing about them or hire an assistant who would read them carefully.

Wednesday, December 05, 2007

Reactions to the Rankings

There is a very interesting article by Moshidi Sirat at GlobalHigherEd. He notes that reactions to the latest THES-QS rankings in the UK have varied widely. There is a lot of scepticism there but many universities are developing explicit strategies to boost their performance, with the aim of recruiting more international students. In Australia, there has been much debate, especially among universities that did not do so well. Brazilian universities do not seem to have shown much interest.

Sirat also notes:

A colleague in France noted that the manner Malaysia, especially the Malaysian Cabinet of Ministers and the Parliament, reacted to Times Higher rankings is relatively harsh. It appears that, in the specific case of Malaysia, the ranking outcome is being used by politicians to ‘flog’ senior officials governing higher education systems and/or universities. And yet critiques of such ranking schemes and their methodologies (e.g., via numerous discussions in Malaysia, or via the OECD or University Ranking Watch) go unnoticed. Malaysia better watch out, as the world is indeed watching us.


In a little while I hope to comment on the relative performance of Malaysian universities over the last few years. Reality is very different from the alleged ongoing decline presented by THES-QS.

Tuesday, December 04, 2007

Something about the QS "Peer Review"

QS's topuniversities site (registration required) has some information about the 2007 survey of academic opinion, which they insist on calling a peer review and which carries a 40% weighting in the THES-QS world university rankings.


First, there is a list of the subject areas and the number of respondents in 2007:

All areas 43
Arts and Humanities 312
Engineering and IT 810
Life Sciences and Biomedicine 339
Natural Sciences 776
Social Sciences 715

There is also a section that cross-references the respondents' chosen geographical region of expertise and their subject area.

An interesting item is the current location of the respondents:


United States 307
Italy 174
United Kingdom 171
New Zealand 125
Canada 123
Australia 108
India 106
Malaysia 99
Germany 92
Belgium 79
Singapore 76
France 74
Spain 70
Japan 58
Hong Kong 57
Philippines 56
Sweden 52
China 50
Ireland 47
Switzerland 46
Austria 41
Denmark 37
Indonesia 36
Brazil 33
Turkey 33
Portugal 27
Mexico 26
Poland 26
South Korea 25
Argentina 23
South Africa 23
Greece 22
Iran 22
Russia 21
Taiwan 21
Netherlands 19
Thailand 19
Finland 16
Other 586

Is it possible to keep a straight face while maintaining that this is a representative sample of international academic opinion? More respondents from the UK and Italy combined than from the US. More from New Zealand than from Germany. Almost the same number from Hong Kong and Japan. More from Ireland than from Russia. More from Belgium than from France.

Monday, December 03, 2007

Manipulating the Rankings?

There is a very interesting post at Wouter on the Web. I thought it worth pasting all of it.

From the university newspaper of Groningen we get some interesting insights in the way Groningen University has optimized their data for submission to the THES rankings. Deemed not to be important, the rector nevertheless wanted Groningen University to score better in the THES-QS rankings. For the rector, the first notation in the top 200 of the THES rankings, 173 to be exactly, was a good reason to celebrate with his subordinates.

What did they do? They concentrated on the questions of the most favourable number of students. The number of PhD students was a number they could play with. In the Netherlands PhD students are most often employed as faculty, albeit they are students as well to international standards. They contemplated on the position of the researchers in the University hospital. This would increase the number of staff considerably and thus lower the student/faculty ratio, but on the other hand this could have an important effect on the number of citations per research staff as well. Increases in staff number will lower the citations per staff. Which is detrimental to the overall performance. However, if they only could guarantee that citations to hospital staff were included in the citation counts as well?

So in Groningen they have exercised through some scenarios of number of students, number of staff, student/staff ratio and citations/staff ratio to arrive at the best combination to enhance their performance. I really do wonder if the contact between Groningen and QS -the consultants establishing the rankings- did also lead to the improvement of the search for citations by including the University Hospital for the university results. It is known from research by CWTS that searches for papers from all parts of the university are notoriously difficult. Especially to include the papers produced by staff from the teaching hospitals. In Groningen they have the feeling that it helped what they did in their contacts with QS. Well, at least it resulted in a nice picture on their university profile page.

Optimization or manipulation? It is only a thin line. If you only could make sure that all staff of your university would use the proper name of the institution in the authors affiliation. The university would gain a lot.

Chris Patten, Oxford and the Rankings

On 21st November, the London Spectator had an article by Chris Patten, former governor of Hong Kong, that referred to the THES-QS international university rankings and the high place that they give to Oxford and Cambridge.

In the last month, another respected international survey placed Oxford and Cambridge joint second to Harvard in the league table of world-class universities. This confirms what others have suggested in recent years. Moreover, other British universities — most notably London’s Imperial College and University College — came out high on the list. There are, alas, too few areas of our national life — the armed forces, the City of London, our diplomatic service — where we do as well in global comparisons. And it matters.


Patten suggests that the strength of Oxford and Cambridge lies in the balance between the colleges and the universities and that this is reflected in their performance in the rankings.

Patten is not the first to comment on the apparently excellent performance of British universities, Oxford and Cambridge especially, compared to other national institutions, not least the increasingly pathetic football team. There is a slight touch of desperation here. Even if we can’t beat Croatia or Macedonia, at least Cambridge and Oxford can still run rings around the Universities of Zagreb or Skopje or even Berkeley or Johns Hopkins. Nor is he the first to refer to the THES-QS rankings when commenting on the question of reorganising major British universities. The rankings have, for example, been used to bolster Imperial College London’s claim to become fully independent of the University of London.

But there are some very dubious claims here. I am not sure what Patten is referring to when he talks about another respected international survey. It is certainly not the Shanghai rankings which have Oxford in tenth place and Cambridge in second overall by virtue of long dead Nobel laureates and much lower down by more contemporary criteria.

As for being respected, while the THES-QS rankings are avidly followed in Australia and Southeast and East Asia and routinely used in advertising by British universities, they are usually ignored or politely dismissed by American schools. Washington University in St Louis has apparently not even noticed that QS think that they have done almost no research at all over the last few years.

In fact, even QS does not provide much evidence that Oxford is a world-beater. In 2006 it did extremely well on the peer review and very well on the recruiter review but posted a mediocre performance on everything else, especially research as measured by citations per faculty, where it was 63rd, behind the Hebrew University of Jerusalem, the Tokyo Institute of Technology and the University of Naples 2.

In fact, it isn’t that Oxford is that bad at research but that QS apparently inflated the number of faculty to get a good student faculty ratio at the cost of an unrealistically bad score for research. Still, it looks as though for research Oxford is now trailing around the middle to bottom of the Ivy League.

And this year? QS has introduced a new scoring system that in effect compresses scores at the top. So Oxford did well for nearly everything, with 100 for peer review, recruiter review and student faculty ratio, 97 for international faculty and 96 for international students. This does not mean very much. The better universities now get high marks for just about everything. So does Oxford, but again, according to QS, it lags behind on citations per faculty, in 85th place behind Colorado State University, Showa University and Georgia Institute of Technology.

Again, the problem probably is not that Oxford researchers are doing little research or not getting cited enough but that QS is using an inflated faculty figure.

Still it seems clear that Oxford’s position in the rankings is derived from a dubious “peer review” and from a scoring system that blurs differences at the top of the scale. It is not a result of measured research excellence. The THES-QS rankings are simply covering up the relative decline of Oxford and Cambridge.