Friday, August 10, 2007

Correcting Errors

Another site worth reading on law school rankings is MoneyLaw. There is an interesting post there on US News and World Report's correction policy. It seems that if USNWR makes a mistake, the rankings are corrected; but if the school is responsible for the error, the underlying data are corrected while the rankings are not.

MoneyLaw comments:

"I wouldn't call that grossly unfair. Academic research would perhaps demand more attention to setting the record straight, granted. But USN&WR's rankings hardly constitute academic research."

I would add that this policy seems dramatically better than that of most other rankings.

Self-assessment by American universities

State universities and colleges have come up with a plan to publish essential information on their web sites.

The National Association of State Universities and Land-Grant Colleges and the American Association of State Colleges and Universities are, according to the Wall Street Journal,

" designing a template for college Web sites that, for those that opt to use it, shows in standard format: (1) details about admission rates, costs and graduation rates to make comparisons simple; (2) results from surveys of students designed to measure satisfaction and engagement, and (3) results of tests given to a representative sample of students to gauge not how smart they were when they arrived, but how much they learned about writing, analysis and problem-solving between freshman and senior years.

The last one is the biggie. Participating schools will use one of three tests to gauge the performance of students with similar entering SAT scores at tasks that any college grad ought to be able to handle. One test, the Collegiate Learning Assessment, gives students some circumstance and a variety of information about it, and asks for short essays (no multiple choice) on solving a problem or analyzing a scenario. Under the state schools' proposed grading scale, 70% of the schools will report that students did "as expected," given their SATs. An additional 15% will report they did better or much better than expected, and 15% will report students did worse or much worse than expected."


This seems like a good idea. It could even go some way towards making commercial rankings redundant.


Saturday, August 04, 2007

The Wuhan Rankings

I have just noticed, via Wikipedia, a world university ranking by the Research Centre for China Science Evaluation at Wuhan University that seems to be based on current research productivity. Since the website does not have an English version, it is not possible to say very much about it at the moment. According to Wikipedia, it "is based on Essential Science Indicators (ESI), which provides data of journal article publication counts and citation frequencies in over 11,000 journals around the world in 22 research fields". If anyone can look at the website and tell me what the research fields are and what period is covered, I'd be grateful.

I noticed some errors in the rankings. Laval University is placed in France rather than Canada, York in the US rather than Canada, and Bern in Sweden rather than Switzerland. Ljubljana is listed as being in "Jugoslavia", which is a few years out of date.

If the rankers have assessed a broad range of subjects, if they have looked at a recent period, and if their methods are valid, they may have produced a ranking of research achievement that is more current than the Shanghai index, which includes decades-old Nobel and Fields prize winners. The ranking gives low positions to Cambridge and Oxford, confirming suspicions that their high rating by THES-QS is unjustified. Princeton and Yale (strengths in the humanities?) have relatively low places. So do Chicago (strength in the social sciences?) and Caltech.

There are some more surprises. Texas is at number 2. Maybe this represents a genuine advance, or perhaps the presence of a large medical school has something to do with it. "Univ Washington" is at number 3. This most probably means Washington University in St Louis. Before getting too excited about this result, I would like to be sure that there has been no confusion with the University of Washington, Washington State University and George Washington University.



Here is the top 20 along with the scores. Harvard, at the top, gets 100. The full ranking can be found here.



1. Harvard 100

2. Texas 87.49

3. "Univ Washington" 72.39

4. Stanford 71.91

5. Johns Hopkins 71.45

6. UC Berkeley 70.76

7. UCLA 70.38

8. Michigan 69.11

9. MIT 68.62

10. Toronto 66.90

11. Wisconsin 64.83

12. Columbia 64.71

13. UC San Diego 64.54

14. Pennsylvania 64.42

15. Cambridge 62.93

16. Minnesota 62.80

17. Yale 62.20

18. Cornell 62.19

19. UC San Francisco 61.52

20. Duke 60.60

Friday, August 03, 2007

International Students in Malaysia

QS Quacquarelli Symonds seems to have bad luck with international students in Malaysia. Their topuniversities site has a piece on "Study Abroad in Malaysia" which states:

"On the back of its enduring economic and industrial boom, Malaysia is trying hard to position itself as the Asian destination of choice for international students seeking to study abroad, and with some success. Currently there are around 50,000 students from 100 countries in Malaysian tertiary education, forming 20-30% of the student body - and the country wants to promote a multicultural image that reflects the country itself. "


The total number of registered students in tertiary education in Malaysia is in fact about 732,000. International students therefore make up well under ten per cent of tertiary students (50,000 out of 732,000 is roughly 6.8 per cent), not 20-30%.

Rankings Not a Target


According to the Kuala Lumpur New Straits Times, the vice-chancellor of Universiti Malaya has said that the university's international ranking

"should not be a target. Instead, UM’s main aim was to produce
quality work, she added"

Wednesday, August 01, 2007

Problems with Law School Rankings


Another blog worth looking at is Agoraphilia which, among other things, has posts on the US News and World Report law school rankings. One of them deals with the University of Florida receiving an erroneous and over-favourable rating from USN&WR, apparently because the school reported LSAT scores and GPAs only for the fall 2005 intake and did not include those for the spring intake.


What most impresses me about this is that the Dean and Associate Dean of Florida's law school and Robert J. Morse, Director of Data Research at USN&WR, have replied promptly and at length to questions about ranking methods.

Friday, July 27, 2007

Research Guide: Educational Rankings



This is a very good page produced by Boston College with links to sites and articles on university rankings. For a start take a look at 'Playing with Numbers' by Nicholas Thompson.

Wednesday, July 25, 2007

A Mystery Solved

One of the more interesting elements in the Guide to the World's Top Universities by John O'Leary, Nunzio Quacquarelli and Martin Ince, published by QS Quacquarelli Symonds at the end of 2006, is the information about student faculty ratio provided in the directory of over 500 universities and the profiles of the world's top 100 universities.

These are, even at first sight, not plausible: 590.30 students per faculty at Pretoria, 43.30 at Colorado State University, 18.10 at Harvard, 3.50 at Dublin Institute of Technology.

Scepticism increases when the Guide's data for student-faculty ratio are correlated with the ratios derived from the scores out of 100 for this measure in the 2006 rankings, cross-checked against the data on individual universities on QS's topuniversities site. The correlation for 517 universities is negligible at .057 and statistically insignificant (two-tailed p = .195).

Comparing the two sets of data on student-faculty ratio for the British universities in the rankings shows that the problem is with the information in the Guide, not that in the rankings. The rankings data correlate highly with the figures provided by the Higher Education Statistics Agency (HESA: see earlier post) (.712, sig = .000) and with those taken from the website williseemytutor (.812, sig = .000). There is no significant correlation between the data in the Guide and either the HESA data (.133, sig = .389) or the williseemytutor data (.179, sig = .250).
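For anyone who wants to reproduce this kind of check, here is a minimal sketch in Python (the figures above came from SPSS; scipy's pearsonr computes the same statistic). The numbers below are invented placeholders, not the actual 517-university dataset.

```python
# A minimal sketch of the correlation check described above, using
# invented placeholder figures rather than the actual dataset.
from scipy.stats import pearsonr

# Student-faculty ratios for the same universities from two sources:
# the Guide's directory and the ratios implied by the rankings scores.
guide_ratios = [3.5, 18.1, 43.3, 12.0, 9.9, 590.3]
ranking_ratios = [3.5, 6.2, 5.5, 14.0, 20.6, 11.4]

r, p = pearsonr(guide_ratios, ranking_ratios)
print(f"r = {r:.3f}, two-tailed p = {p:.3f}")
# A near-zero r with a large p, like the .057 (p = .195) reported
# above, would mean the two sets of figures are essentially unrelated.
```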

So, where did the Guide's student faculty data come from?

First, here are the most favourable student-faculty ratios calculated from the scores in the rankings (they can be cross-checked at the topuniversities site), rounded to one decimal place.

Duke 3.5

Yale 3.7

Eindhoven University of Technology 3.8

Rochester 3.8

London Imperial College 3.9

Paris Sciences Po 4.0

Tsing Hua 4.1

Emory 4.1

Geneva 4.3

Vanderbilt 4.3


Now, here are the most favourable ratios given in the Guide.

Dublin Institute of Technology 3.5

Wollongong 3.7

Ecole Polytechnique 3.8

Rio de Janeiro 3.8

Ljubljana 3.9

Oulu 4.0

Trento 4.1

Edinburgh 4.1

Fudan 4.3

Utrecht 4.3


Notice that the ratio of 3.5 is assigned to Duke University in the rankings and to Dublin IT in the Guide. If the universities were arranged alphabetically, these two would be in adjacent rows. Likewise, the other scores listed above are assigned to universities that would be next to each other, or nearly so, in an alphabetical listing.

Next are the least favourable ratios derived from the rankings data.

Pune 580

Delhi 316

Tor Vergata 53

Bologna 51

Cairo 49

Concordia 42


Now the ratios in the Guide.

Pretoria 590

De La Salle 319

RMIT 53

Bilkent 51

Bucharest 49

Colorado 42

Notice again that, except for Tor Vergata and RMIT, the ratio in the two data sets is shared by universities that are close or next to each other alphabetically.

The conclusion is unavoidable. When the Guide was being prepared, somebody created a new file and made a mistake, slipping down one, two or a few rows and pasting the rankings data into the wrong rows. As a result, every university in the Guide's directory acquired a new and erroneous student-faculty ratio.
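A toy sketch, with invented names and numbers, shows how a single-row slip in an alphabetically ordered spreadsheet would produce exactly this pattern:

```python
# Toy illustration of the suspected error: ratios pasted one row off
# against an alphabetical list of universities. All values invented.
universities = ["Dublin Institute of Technology", "Duke", "Durham"]
ratios = [14.2, 3.5, 8.0]  # correct alignment: Duke really has 3.5

shifted = ratios[1:] + ratios[:1]  # every value slips up one row
for university, ratio in zip(universities, shifted):
    print(university, ratio)
# Dublin Institute of Technology now shows Duke's 3.5, just as the
# Guide assigns the rankings' ratios to alphabetically adjacent rows.
```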

Since this piece of information is the one most likely to interest future undergraduate students, this is not a trivial error.

Is this error any less serious than QS's getting the two North Carolina business schools mixed up?

Sunday, July 22, 2007

What is a Peer Review?

The third Asia Pacific Professional Leaders in Education conference was held in Hong Kong recently. The conference was organised by QS Quacquarelli Symonds (QS), consultants for the THES rankings, and a substantial part of the proceedings seems to have been concerned with international university rankings. There is a report by Karen Chapman in the Kuala Lumpur Star. There are hints that the methods of the THES-QS rankings may be revised and improved this year. The QS head of research, Ben Sowter, has referred to a revision of the questionnaires and to an audit and validation of information. Perhaps the deficiencies of previous rankings will be corrected.

There is also a reference to a presentation by John O'Leary, former editor of the THES, who is reported as saying that

“Peer review is the centrepiece of the rankings as that is the way academic value is measured.”

The second part of this sentence is correct but conventional peer review in scientific and academic research is totally different from the survey that is the centrepiece of the THES rankings.

Peer review means that research is scrutinised by researchers who have been recognised as authorities in a narrowly defined research field. However, inclusion in the THES-QS survey of academic opinion has so far required no more expertise than the ability to sign on to the mailing list of World Scientific, a Singapore-based academic publisher. Those who are surveyed by QS are, in effect, allowed to give their opinions about subjects of which they may know absolutely nothing. Possibly, the reference to redesigning the survey means that it will become more like a genuine peer review.

It cannot be stressed too strongly or repeated too often that, on the basis of the information released so far by QS, the THES-QS survey is not a peer review.




The Consequences of Ranking

There is an excellent post by Eric Beerkens at Beerkens' Blog reporting on an article by Wendy Nelson Espeland and Michael Sauder in the American Journal of Sociology. The article, 'Rankings and reactivity: How public measures recreate social worlds', describes how the law school rankings of the US News and World Report affect the behaviour of students, university administrators and others.

Beerkens argues that international university rankings also have several consequences:

1. Rankings affect external audiences. Trivial differences between institutions may lead to large differences in the quality and quantity of applicants.

2. Rankings may amplify differences in reputation. If researchers or administrators are asked to assess universities of which they have no knowledge, they are likely to rely on the results of previous rankings.

3. Resources such as grants may be distributed on the basis of rankings.

4. Universities may give up objectives that are not measured in rankings and try to become more like those that achieve high scores.

Saturday, July 21, 2007

Blog on University Rankings

There is a Spanish-language blog on university rankings and other academic matters by Alejandro Pisanty that is well worth looking at.

Tuesday, July 17, 2007

Somebody Else Has Noticed

Matt Rayner has posted an interesting question on the QS topuniversities site. He has noticed that in the Guide to the World's Top Universities, published by QS, Cambridge is supposed to have a student faculty ratio of 18.9 and a score of 64 for this part of the 2006 World Rankings while Glasgow, with an almost identical ratio of 18.8, gets a score of 35.

As already noted, this anomaly is not confined to Cambridge and Glasgow. The student faculty ratios provided in the data about individual universities in the Guide are completely different from those given in the rankings.

There is in fact no significant relationship between the two sets of data, as a quick correlation in SPSS will show.

It will be even more interesting to see when and how QS replies to Matt's question.

Sunday, May 13, 2007

The University of Santo Tomas

Varsitarian, the newspaper of the University of Santo Tomas (UST) in the Philippines, has published an article questioning the credibility of the THES-QS world university rankings.

The complaint appears to be valid although the newspaper makes several errors about the rankings.

Alberto Laurito, assistant to the rector for planning and development at UST, has claimed that QS got the number of students wrong. The consultants reported 11,764 students whereas the correct number is 32,971. The university's figure seems to be correct: an article by Guzman and Torres in the Asia Pacific Education Review reports 32,322 students in 2002-3. However, QS's deflating of student numbers, if it were the only mistake, would work to UST's advantage in a number of ways. Firstly, fewer students mean fewer students per faculty, if the number of the latter is constant, and hence a better score on the student-faculty ratio component of the rankings. Secondly, if the number of international students is the same, fewer students overall means a higher percentage of international students.

However, this is not QS's only error. They report that UST has 524 faculty, making a student-faculty ratio of 22.45. According to the article, in 2002-3 UST had 1,500 faculty. With 32,322 students, this would mean a student-faculty ratio of 21.55. QS has made two errors, and they have pretty much cancelled each other out.
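The arithmetic is easy to verify. A few lines of Python, using the figures quoted above, show how neatly the two errors offset each other:

```python
# QS's reported figures versus UST's own figures (Guzman and Torres,
# 2002-3). Both yield almost the same student-faculty ratio.
qs_students, qs_faculty = 11_764, 524
ust_students, ust_faculty = 32_322, 1_500

print(round(qs_students / qs_faculty, 2))    # 22.45 with QS's numbers
print(round(ust_students / ust_faculty, 2))  # 21.55 with UST's numbers
# Two large errors in opposite directions nearly cancel out.
```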

Laurito then complained:

that THES-QS research on peer review was also irregular, considering that it was worth 40 per cent of the entire survey when only 1,600 universities turned in their responses or about one per cent of the 190,000 needed

The low response rate does of course invalidate the “peer review” but it was individual academics who were surveyed, not universities.

Laurito then points out that UST got a zero for research citations:

The score is obtained through a research collation database maintained by Thomson, an information-based solutions provider, called Essential Science Indicators (ESI). For every citation given to a university researcher or professor, the university would acquire a point.

The procedure is not like this at all. Laurito continues:

Based also on the survey, UST received the lowest grade on international outlook (meaning UST has no international students or faculty) when the University actually has seven international professors and 300 international students.

Again, not quite. UST gets a score from QS of 3.6 for international faculty and 0.6 for international students, representing 12 international faculty members and 47 international students.

Laurito has got the wrong end of several sticks but the basic point still remains that QS got the data for students, faculty and international students wrong.

The newspaper then quotes Laurito as saying:

We were told by the research representative (of THES-QS) that the data they used were personally given to them by a University personnel, but they were not able to present who or from what office it came from

If Laurito is reported correctly and if this is what the “research representative” told him, there is something very strange here.

If QS have a documentary record of an e-mail or a phone call to UST, how could the record not indicate the person or office involved?

If they do not, how can QS be sure that the information came from an official university source or that there was any contact at all?

Friday, May 11, 2007

More about Student-Faculty Ratios

I have just discovered a very good site by Ben Wilbrink, Prestatie-indicatoren (performance indicators). He starts off with "Een fantastisch document voor de kick-off" ("a fantastic document for the kick-off"), referring to a monograph by Sharon L. Nichols and David C. Berliner (2005), The Inevitable Corruption of Indicators and Educators Through High-Stakes Testing, Education Policy Studies Laboratory, Arizona State University (pdf, 180 pp.).

The summary of this study reports that:

"This research provides lengthy proof of a principle of social science known as Campbell's law: "The more any quantitative social indicator is used for social decisionmaking, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor." "

This insight might well be applied to current university ranking systems. We have seen, for example, some US universities making it optional for applicants to submit their SAT results. It is predictable that good scores will be submitted to admissions officers, but not bad ones. Universities will then find that the average scores of their applicants will rise and therefore so will their scores on rankings that include SAT data.

I would like to propose a new law, an inversion of Gresham's. Good scores drive out bad.

Wilbrink has some good comments on the THES-QS rankings but I would like to focus on what he says about the student-faculty ratio.

"The faculty/student score (20%)The scores in this rubric are remarkable, to say the least. I do not think the student/staff ratio is less reliable than the other indicators, yet the relation to the world rank score seems to be nil. The first place is for (13) Duke, the second for (4=) Yale, the third for (67) Eindhoven University of Technology. Watch who have not made it here in the top twenty: Cambridge is 27th, Oxford 31st, Harvard 37th, Stanford 119, Berkeley 158. This is one more illustration that universities fiercely competing for prestige (see Brewer et al.) tend to let their students pay at least part of the bill.
"We measure teaching by the classic criterion of staff-to-student ratio." Now this is asking for trouble, as Ince is well aware of. Who is a student, who is a teacher? In the medieval universities these were activities, not persons. Is it much different nowadays? How much? ...


Every administration will creatively fill out the THES/QS forms asking them for the figures on students and teachers, this much is absolutely certain. If only because they will be convinced other administrations will do so. Ince does not mention any counter-measure, hopefully the THES/QS people have a secret plan to detect fraudulent data."

It is possible to test whether Wilbrink's remarks apply to the student-faculty scores in the 2006 THES-QS rankings. THES have published a table of student-faculty ratios at British universities, compiled by the University and College Union from Higher Education Statistics Agency (HESA) data. These figures include further education students and exclude research-only staff. They can be compared with the data in the THES-QS rankings.


In 2006 QS reported that the top scorer for student-faculty ratio was Duke. Looking at QS's website, we find that this represents a ratio of 3.48 students per faculty. Cross-checking shows that QS used the data on their site to construct the scores in the 2006 rankings. Thus, the site reports that Harvard had 3,997 faculty and 24,648 students, a ratio of 6.17 students per faculty; ICL 3,090 faculty and 12,185 students, a ratio of 3.94; Peking 5,381 faculty and 26,972 students, a ratio of 5.01; and Cambridge 3,886 faculty and 21,290 students, a ratio of 5.48. These ratios yielded scores of 56, 88, 69 and 64 on the student-faculty component of the 2006 rankings.


Now we can compare the QS data with those from HESA for the period 2005-06, which presumably is the period covered by the rankings. If Wilbrink is correct, we would expect the ratios in the rankings to be much lower, and more favourable, than those provided by HESA.

That is in fact the case. Seven British universities have lower ratios in the HESA statistics: Cranfield, Lancaster, Warwick, Belfast, Swansea, Strathclyde and Goldsmith's College. In 35 cases the THES-QS figure was much better. The most noticeable differences were ICL, 3.95 and 9.9; Cambridge, 5.48 and 12.30; Oxford, 5.70 and 11.9; LSE, 6.57 and 13; Swansea, 8.49 and 15.1; and Edinburgh, 8.29 and 14.

It is possible that the differences are the result of different but consistent and principled conventions: one set of data might specifically include people excluded by the other. The HESA data, for example, include further education students, presumably meaning non-degree students, while the THES-QS data apparently do not. This would not, however, seem to account for much of the gap at places like Oxford and LSE.

Both HESA and QS claim not to count staff engaged only in research.

It is possible, then, that the data provided by universities to QS have been massaged a bit to give favourable scores. I suspect that this does not amount to deliberate lying. It is probably more a case of choosing the most beneficial option whenever there is any ambiguity.

Overall, the ratios provided by QS are much lower: an average of 11.37 compared with 14.63 for HESA.
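The comparison is straightforward to reproduce. Here is a sketch using the six pairs of ratios quoted above; the full comparison, of course, covered the whole set of British universities in the rankings:

```python
# QS ratio versus HESA ratio for the six universities quoted above.
pairs = {
    "ICL": (3.95, 9.9),
    "Cambridge": (5.48, 12.3),
    "Oxford": (5.70, 11.9),
    "LSE": (6.57, 13.0),
    "Swansea": (8.49, 15.1),
    "Edinburgh": (8.29, 14.0),
}
qs_mean = sum(qs for qs, _ in pairs.values()) / len(pairs)
hesa_mean = sum(hesa for _, hesa in pairs.values()) / len(pairs)
print(f"QS mean: {qs_mean:.2f}, HESA mean: {hesa_mean:.2f}")
# Over the full sample the means were 11.37 (QS) and 14.63 (HESA),
# a gap consistent with flattering returns to QS.
```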

Wednesday, May 09, 2007

Another Comment on QS and Kenan-Flagler

A blog by MBA student Shawn Snyder remarks:

"So CNN recently published its "Top 50 Business Schools to get Hired in 2007" and I was glad to see Maryland's Smith school listed, but I was confused to see the George Washington University right above Smith. After all, by their own ranking the GW grads had one less job offer and starting salary almost $10,000 lower. Umm, maybe recruiters think that George Washington is a better deal because they can snag grads for cheap, but from a business student perspective (the people reading the rankings) wouldn't Smith be the better choice? And why wouldn't it rank higher? Business rankings are crap in my opinion....and yet I still read all of them as if it matters. Maybe I have the problem."


And there is a comment by Dave:


" I too noticed some discrepancies in the ratings on CNN.com. Specifically, UNC Kenan-Flagler is not in the top 50! I dug a bit deeper and looked at the data from topmba.com - the website where the list came from - and found some startling errors. UNC KFBS average salary is listed as $76k when the actual average is $89k! I wrote a letter to TopMBA.com and found that not only did they screw up the salaries, but they did not distinguish between University of North Carolina and North Carolina State U in the recruiter rankings! It's really incredible the garbage that these people are allowed to print. What ever happened to 'trust but verify'?"
More on QS and Kenan-Flagler


There is an interesting post at Accepted Admissions Almanac about the QS-Kenan-Flagler affair. The writer remarks:


"It's safe to say that this mess is a nightmare for QS, CNNMoney, and Fortune. Providing and publishing rankings so sloppily slapped together is beneath criticism for an industry that even when the data is accurate has more than its share of critics and is deserving of skepticism. The CNNMoney/QS fiasco is about as bad as it gets for rankings."


I am afraid that it gets very much worse for QS. They have made errors as bad as this in the compilation of the THES-QS World University Rankings: a response rate of less than one per cent to an online survey, counting ethnic minority students in Malaysia as international students, renaming Peking University "Beijing University", boosting Duke University's score for student-faculty ratio by counting undergraduates as faculty, and so on.

But nobody seems to mind very much when it comes to the THES rankings. Is it something about the brand name?

The post concludes with a very appropriate comment:

"When accurate, unlike the removed QS/CNNMoney version, they are sources of information. Sometimes valuable information. Databanks. I use the data, and so should you. If you want to know the average salaries of graduates from particular schools or their average entering test scores, the rankings will have that information compiled in one place. Like a library, they are sources of information. They are not an excuse for decision-making; using them mindlessly could be the equivalent of a lobotomy. And an expensive one at that."
Best Value Colleges

The Princeton Review (registration required) has published a list of the best value colleges in the US.



Here is what they say about their methodology:

"We chose the schools that appear on our Top Ten Best Value Public and Private Colleges ranking lists based on institutional data we collected from more than 650 schools during the 2005-2006 academic year and our surveys of students attending them. Broadly speaking, the factors we weighed covered undergraduate academics, costs, and financial aid.

More specifically, academic factors included the quality of students the schools attracted, as measured by admissions credentials, as well as how students rated their academic experiences. Cost considerations were tuition, room and board, and required fees.

Financial aid factors included the average gift aid (grants, scholarships, or free money) awarded to students, the average percentage of financial need met for students who demonstrated need, the percentage of students with financial need whose need was fully met by the school, the percentage of graduating students who took out loans to pay for school, and the average debt of those students. We also took into consideration how satisfied students were with the financial aid packages they received."



There are a few questions that should be asked about the methodology, especially concerning the student surveys, but this approach may be more useful for undergraduate students than that of the THES-QS and Shanghai Jiao Tong rankings.

The top 10 best value private colleges for undergraduates are:

1. Rice University
2. Williams College
3. Grinnell College
4. Swarthmore College
5. Thomas Aquinas College
6. Wabash College
7. Whitman College
8. Amherst College
9. Scripps College
10. Harvard College


The top 10 best value public colleges are:

1. New College of Florida
2. Truman State University
3. University of North Carolina at Asheville
4. University of Virginia
5. University of California at Berkeley
6. University of California at San Diego
7. University of California at Santa Cruz
8. University of Minnesota, Morris
9. University of Wisconsin-Madison
10. St. Mary's College of Maryland

Thursday, May 03, 2007

‘again!?’ Yep... Quacquarelli Symonds Ltd (QS) did it again.


Eric Beerkens at Beerkens' Blog has written some excellent posts on the internationalisation of higher education.

A recent one concerns QS Quacquarelli Symonds Ltd (QS), who were responsible for collecting data for a ranking of business schools by Fortune magazine. It seems that QS committed a major blunder by leaving out the Kenan-Flagler Business School at the University of North Carolina at Chapel Hill, one of the top American business schools and one that regularly appears among the high fliers in other business school rankings. Apparently QS confused it with North Carolina State University’s College of Management. They also left out the Boston University School of Business. Beerkens refers to an article in the Economist (subscription required) and remarks:

“After reading the first line, I thought: 'again!?' Yep... Quacquarelli Symonds Ltd (QS) did it again.”

Beerkens then points out that this is not the first time that QS has produced flawed research, referring – for which many thanks – to this blog and others. He concludes:

“It's rather disappointing that reputable publications like THES and Forbes use the services of companies like QS. QS clearly doesn't have any clue about the global academic market and has no understanding of the impact that their rankings are having throughout the world. There has been a lot of critique about the indicators that they use, but at least we can see these indicators. It are the mistakes and the biases that are behind the indicators that make it unacceptable!”


There was a vigorous response from the University of North Carolina. They pointed out that QS had admitted to not contacting the university about the rankings, using outdated information, and mixing up the University of North Carolina with North Carolina State University. QS did not employ any proper procedures for verification and validation, apparently failed to check against other rankings, gave wrong or outdated information about salaries, and presented data from 2004 or 2005 as if they referred to 2006.

Fortune has done the appropriate and honest, although probably expensive, thing and removed the rankings from its website.

What is remarkable about this is the contrast between Fortune and the THES. All of the errors committed by QS with regard to the Fortune rankings are paralleled in the World University Rankings. They have, for example, grossly inflated the scores of the Ecole Normale Superieure in Paris in 2004 and the Ecole Polytechnique in 2005 by counting part-time faculty as full-time; done the same for Duke University – QS does seem to have bad luck in North Carolina, doesn’t it? – in 2005 by counting undergraduate students as faculty and in 2006 by counting faculty twice; used a database from a Singapore-based academic publisher that specialises in Asia-Pacific publications to conduct a survey meant to represent world academic opinion; conducted that survey with an apparent response rate of less than one per cent; and got the names of universities wrong – Beijing University and the Official University of California, among others.

It is probably unrealistic for THES to remove the rankings from its website. Still, they could at the very least start looking around for another consultant.
Book Review

This is a draft of a review that may appear shortly in an academic journal.

Guide to the World’s Top Universities, John O’Leary, Nunzio Quacquarelli and Martin Ince. QS Quacquarelli Symonds Ltd.: London. 2006.


The THES (Times Higher Education Supplement)-QS World University Rankings have aroused massive interest throughout the world of higher education, nowhere more so than in East and Southeast Asia. Very few university teachers and administrators in the region can be unaware of the apparent dramatic collapse of quality at Universiti Malaya, which was in fact nothing of the sort. That this resulted from nothing more than an error by THES’s consultants and its belated correction has done little to diminish public fascination.

Now, QS Quacquarelli Symonds, the consultants who compiled the data for the rankings, have published a large 512-page volume. The book, written by John O’Leary and Martin Ince of THES and Nunzio Quacquarelli of QS, comes with impressive endorsements. It is published in association with IELTS, TOEFL and ETS, names that quite a few Asian students and teachers will know, and is distributed by Blackwell Publishing of Oxford. At the top of the front cover there is a quotation from Tim Rogers, former Head of Student Recruitment and Admissions, London School of Economics: “A must-have book for anyone seeking a quality university education at home and abroad.” Tim Rogers, by the way, has been a consultant for QS.

The Guide to the World’s Top Universities certainly contains a large amount of material. There are thirteen chapters as follows.

1. Welcome to the world’s first top university guide
2. Ranking the world’s universities
3. How to choose a university and course
4. The benefits of studying abroad
5. What career? Benefits of a top degree
6. Tips for applying to university
7. What parents need to know – a guide to study costs and more
8. Financing and scholarships
9. The world’s top 200 universities. This is the ranking that was published last year in the THES.
10. The world’s top universities by subject. This was also published in the THES.
11. The top 100 university profiles. This provides two pages of information about each university.
12. The top ten countries
13. Directory of over 500 top world universities

Basically, there are two parts. The earlier chapters mostly consist of advice that is generally interesting, well written and sensible. Later, we have data about various characteristics of the universities, often ranking them in order. The latter comprise much of the book. The profiles of the top 100 universities take up 200 pages and the directory of 500 plus universities another 140.

So, is this a must-have book? At £19.99, $35.95 or €28.50, the answer has to be: not really. It may be worth glancing through the earlier advisory chapters, but as a source of information and evaluation the book is not worth the money. First of all, there are serious problems with the information presented in the rankings, the profiles and the directory. The book’s credibility is undermined by a succession of errors, indicating an unacceptable degree of carelessness. At 35 dollars or 20 pounds we surely have the right to expect something a little better, especially from the producers of what is supposed to be “the gold standard” of university rankings.

Thus we find that the Technical University of Munich appears twice in the profiles, in positions 82 (page 283) and 98 (page 313). The latter should be the University of Munich. In the directory the University of Munich is provided with an address in Dortmund (page 407). The Technical University of Helsinki is listed twice in the directory (pages 388 and 389). A number of Swiss universities are located in Sweden (pages 462 and 463). The authors cannot decide whether there is only one Indian Institute of Technology and one Indian Institute of Management (page 416) or several (pages 231 and 253). New Zealand is spelt ‘New Zeland’ (page 441). The profile for Harvard repeats the same information in the factfile under two different headings (page 119). There is something called the ‘Official University of California, Riverside’ on page 483. Kyungpook National University in Korea has a student faculty ratio of zero (page 452). Something that is particularly irritating is that the authors or their assistants still cannot get the names of Malaysian universities right. So we find ‘University Putra Malaysia’ on page 435 and ‘University Sains Malaysia’ on page 436. After that famous blunder about Universiti Malaya’s international students and faculty, one would expect the authors to be a bit more careful.

Still, we must give some credit. At least the book has at last started to use the right name for China’s best or second best university – Peking University, not Beijing University -- and ‘University of Kebangsaan Malaysia’ in the 2006 rankings in the THES has now been corrected to ‘Universiti Kebangsaan Malaysia’.

The Guide really gets confusing, to put it mildly, when it comes to the number of students and faculty. A perceptive observer will note that the data for student-faculty ratio in the top 200 rankings reproduced in chapter 9 is completely different from those in the profiles in chapter 11 and the directory in chapter 13.

For example, in the rankings Duke University, in North Carolina, is given a score of 100, indicating the best student faculty ratio. Going to QS’s topuniversities website we find that Duke supposedly has 11,106 students and 3,192 faculty, representing a ratio of 3.48 students per faculty. But then we turn to the profile and see that Duke is assigned a ratio of 16.7 students per faculty (page 143). On the same page we are told that Duke has 6,301 undergraduates and 4,805 postgraduates and “just under 1,600 faculty”. That makes a ratio of about 6.94. So, Duke has 3.48 or 6.94 or 16.7 students per faculty. Not very helpful.

Looking at Yale University, the book tells us on a single page (127) both that the student-faculty ratio is 34.3 and that the university has “around 10,000 students” and 3,333 faculty, a ratio of three students for each faculty member.

On page 209 we are told that the University of Auckland has a student-faculty ratio of 13.5 and, in the adjacent column, that it has 2,000 academic staff and 41,209 students, a ratio of 20.6. Meanwhile, the top 200 rankings give it a faculty-student score of 38, which works out at a ratio of 9.2. So, take your pick from 9.2, 13.5 and 20.6.

The data for research expertise are also contradictory. Universities in Australia and China get excellent scores for the “peer review” of research in the rankings of the top 200 universities in chapter 9, but relatively poor scores for research impact. Less glamorous American universities like Boston and Pittsburgh get comparatively low scores on the peer review but actually do excellent research.

Errors and contradictions like these seriously diminish the book’s value as a source of information.

It would not be a good idea to buy this book, although it might be worth looking at the early chapters if you can borrow it from a library. To judge the overall global status of a university, the best bet is to look east and turn to the Shanghai Jiao Tong University Index, available on the Internet, which ranks the top 500 universities. This index focuses entirely on research, but there is usually at least a modest relationship between research activity and other variables such as the quality of the undergraduate student intake and teaching performance. Those thinking about going to the US should look at the US News and World Report’s America’s Best Colleges. Anyone concerned about costs – who isn’t? – should look at Kiplinger’s Index, which calculates the value for money of American universities. Incidentally, fifth place here goes to the State University of New York at Binghamton, which is not even mentioned in the Guide. The Times (which is not the same as the Times Higher Education Supplement) and Guardian rankings are good for British universities.

Students who are not certain about going abroad or who are thinking about going to a less well known local institution could try doing a Google Scholar search for evidence of research proficiency and a Yahoo search for miscellaneous activity. Whatever you do, it is not a good idea to rely on any one source alone and certainly not this one.

Friday, April 27, 2007

How Rankings Lead to a Decline in Quality

Geoffrey Alderman, currently a visiting research fellow at the Institute of Historical Research, University of London, has an article in the Guardian about the decline of standards in British universities. He refers to a case at Bournemouth University where officials overrode a decision by a professor and an examination board to fail thirteen students. Apparently, the officials thought it unreasonable that students should be required to do any reading to pass the course. He also comments on the remarkable increase in the number of first-class degrees at the University of Liverpool. Professor Alderman is clear that part of the problem lies in the current obsession with rankings:


"Part of the answer lies in the league-table culture that now permeates the sector. The more firsts and upper seconds a university awards, the higher its ranking is likely to be. So each university looks closely at the grading criteria used by its league-table near rivals, and if they are found to be using more lenient grading schemes, the argument is put about that "peer" institutions must do the same. The upholding of academic standards is thus replaced by a grotesque "bidding" game, in which standards are inevitably sacrificed on the alter of public image - as reflected in newspaper rankings."

Similarly, it seems that in the US large numbers of students are being pushed through universities for no other reason than to improve graduation rates and therefore scores on the US News and World Report rankings.

Tuesday, April 24, 2007

Comparison of the THES-QS "Peer Review" and Citations per Faculty Scores

QS Quacquarelli Symonds, the consultants responsible for the THES-QS World University Rankings, have now placed data for 540 universities, complete with scores for the various components, on their topuniversities website (registration required). This reveals more dramatically than before the disparity between the scores of some universities on the “peer review” and their scores for citations per faculty, a measure of research quality. Below are the top 20 universities in the world according to the THES-QS “peer review” by research-active academics, who were asked to select the universities that are best for research. In round brackets to the right is each university’s position in the 2006 rankings according to the number of citations per faculty.

Notice that some universities, including Sydney, Melbourne, Australian National University and the National University of Singapore perform dramatically better on the peer review than on citations per faculty. Melbourne, rated the tenth best university in the world for research by the THES-QS peer reviewers, is 189th for citations per faculty while the National University of Singapore, in twelfth place for the peer review, comes in at 170th for citations per faculty. The most devastating disparity is for Peking University, 11th on the “peer review” and 352nd for citations, behind, among others, Catania, Brunel, Sao Paulo, Strathclyde and Jyväskylä. Once again, this raises the question of how universities whose research is regarded so lightly by other researchers could be voted among the best for research. Oxford, Cambridge and Imperial College London are substantially overrated by the peer review. Kyoto is somewhat overrated while the American universities, with the exception of Chicago, have roughly the same place that would be indicated by the citations per faculty position.

Of course, part of the problem could lie with the citations per faculty measure. I am fairly confident that the data for citations, which are collected by Evidence Ltd, are accurate, but I am less certain about the number of faculty. I have noted already that if a university improves its score for student-faculty ratio by inflating the number of faculty, it will suffer a drop in its citations per faculty score. For most universities the trade-off would be worth it, since the gap between the very good and the good is much greater for citations than for student-faculty ratio. So, if there has been some inflating of faculty numbers, however and by whomever it was done, then this would have an adverse impact on the figures for citations per faculty.
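A toy example with invented numbers makes the trade-off concrete:

```python
# Invented numbers: doubling the reported faculty halves both ratios,
# improving the student-faculty measure but hurting citations per
# faculty at the same time.
students, citations = 20_000, 50_000

for faculty in (1_000, 2_000):
    print(f"faculty = {faculty}: "
          f"{students / faculty:.1f} students per faculty, "
          f"{citations / faculty:.1f} citations per faculty")
```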

I have therefore included the positions of these universities according to their scores for articles in the Science Citation Index-expanded and Social Science Citation Index in 2005, taken from the Shanghai Jiao Tong rankings. This is not the same as the THES measure: it covers one year only and is based on the number of papers, not the number of citations. It therefore measures overall research output of a certain minimum quality rather than the impact of that research on other researchers. The position according to this index is indicated in square brackets.

We can see that Cambridge and Oxford do not do as badly here as they did on citations per faculty. Perhaps they produced research characterised by quantity more than quality, or perhaps the difference is a result of inflated faculty numbers. Similarly, the performance of Peking, the National University of Singapore, Melbourne and Sydney is not as mediocre on this measure as it is on THES’s citations per faculty.

Nonetheless the disparity still persists. Oxford, Cambridge, Imperial College and universities in Asia and Australia are still overrated by the THES-QS review.

1. Cambridge (46) [15]
2. Oxford (63) [17]
3. Harvard (2) [1]
4. Berkeley (7) [9]
5. Stanford (3) [10]
6. MIT (4) [29]
7. Yale (20) [27]
8. Australian National University (83) [125]
9. Tokyo (15) [2]
10. Melbourne (189) [52]
11. Peking (352) [50]
12. National University of Singapore (170) [111]
13. Princeton (10) [96]
14. Imperial College London (95) [23]
15. Sydney (171) [46]
16. Toronto (18) [3]
17. Kyoto (42) [8]
18. Cornell (16)
19. UCLA (19) [21]
20. Chicago (47) [55]
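One way to put a number on the disparity would be a rank correlation between the two orderings. Here is a minimal sketch (assuming scipy is available) using the peer review order and the citations-per-faculty positions in round brackets above:

```python
# Spearman rank correlation between the peer-review order (1-20) and
# the citations-per-faculty positions given in round brackets above.
from scipy.stats import spearmanr

peer_rank = list(range(1, 21))
citations_rank = [46, 63, 2, 7, 3, 4, 20, 83, 15, 189,
                  352, 170, 10, 95, 171, 18, 42, 16, 19, 47]

rho, p = spearmanr(peer_rank, citations_rank)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
# A weak rho would confirm that the "peer review" and citations per
# faculty tell very different stories about the same universities.
```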

Monday, April 23, 2007

Book Review in the THES

A new book on university rankings, The World-Class University and Ranking: Aiming Beyond Status, has appeared and has been reviewed by Martin Ince in the Times Higher Education Supplement (THES). It is edited by Liu Nian Cai of Shanghai Jiao Tong University and Jan Sadlak. You can read the review here (subscription required). I hope eventually to review the book myself.

I must admit to being rather amused by one comment by Ince. He says:

"Although one of its editors is the director of the Shanghai rankings, The World-Class University and Ranking largely reflects university concerns at being ranked. Many contributors regard ranking as an unwelcome new pressure on academics and institutions. Much is made of the "Berlin principles" for ranking, a verbose and pompous 16-point compilation that includes such tips as "choose indicators according to their relevance and validity". The Shanghai rankings themselves fall at the third principle, the need to recognise diversity, because they rank the world's universities almost exclusively on science research. But the principles are silent on the most important point they should have contained - the need for rankings to be independent and not be produced by universities or education ministries."

I would not argue about the desirability of rankings being independent of university or government bureaucracies but there is far greater danger in rankings that are dominated by the commercial interests of newspapers.

Friday, April 20, 2007

Rankings to be Investigated

The Guardian has announced that the Higher Education Funding Council for England (Hefce) will investigate university league tables. Its chief executive, David Eastwood, has said that the council will examine the rankings produced by the Guardian, the Times and the Sunday Times, and whether university policies are influenced by attempts to improve their scores.

The report continues:

'World tables compiled by Shanghai Jiao Tong University and the Times Higher Education Supplement will also be surveyed. The University of Manchester, for example, has made it clear that its strategy is to climb the international rankings, which include factors like the number of Nobel prizewinners. The university has pledged to recruit five Nobel laureates in the next few years.

Prof Eastwood said league tables were now part of the higher education landscape "as one of a number of sources of information available to potential students".

He added: "Hefce has an interest in the availability of high quality useful information for students and the sector's other stakeholders. The league table methodologies are already the subject of debate and academic comment. We plan to commission up-to-date research to explore higher education league table and ranking methodologies and the underlying data, with the intention of stimulating informed discussion.'


Thursday, April 19, 2007

More about Internationalisation

There is a very interesting piece by Ahmad Ismail at highnetworth -- acknowledgement to Education in Malaysia -- that argues that the economic value of an overseas university education for a Malaysian student is minimal. There will, no doubt, be questions about the assumptions behind the financial calculations, and there are of course other reasons for studying abroad.


Even so, if students themselves typically gain little or nothing economically from studying in another country, if their parents suffer a great deal, and if students or taxpayers in the host country have in one way or another to pick up the tab (at Cornell it costs USD 1,700 to recruit an international student), then one wonders what internationalisation has to do with university quality. And one wonders why THES and QS consider it so important.

How Rankings Produce Distortions

A letter to the Cornell Daily Sun from Mao Ye, a student-elected trustee, suggests increasing the recruitment of international students in order to boost the university's position in the US News and World Report rankings.

The question arises of whether these international students would add anything to the quality of an institution. If they would, then surely they would be recruited anyway. But if students are admitted for no other reason than to boost scores in the rankings, they may well contribute to a decline in the overall quality of the student body.


"Two critical ways to improve Cornell’s ranking are to increase the number of applications and to increase the yield rate of admitted students. To achieve this goal, no one can overlook the fact that international applications to all U.S. institutions have recently increased at a very fast pace. For Cornell, the applications from China increased by 42.9 percent in 2005 and 47.5 percent in 2006. We also saw a 40 percent increase in applications from India last year. By my estimation, if international applications continue to grow at the current rate, in 10 years there will be more than 10,000 foreign applications received by the Cornell admissions office. Therefore, good performance in the international market will have a significant positive impact on our ranking in U.S. News and World Report.

How might we get more international students to apply? It’s actually very easy. We can have different versions of application materials, each in various students’ native languages, highlighting Cornell’s achievements in that country and addressing the specific concerns of students from that country. I checked the price and realized we do not need more than $500 to translate the whole application package into Chinese. If we focus translation on the crucial information for Chinese applicants, the cost is as low as $50. Comparatively, this is lower than the cost of recruiting one undergraduate student to a university, which costs an average of $1,700 per student, based on the calculations of Prof. Ronald Ehrenburg, industrial and labor relations. Staff, students, parents and Cornell as a whole will all benefit greatly from this plan."