Scamming the US News College Rankings Scam
Stephen Budiansky, who worked at US News from 1986 to 1998, discusses the scam of the magazine’s college rankings and the various ways in which colleges scam said scam rankings.
To increase selectivity (one of the statistics that go into U.S. News’s secret mumbo-jumbo formula to produce an overall ranking), many colleges deliberately encourage applications from students who don’t have a prayer of getting in. To increase average SAT scores, colleges offer huge scholarships to un-needy but high-scoring applicants to lure them to attend. (The Times story mentioned that other colleges have been offering payments to admitted students to retake the test to raise the school average.)
One of my favorite bits of absurdity was what a friend on the faculty at Case Law School told me they were doing a few years ago: because one of the U.S. News data points was the percentage of graduates employed in their field, the law school simply hired any recent graduate who could not get a job at a law firm and put him to work in the library.
Their other tactic was pure genius: the law school hired as adjunct professors local alumni who already had lucrative careers (thereby increasing the faculty-student ratio, a key U.S. News statistic used in determining the rankings), paid them exorbitant salaries they did not need (thereby increasing average faculty salary, another U.S. News data point), then made it understood that since they did not really need all that money they were expected to donate it all back to the school (thereby increasing the alumni giving rate, another U.S. News data point): three birds with one stone!
As someone who knew a little math, I was really driven bonkers by two things about the college guide:
(a) the logical absurdity of adding together completely unrelated statistics to produce a single measure of merit, the key point being that you can produce an astonishing range of different results depending on the relative weight each component factor is assigned (the first sketch after this list makes this concrete). And there is simply no logical, a priori basis for establishing such a weighting objectively. Do SAT scores count for 30% of the total score? 32.2%? 18.78234%? (How about zero?) It’s the classic apples + oranges – bananas/kumquats = fruit salad approach to statistics, and is completely meaningless.
(b) the fact that the entire exercise was designed to emphasize noise over signal: tiny, random fluctuations from year to year would result in regular changes in the final rankings. Even within its own absurd methodology, no one ever dared broach the question of the actual statistical significance of the differences between the “No. 1” school and, say, the No. 5 school. In fact, there was pretty clearly none. It is of course ridiculous to think that when Harvard, Stanford, Yale, whoever changed places from one year to the next in the final rankings this reflected any actual sudden change in the underlying quality of the schools (the second sketch below simulates exactly this). But the only way to keep selling the damned guide each year was to make sure things kept changing from year to year.
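To make point (a) concrete, here is a minimal sketch in Python. The four schools, their statistics, and both sets of weights are entirely invented for illustration; this is not the actual U.S. News data or formula, whose components and weights are the magazine’s own. The same inputs produce a different No. 1 depending solely on the arbitrary weighting:

```python
# Toy weighted-sum ranking. Schools, stats, and weights are all hypothetical.
# Each tuple: (avg SAT, selectivity, alumni giving rate), pre-scaled to 0-100
# so that naively summing them is even possible.
schools = {
    "Alpha U": (95, 80, 60),
    "Beta C":  (85, 95, 70),
    "Gamma U": (90, 85, 90),
    "Delta C": (80, 90, 95),
}

def rank(weights):
    """Composite score = weighted sum of the stats; return names best-first."""
    scores = {name: sum(w * s for w, s in zip(weights, stats))
              for name, stats in schools.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank((0.6, 0.3, 0.1)))  # SAT-heavy:    ['Gamma U', 'Alpha U', 'Beta C', 'Delta C']
print(rank((0.1, 0.3, 0.6)))  # giving-heavy: ['Delta C', 'Gamma U', 'Beta C', 'Alpha U']
```

Delta goes from dead last to No. 1 without a single underlying number changing; only the weights moved.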
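And a similar sketch for point (b), assuming four schools whose “true” quality differs by amounts far smaller than the measurement noise (the quality numbers here are likewise invented). Re-ranking them each year on noisy observations reshuffles the order annually, which is exactly the kind of movement the guide sold as news:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

# Hypothetical "true" qualities, separated by a mere 0.1 points apiece...
true_quality = {"Harvard": 90.0, "Stanford": 89.9,
                "Yale": 89.8, "Princeton": 89.7}

for year in range(2019, 2024):
    # ...but measured each year with a 0.5-point standard error.
    observed = {name: q + random.gauss(0, 0.5)
                for name, q in true_quality.items()}
    print(year, sorted(observed, key=observed.get, reverse=True))
```

Nothing about the schools changes between years; the “No. 1” bounces around simply because the signal is smaller than the noise.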
What’s amazing is not that the magazine used this gimmick to increase sales or that some try to game the system, but that almost all the colleges and universities in America willingly went along with it. I can sort of understand why a 2nd- or 3rd-tier institution would tout its ranking in some tertiary category (Best value of any medium-sized liberal arts college in the Southwest!) as a means of claiming prestige. But what did Harvard, Yale, Princeton, Stanford, and the like have to gain by playing along?