Op-Ed: The tyranny of college rankings — and why we need to leave them behind
To all those disappointed college applicants whose hopes were pinned on getting into a school highly ranked by U.S. News & World Report or some similar publication: take heart.
This is your chance to be liberated from the tyranny of college rankings.
“Tyranny” is not too strong a word. The people who publish college rankings wrap their products in a seductive veneer of professional expertise and statistical rigor. They express their evaluations in eye-catching numbers, presented in descending order (from 1 to 391, in the case of the U.S. News “national universities” list).
According to that list, UCLA, ranked 20th, is better than USC, ranked 27th. So, it must be true.
Or is it? If you look at the methods used to produce those numbers, you will see that the entire enterprise, like the Emerald City in the Land of Oz, consists mostly of blue smoke and mirrors.
Consider the formulas used by rankers to compute those numbers. Every step in the process, from the selection of variables to the weights assigned to them to the methods for measuring them, is based on essentially arbitrary judgments.
U.S. News, for example, selects 17 metrics for its formula from among hundreds of available choices. Why does it use, say, students’ SAT scores, but not their high school GPAs? Faculty salaries but not faculty teaching quality? Alumni giving, but not alumni earnings? Why does it not include items such as a school’s spending on financial aid or its racial and ethnic diversity?
Likewise, the weights employed to combine those variables into a total score are completely subjective.
U.S. News has somehow concluded that a school’s six-year graduation rate is worth exactly 17.6% of its overall score, but its student-faculty ratio is worth only 1%. To judge a school’s “academic reputation,” it gives a whopping 20% weighting to the opinions of administrators from other colleges, most of whom know very little about the hundreds of schools they are asked to rate — other than where those colleges appeared in the previous year’s ranking. And the publication gives no weight to the opinions of students or graduates.
In addition, different rankers use different ways to measure each variable. Consider graduation rates. Some are based on the percentage of students who earn a degree in six years. Others use an eight-year measure. Some include transfer students, others not. Rankers sometimes measure “student excellence” by matriculants’ average SAT scores or high school GPAs or high school rank-in-class. Others use admissions acceptance rates or yield rates.
Even if you think the rankings formulas make sense, the calculations they rest on are based overwhelmingly on unaudited, unverified data self-reported by the very schools being ranked. Would you invest in a company based on such information?
Throughout their history, the college rankings have been plagued by allegations of fabrication and manipulation of data. In just the past month, USC, Columbia University and Rutgers University have all been accused of submitting “erroneous” or false reports to U.S. News, and a dean at Temple University was sentenced to prison in March for a fraud scheme aimed at boosting the school’s prestige.
Most observers believe that these public revelations represent only the tip of a very large iceberg.
Lurking behind data manipulation lies the even larger problem of schools altering their academic practices in a desperate attempt to gain ranking points. Examples include inflating a school’s fall-semester “class size index” by shifting large introductory lectures into the spring semester. Or boosting its yield rate by expanding early admissions and merit-aid programs that mostly benefit wealthy applicants at the expense of needy applicants. Or improving graduation rates by relaxing academic standards.
Finally, the rankings impose a single formulaic template on hundreds of wonderfully diverse institutions. For example, U.S. News tosses Caltech, Santa Clara University, Chapman University and Fresno State, along with UCLA and USC, into its long list of national universities, as if they were all fungible examples of a uniform product differing only by relative status.
In short, the popular “best colleges” rankings try to force America’s colleges and universities into a rigid hierarchy, based on arbitrary formulas, fed by unreliable data.
Instead of relying on someone else’s subjective idea of what one should want in a college, applicants should ask themselves what they want that will serve their personal goals. Do they see college as a means to immerse themselves in a particular field of study? Obtain an impressive pedigree? Qualify for an economically rewarding career? Prepare for service to the community? Seek guidance for a life of meaning and fulfillment? Or something else?
Part of the tyranny of college rankings is their allure of simplicity. They promise to reduce the complexity of college choice to a simple number. But selecting a college is anything but simple. College is one of the most complex “products” one will ever purchase. And those four years constitute a hugely important period of exploration and personal development.
Choosing a college should be approached as an exercise of self-discovery. Getting rejected by a school highly rated by some perfect stranger may be just what it takes to set applicants on that path.
Colin Diver is the former president of Reed College and former dean of the University of Pennsylvania Law School. He is the author of “Breaking Ranks: How the Rankings Industry Rules Higher Education and What to Do about It.”