Rankings deserve more scrutiny

This week, the release of the U.S. News and World Report’s 2014 Best Colleges list saw Duke ranked seventh among national universities, up one spot from last year. The new rankings will inform the application decisions of high school students around the world and shape college administrators’ “to-do” lists in the coming months.

As a Board, we have long been critical of college rankings—and U.S. News’ in particular—citing seemingly random year-to-year fluctuations and superficial rating criteria as serious deficiencies. Although U.S. News revised its criteria this year, placing less emphasis on peer assessment and more weight on the average SAT scores of admitted students, it is difficult to judge the value of any single change, and we believe our previous criticisms still hold. Even if the intrinsic value of college rankings is close to zero, their instrumental value, that is, their real influence on applicants and administrators, is substantial enough to warrant further examination of their merits and drawbacks.

There are at least three ways to judge the quality of college rankings: by their methodology, their presentation and their criteria. Methodology refers to the means by which variables are measured, for example, whether the quality of Duke’s Writing 101 program is assessed by Duke professors or by faculty at one of our peer schools. Presentation encompasses how information is packaged for applicants’ consumption. Precise numerical rankings, for example, invite readers to put excessive weight on small year-to-year movements, like Duke’s drop from fifth to eighth between 2005 and 2006. Although methodology and presentation matter, rankings criteria are especially important, both to applicants and to current students.

First, criteria shape how applicants think about what a college education should offer them. The emphasis that the U.S. News rankings place on SAT scores, for example, implies that colleges should be judged in large part by the quality of their “inputs.” The ranking system proposed by President Obama, on the other hand, in part emphasizes the earnings of graduates—a metric that assesses the quality of “outputs.” This year’s Forbes rankings weighted student satisfaction—a measure of on-campus quality—more heavily than the other systems did.

Clearly there is disagreement about how higher education should be valued, and we do not claim to have the answer. Without question, however, ranking criteria carry implicit normative claims about the value of college, and those claims shape applicant behavior in real ways.

Second, there is evidence that university administrators alter their schools’ policies to meet specific ranking criteria. In an extreme example, Claremont McKenna College went so far as to misreport SAT scores to boost its rank. In yesterday’s Daily Pennsylvanian, the University of Pennsylvania’s dean of admissions, Eric Furda, admitted that the rankings “do keep me up at night.”

Michael Schoenfeld, Duke’s vice president for public affairs and government relations, on the other hand, called the rankings “fleeting and incomplete,” while observing that Duke should be near the top. Especially at a relatively young school like Duke, it is important to ask whether, and how, concerns about rankings shape the University’s strategy.
