Last week, The Chronicle put Duke’s drop from eighth to tenth in the U.S. News and World Report’s college rankings on its front page, accompanied by a graph of Duke’s ranking over time, with a ribbon over 1997, the year its position peaked. Chest-thumping Facebook statuses are not the most scientific metric, but the continued mania surrounding U.S. News’ annual release both reflects and perpetuates an unhealthy obsession with rankings in higher education.

Although rankings can reflect very broad trends in academic quality, yearly incremental fluctuations are inconsequential. Certainly there is a difference between a school in the top 20 and one in the bottom 20 of the list, which ranks almost 200 national universities. But the differences between first and second, eighth and tenth, or 75th and 78th are all essentially meaningless.

The lack of precision creates a false perception that, in any given year, schools whose rankings have changed become categorically “better” or “worse” than the peers above or below them on the list—misleading applicants who genuinely believe the strict hierarchical order of the U.S. News rankings reflects some vital truth.

The obsession with rankings conditions universities to prioritize improving their number—inculcating a “teach to the test” mentality—rather than pursuing positive educational goals for their own sake. This behavior was taken to an extreme at Claremont McKenna College—where administrators falsified SAT scores to get a bump up in the rankings—indicating just how excessive the rankings-mania has become.

Aside from the meaningless and even imagined differences between schools, the U.S. News rankings rest on almost arbitrary criteria to begin with. Emphasizing financial resources, selectivity or alumni giving rates proposes a very particular conception of a “good” university. The controversial peer review criterion—in which administrators are asked to rate academic programs at their peer schools from one to five—is painfully reductionist, ignoring the complexity of the modern university. Asking a dean at a different school to rate hundreds of Duke programs and subprograms that she is not intimately familiar with is obviously absurd.

Further, in a higher education landscape increasingly roiled by budgetary and philosophical questions, an obsession with rankings can stifle innovation. Sarah Lawrence College, for example, will now accept, but not require, SAT scores in its admissions process after the U.S. News rankings assigned it an average SAT score based on peer institutions. Ignoring the rankings entirely is a luxury no university can afford, especially one trying new things.

University rankings have some validity in ordering the higher education landscape, and they are useful for prospective students just beginning their college search. But U.S. News and World Report is, at bottom, a magazine that needs to sell issues in a crowded field. High schoolers who lean too heavily on the rankings risk choosing schools that have succeeded in realms irrelevant to their own unique, subjective criteria for the ideal college. Universities that focus on meeting the magazine’s arbitrary criteria risk losing sight of their fundamental missions. Next year, rather than celebrating another Duke rise or decrying a possible fall in U.S. News’ eyes, we hope everyone steps back and remembers the superficiality of the rankings—and instead challenges our own University to keep getting better.