Robert Morse, director of data research at U.S. News and World Report, has led the magazine’s signature education rankings since soon after their inception in 1983. He now oversees all of the higher education rankings, including those for national universities, liberal arts colleges and graduate and professional schools. The Chronicle’s Yeshwanth Kandimalla spoke with Morse about his work, the importance of the rankings today and Duke’s place over the years.
The Chronicle: How did the rankings originate, and how did you become involved in developing and compiling them?
Robert Morse: They originated in the early 1980s from a simplistic survey. I didn’t work on them for the first year or two. Doing them originally was not my idea. When we were a newsweekly magazine, we thought of doing a small feature, using a very simplistic methodology. From the first year in 1983, it evolved from that simplistic methodology into the more sophisticated one we have right now. It has taken on a life of its own, well beyond what we originally envisioned.
TC: If you were a senior applying to college, how would you use the rankings?
RM: If I were a senior, I would use them depending on how much I achieved in high school and my academic profile.... Let’s assume that I wasn’t a top-performing student and had only a passing interest in the rankings... The scope of where I go may be limited to regional universities, and it would be smart to look at the data and get some sense of the schools. But I probably wouldn’t be that focused on the rankings. If I had really top credentials, I’d probably care about the rankings. But I’d want them to be just one factor. Hopefully I wouldn’t make my decisions just based on the rankings.
TC: What is the most significant shortcoming in the rankings methodology?
RM: The most significant shortcoming is that what we can’t measure is important. Duke is a very complex institution. The rankings can only measure a few things. They simplify a school into one number. The variables we use are what’s available. That it simplifies complex institutions into a number is certainly the biggest shortcoming. I don’t think the public understands the false precision: schools that are close in the rankings aren’t that different.
TC: Have the rankings contributed to increased application numbers at certain schools?
RM: There have been studies that have shown that, to a limited degree, they do. Other events at certain schools have more of an impact. When schools that aren’t well known do well in basketball, such as George Mason University, which is not highly ranked in our rankings, they get a surge of applications. When schools go from not using the [Common Application] to using it, they see a surge because it’s easier for students to apply. I think there are external factors, but in some cases it can have an impact.
TC: Do these surges in applications increase a college’s selectivity and boost its ranking?
RM: No, the acceptance rate counts for only 1.5 percent of the final score. You would have to see a 30 to 40 percent change in the acceptance rate to change the rankings. I’ll pick on George Mason or Butler University. If the applicant pool changed in the sense that the average SAT scores and high school class standing changed, then the ranking would change. If it’s more of the same type of applicant, then it wouldn’t really change the rankings.
TC: Does the magazine receive a big revenue boost from the rankings? If so, how much?
RM: It’s certainly the most visible product that U.S. News does. The vast majority of people are seeing them for free online, so the revenue is not from people paying to see the national university rankings. It’s the companies or people or schools that are buying ads—there is a surge in sponsorships and banner ads online. It gets half the web traffic. It’s the company’s biggest brand—it’s what people most associate U.S. News with.
TC: Is there a qualitative explanation for Duke’s pattern in the rankings?
RM: The fact that Duke has been able to stay in a relatively narrow range of the top-10 schools shows its consistency over the years. Certainly the third place [in 1997] is a peak, but I’m not sure why. It’s hard to make the top 10, from a statistical perspective. When your retention numbers are that high or your class sizes are that small, it’s hard to build a much higher reputation. Thus, schools in the top 10 maintain it, and it’s very difficult for a school in the 20th range to break into the higher ranks.
TC: What is the vetting process for data you receive from schools? Has it ever been adjusted after major revelations of misleading data provided by schools—most recently at Emory University?
RM: We have a vetting process, but with Emory or Claremont McKenna [College], they did it in a way that couldn’t be cross-checked. They were smart enough to report the same falsified data to multiple agencies. U.S. News checks against the federal government website, and we check a lot of the data against external resources. [But] we don’t have the reach to audit. Cross-checking would be useless if you found the same falsified data everywhere.... We’re hoping that people get to a point where they decide that it’s not worth it.
TC: Is there a difference in reporting requirements between public and private institutions?
RM: There are a lot more reporting requirements for public schools. There are many reports that they have to file with the state. At the federal level, if a school receives Pell Grants, it has to do mandatory filings. The private schools don’t have to do that.... [The University of Illinois College of Law], which is a public school, was falsifying data. The recent cases related to the U.S. News rankings, such as Emory, Claremont McKenna and Iona University, have all been private. In general, institutions preach high ethical standards. But there are individuals in the process, and they, for whatever reason, may not have those standards. Some of this is human weakness.
TC: Are there discernible differences in student experience between schools that are close together in the rankings, such as Duke compared to the schools above it in the top 10?
RM: There are only slight differences among schools in the top 10. There are meaningful differences between a school that’s ranked 10 and one ranked 25. If you’re looking at a gap of 10 to 15 places, you’re getting a meaningful difference.
There is a more recent phenomenon, with the surge of Chinese students coming to the United States. Rankings matter in East Asian countries. They really care. We’re actually more popular as a product in China because of the importance of rankings. If you’re an international student, you’re paying full price and probably not getting any aid. You want to make sure you’re going to a good school for that money.