Broken ranks

U.S. News rankings undervalue crucial criteria.

By Taylor Schwimmer

Hundreds of incoming first-years undoubtedly felt a strong sense of vindication when they checked the 2012 U.S. News “National University Rankings” last month. The University of Chicago had ascended to the lofty rank of #4 from last year’s five-way tie for fifth place. Prima facie, this is a fairly significant development; the only schools that are “better” than us are Harvard, Princeton, and Yale. The rankings seem to suggest that we are not only as “good” as Ivy League schools, but that we are “better” than most. This is an accomplishment, and our professors and administrators deserve to be commended for achieving it.

However, the stridently academic tradition of our (now highly ranked) school demands that we look beyond the face of the matter and answer several pressing questions. What exactly does the ranking represent, and what does our position on the list mean for the University as a whole?

It makes sense to first look at the criteria on which the ranking is based. U.S. News has conveniently provided a four-page document outlining the methodology behind the list. The process involves gathering data on a number of different “indicators” from universities and creating a composite score with which colleges can be ranked. There are seven indicators listed on the U.S. News website, but the article also notes that up to 16 indicators can factor into the decision.

The most striking thing about the rankings is that the single most heavily weighted category is “Undergraduate Academic Reputation.” That category, which accounts for 22.5% of an institution’s score, relies on surveys completed by other university administrators and college guidance counselors. This poses several problems. Perhaps most obvious is that those surveyed have no direct experience with the universities they’re ranking. Certainly, these people may be familiar with their apparent reputations, but they simply do not have the benefit of knowing them as students, professors, or community members. It seems farcical to think that, for instance, the dean of admissions at Stanford University sufficiently understands the necessary context of, or even has the opportunity to deeply ponder, the pros and cons of the U of C’s common core, much less reach an educated verdict based on them. I’m certain that the dean is extremely intelligent and capable, but the fact that he lives and works 2,000 miles away and quite possibly has never been in a University of Chicago classroom is a significant impediment to his ability to render fair judgment. It’s analogous to watching the two-minute trailer of a movie and then writing a full review. You probably have a general idea of the plot and characters, and maybe even a hint of the film’s style, but you cannot possibly comment intelligently on the entire feature.

There are a number of other issues with the “reputation” metric. The article mentions that the survey system allows “top academics—presidents, provosts, and deans of admissions—to account for intangibles.” I find this statement somewhat mystifying: Since when are university administrators “top academics”? Certainly they are highly talented and accomplished individuals, and some do have pure academic experience, but wouldn’t it make more sense for professors, who teach and do research daily, to pass judgment on the intangibles of an academic institution? An internal survey of teaching staff and faculty would almost certainly yield more valuable and more introspective results.

A bit further down the list comes “selectivity,” which is composed of the average SAT and ACT scores and class rankings of admitted students, as well as the acceptance rate. Though these attributes are widely accepted as an accurate way to gauge the quality of the student body at a university, they also have the unintended consequence of motivating universities to change their behavior. For instance, let us consider acceptance rate. Though it comprises only 1.5% of the total raw score, the metric, like all the others, is still immensely important—the difference between ranks is often only a few points. Indeed, acceptance rate in and of itself is likely more directly relevant to a school’s rank than to its actual quality. The University of Chicago has been hugely successful in lowering its acceptance rate recently, and this has certainly increased our rank. However, there is something slightly unsettling about a system that claims to objectively score the merits of an institution through a metric that is not only of otherwise questionable importance, but also easily and actively manipulated.
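The arithmetic behind this point is worth spelling out. Under a weighted-composite scheme of the kind U.S. News describes, even an indicator weighted at 1.5% can swap adjacent ranks when composite scores sit within a fraction of a point of each other. A minimal sketch, using invented weights and scores purely for illustration (the real indicator set and weights are more elaborate):

```python
# Hypothetical illustration: a small-weight metric flipping close ranks.
# All weights and indicator scores below are invented for this example;
# they are not actual U.S. News data.

def composite(scores, weights):
    """Weighted sum of indicator scores (0-100 scale)."""
    return sum(scores[k] * weights[k] for k in weights)

weights = {"reputation": 0.225, "acceptance": 0.015, "other": 0.76}

# Two schools identical except for the acceptance-rate indicator.
school_a = {"reputation": 90, "acceptance": 60, "other": 85}
school_b = {"reputation": 90, "acceptance": 95, "other": 85}

a = composite(school_a, weights)  # 85.75
b = composite(school_b, weights)  # 86.275
# b outscores a by 0.015 * 35 = 0.525 points -- tiny in absolute terms,
# but enough to swap adjacent ranks when composites differ by fractions
# of a point.
```

The lever is cheap to pull: a school can drive its acceptance rate down through marketing and application volume without changing anything about its academics, which is exactly the manipulation the column describes.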

The final few categories include financial resources and alumni giving rate. Again, these are not illogical measures by which to judge a school, but they suffer from a problem that they share with acceptance rate, as noted above, as well as many other indicators and the ranking as a whole: They do not really measure the salient qualities of the school itself. Instead, the scores are simply a composite of external data points that are often merely tangential and always prone to manipulation. The rankings do not measure the academic quality of the coursework, the intelligence and character of the student body, or even the career prospects of outgoing students, which of course are all central to the strength of the University. Again, the movie trailer metaphor is apt.

I am not arguing that the rankings are completely worthless. I believe they help capture some general sentiment about the prestige of various institutions, and I am of course happy about our recent improvement. However, I urge you to reconsider the importance of these rankings. We must be cognizant of U.S. News’s status as a for-profit private corporation whose ultimate goal is selling guidebooks and making money. To me, it is abundantly clear that while the U.S. News rankings measure many things, the true academic quality of an institution is not one of them. In fact, I would posit that the most important indicator of a university’s strength is the collective work ethic and dedication of its students and faculty—something that certainly cannot be counted.

Taylor Schwimmer is a third-year in the College majoring in public policy studies. Follow him on Twitter: @schwimmert.