Selectivity Doesn’t Equal Quality

University admission metrics do not reflect quality of life and education at any given school, giving applicants an incomplete picture.

By Natalie Denby

If you had told me a few months ago that the entire nation would spend its spring debating the ethics of the Full House and Desperate Housewives casts, I would’ve thought you were insane. But in the wake of the college admissions scandal, Lori Loughlin and Felicity Huffman’s faces have become utterly inescapable. As much as you can’t help but be entertained by the salacious details, it’s just as difficult to wrap your mind around the underlying point: the extraordinary measures folks will take to get their kid admitted into USC.

But if the sophistication of the scheme is stunning, and the amount paid in bribes mind-boggling, the fact that higher education has shady or outright criminal backdoors is not surprising at all. Top-tier colleges have always been “selective,” but in recent years that selectivity has morphed into something that looks like Powerball odds. UChicago’s recently announced admissions rate, in the 5 percent range, isn’t unique (Harvard’s, for instance, is lower still), and colleges in the top echelon have driven their admissions rates down tremendously.

Current students tend to view those numbers with a kick of pride. Nothing quite says exclusivity like knowing there are 19 rejected candidates for each admission. But the admissions process isn’t healthy, and neither is the ranking system that motivates it. An excellent Maroon column by Ruby Rorty recently explored economic injustice and U.S. News and World Report’s rankings. Those are extraordinarily weighty issues. On top of our failure to account for economic fairness in college rankings, I’d add that as long as we think of a college’s prestige in terms of selectivity, we’re not going to have the tools to adequately measure quality.

The race to become the most selective institution is driven by a deeply rooted sense that “selective” means “really, really good”: a school that’s an especially picky eater must be excellent. You can spot this in how the most popular college rankings site (U.S. News) uses selectivity, in the form of average test scores and high school class standing, to drive 10 percent of its rankings, even after recently announcing that it would drop admissions rates from its methodology. Our personal experiences reflect an inclination to treasure selectivity, too. We use the phrase “highly selective” interchangeably with “top tier.” If you were awake even momentarily for your own application process, you might have found yourself deep down some rabbit hole, comparison-shopping schools’ average SAT scores on subject tests you didn’t even take.

Our inclination to tie up prestige with selectivity is understandable, in the sense that selective schools are highly coveted, and coveted by highly successful students (and their Ivy League–minded parents). That market demand certainly reflects something valuable. But as a quality indicator, demand is just a proxy for other traits, a preliminary sign that there’s something about a school worth pursuing, like a robust alumni network or excellent teachers.

The issue is that many of those fundamental, desirable traits are either missing or only partially reflected in our rankings, whether formal (think U.S. News again) or casual (that distant cousin who can’t stop reminding you of all the Ivy League schools he thinks are better than yours). Granted, ranking systems build in information about graduation rates, retention, faculty compensation, and class size, and they have peer assessment surveys to suss out academic reputation.

But these aren’t the best ways to evaluate a school. They don’t tell you how good the teachers are, what sort of doors a degree will open, or what an alumni network can do to help a graduate. More granular outcome measurements (better salary and career information, for instance, or a more concentrated effort to measure teacher quality) are missing from these collegiate ranking systems, and, more generally, from public view altogether. U.S. News, and others, would probably argue that peer assessment surveys are intended to compensate for this gap. But you can’t simply rely on reputation-based surveys to give you information that respondents don’t have.

What you’re left with is a sense of prestige that doesn’t speak to quality directly. It does, however, give every college in the country an incentive to become all the more selective, even if U.S. News has dropped the admissions rate. That’s reflected in colleges’ mass migration to the Common Application, aggressive targeting of applicants they won’t actually let in, and other moves intended to dramatically increase the applicant pool. This process may not ease much simply because U.S. News no longer directly uses admissions rates: selectivity is still 10 percent of rankings, and admissions rates are likely factored in indirectly through academic reputation. More generally, admissions rates are deeply tied to prestige in the public imagination.

Admissions-related gamesmanship doesn’t improve school quality, but it certainly makes life worse for applicants. Students have to apply to more and more schools, which only drives admissions rates down further. As those of us with younger siblings can attest, if the term du jour for the admissions cycle was “pressure cooker” a few years ago, it’s something closer to “pressure cooker chucked into a trash compactor” today.

It’s hard to make the case that this helps anyone. On top of making the application process more stressful, it doesn’t align institutional incentives with student interests. The things schools have done to increase their standing are often unrelated to how valuable they actually are. By over-relying on selectivity as a proxy for quality, we encourage these unhealthy gambits. And we forfeit the chance to pressure colleges at all tiers to make changes that would actually help students, and better inform applicants. That might easily be achieved if colleges were explicitly compared to each other on the basis of more informative traits: how students rank teachers on a uniform survey, how career services vary from school to school, or how satisfied students were with postgraduate placement.

U.S. News was right to drop the admissions rate from its methodology. It’s also right to be wary of a reliance on “inputs” over “outputs.” But colleges might be materially improved if these ranking systems began to incorporate more informative measures of student outcomes and college resources. It’s high time we began to demand better information.


Natalie Denby is a fourth-year in the College.