Obviously, the rankings did not play anything approaching a central role in my choice of Amherst, but I think most students will admit that the rankings certainly didn’t count against the College when they were deciding which school to attend. When I saw Amherst’s ranking at the top, I couldn’t help but think that, yeah, I wouldn’t mind telling people (modestly, of course) that I went to a little school ranked as the premier liberal arts college in the country. To resolve my dilemma and determine whether the rankings deserve all the weight they seem to carry, I turned to a few sources I knew would offer less orthodox perspectives: namely, well-respected colleges that no longer participate in the rankings.
First and foremost among these schools was Reed College, another school I considered in my own college search. Reading (no pun intended) an official statement of the college’s stance on the rankings, I gathered several things. The brief discourse mentions “disclosures in 1994 by the Wall Street Journal about institutions flagrantly manipulating data in order to move up in the rankings in U.S. News,” as well as the blatant backlash from the magazine when Reed decided to stop participating. The latter is by far the more compelling of the two points. If U.S. News and World Report can move a school so readily in the rankings, and for what is clearly so subjective (not to mention petty) a reason, surely the integrity of the rankings is questionable.
Reed’s statement also brings up gaps in the assessment of colleges that determines the rankings. Colin Diver, Reed’s president, makes the bold assertion that the rankings “do not attempt, nor claim, to measure the extent to which knowledge is valued and cultivated.” While his critique is certainly valid, the qualities he points to are, by their nature, nearly impossible to measure.
I would argue, however, that the assessment is lacking in another, more concrete way as well. Probably the greatest flaw in the process is the assignment of numerical values to certain qualities of a college and the weighting of those values to produce one cohesive score. Assigning a number to capture the worth of a given attribute necessarily involves a good deal of subjectivity, and deciding which attributes are worth more or less than others is similarly a matter of opinion. However much U.S. News would like me to believe that its formula is purely empirical, it simply cannot be. Somebody, somewhere decides whether athletic prowess in the student body is worth more or less than a pretty campus.
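To make the point concrete, here is a minimal sketch of how such a composite score works; the schools, attributes and weights below are entirely invented, since U.S. News does not publish a reproducible formula. Notice how the same raw data yields opposite orderings under two equally defensible weightings:

```python
# Hypothetical illustration: identical raw scores, ranked under two
# different (equally arbitrary) weighting schemes. All numbers invented.

schools = {
    "College A": {"peer_assessment": 90, "selectivity": 80, "resources": 70},
    "College B": {"peer_assessment": 80, "selectivity": 85, "resources": 88},
}

def composite(scores, weights):
    """Weighted sum of attribute scores; the weights are a matter of opinion."""
    return sum(weights[attr] * value for attr, value in scores.items())

weights_one = {"peer_assessment": 0.6, "selectivity": 0.2, "resources": 0.2}
weights_two = {"peer_assessment": 0.2, "selectivity": 0.3, "resources": 0.5}

for weights in (weights_one, weights_two):
    ranked = sorted(schools, key=lambda s: composite(schools[s], weights),
                    reverse=True)
    print(ranked)  # College A leads under the first scheme, College B under the second
```

Whoever sets the weights, in other words, sets the rankings.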
On the other side of the coin is an article published last month in the New York Times entitled “U.S. News College Rankings: Yes, They Matter.” Catherine Rampell points to several studies that link higher rankings to things like higher class ranks among incoming freshmen and increases in average SAT scores. It should be noted, however, that the increases in SAT scores are on the nearly insignificant order of one to two points, and that the article makes the distinction that “rankings matter more for universities, as opposed to liberal arts colleges. . .” Also, the greatest effects of a change in the rankings were seen at colleges that moved onto or off of the first page of the rankings.
To examine the College’s actual drop in the rankings more carefully: the four-point drop (out of 100) in Amherst’s overall score is mostly the result of a lower Peer Assessment (PA) rating (no, it wasn’t the class of 2012’s fault after all). This means it is not due to weaker statistics gathered by U.S. News but to a microscopically lower subjective evaluation from other academic institutions, which immediately raises the question of why such a drop would occur. Perhaps a handful of other institutions took note of some of the financial difficulties the College has suffered, but there is no way to know for sure. What is most puzzling about the drop is a comparison of some very concrete numbers: admissions rates. While admissions selectivity had no bearing whatsoever on my decision to attend, in most cases there is a direct correlation between selectivity and rank. Only a few schools, like the University of Chicago, defy this trend, usually owing to a self-selecting applicant pool, commonly attributed in Chicago’s case to its strange application essays (besides the school being widely regarded as “where fun goes to die”). But to return to the point (forgive the immodesty, please): Amherst’s acceptance rate is 2.2 percentage points lower than that of Williams, a difference one would expect to register in the rankings, at least through the PA score.
In light of the evidence above, I would conclude that the rankings are useful, but only marginally so. Placing any importance on such a small variation in Amherst’s ranking is, in short, a mistake. Surprisingly, this led me back to my initial response, which was the right one all along: the rankings just aren’t that important. And in the College’s case, the only real damage done by the single-place drop is the bruising of a few overinflated egos. At least now I know why.