By a Nose


Graduate School and College Excellence
Does research reputation influence undergraduate rankings?

Alexander Hicks, Professor of Sociology

Vol. 8 No. 2
October/November 2005




[Table: Peer Scorings and Rankings of Colleges and Graduate Programs and Research]





Academic reputation is an essential academic resource, crucial for recruiting the best students and faculty, and the judgment of that reputation resides largely with one's academic peers. Accordingly, the most heavily weighted criterion in the college rankings of U.S. News & World Report's "America's Best Colleges" is a measure of "peer assessment." The graduate program rankings of the National Research Council and U.S. News also rely heavily on peer assessments. Are these two kinds of assessments, graduate and undergraduate, connected? Do academics assess other colleges mainly in terms of their scholarly quality, a matter more accessible to them than teaching?

To find out, I examine the relation between the "peer assessments" of colleges at National Research University (NRU) level in the U.S. News "America's Best Colleges, 2005" and the rankings of graduate programs at these same institutions in the U.S. News "America's Best Graduate Schools, 2005."

If the scholarly reputation of graduate departments drives peer assessments of colleges, then the peer assessments of colleges in "America's Best Colleges" should resemble those of graduate programs in "America's Best Graduate Schools." If the resemblance holds, then improved collegiate quality and reputation will depend substantially upon improved graduate program quality and reputation. Indeed, the "peer assessment" criterion of "America's Best Colleges" receives a greater weight (25 percent) than any other criterion in the overall score.

To construct its collegiate peer assessment scale, the U.S. News surveys “presidents, provosts, and deans of admission,” asking them to “rate peer schools’ academic programs on a scale from 1 (marginal) to 5 (distinguished),” in order to get a summary assessment that will “account for intangibles such as faculty dedication to teaching.”
In assessing colleges at NRUs, it surveys peers at other NRUs. To construct scores of research and graduate program quality at NRU colleges, I compute a simple program score that sums scores for twelve disciplines: two in the humanities (English and history), four in the social sciences (economics, political science, psychology, and sociology), and six in the sciences (biology, chemistry, mathematics, and physics, plus applied mathematics and computer science). To deal with colleges that lack one or more Ph.D. programs (MIT, for instance, lacks an English Ph.D.), and with "Best Graduate Schools'" selection of programs (which includes at least as many mathematics programs as humanities programs), I constructed three variants of this score. I averaged the simple program score and the three additional indices together to create the "average score" in the accompanying table (all measures are described in more detail in an appendix).

The table presents the simple program score and the average score, along with overall U.S. News college rankings and admissions selectivity from "America's Best Colleges, 2005," and "peer assessment" scores from "Best Colleges." Statistical analyses for the top forty NRUs (including thirty private NRU colleges) show a striking similarity between collegiate peer assessment and aggregated graduate program/research scores. Most straightforwardly, the average score correlates at 0.858 with "peer assessment" (the average correlation for all components of this measure is 0.832). The college and graduate/research assessment scores are also similar in magnitude. The main difference is that graduate/research scores tend to be smaller than "peer assessment" scores, occasionally much smaller in the case of schools with relatively limited Ph.D. offerings (such as Dartmouth and Georgetown, with only five of a possible twelve relevant graduate programs).

Apparently, when scholars judge colleges, they do so in terms of the public currency of scholarship and graduate training rather than the elusive currency of collegiate pedagogy.

This is not the place to attempt a full explanation of the origins or meaning of "peer assessments." Still, to get some idea of what besides graduate and research assessments might underlie "peer assessments" of NRU colleges, I looked at "Best Colleges'" summary data on its other dimensions: ranks for "graduation and retention rates," "faculty resources" such as teacher/student ratios, "student selectivity," "financial resources," and "alumni giving." All of these dimensions of college ranking correlate significantly with "peer assessment." Only one, however, correlates with "peer assessment" as strongly as our measures of graduate/research ranking do: student selectivity correlates at -0.837 with "peer assessments" (negatively because high ranks have low numerical values, such as 1.0). The rest correlate considerably more weakly (between -0.368 and -0.660) with "peer assessment" for the thirty private colleges.

It appears that when academics at NRU colleges judge the merits of colleges at other NRUs, they think of them, in effect, in terms of impressions of those schools' graduate and research excellence and student selectivity, and not much else.

Two interpretations come to mind for what value we should grant a "peer assessment" of NRU colleges that preponderantly measures their scholarship and students but not their pedagogy. On the one hand, we might regard faculty accomplishment and student competitiveness as quite distinct from pedagogic quality, and regard pedagogy as paramount. To the extent that we do, the "peer assessment" measure does not merit our regard.

On the other hand, we might value what "presidents, provosts, and deans of admission" rate when they "rate peer schools' academic programs." We might do this because we value these persons' insight, or because we see others' opinions as self-fulfilling prophecies with concrete consequences not to be ignored. After all, knowledge of teaching quality at other institutions (if not our own) is readily available only through such poor proxies as student-faculty ratios and the dubiously sampled surveys of the Princeton Review. Teaching quality thus might actually be picked up by U.S. News's "peer" measure as an aspect of quality that scholarship and student exclusivity cannot explain. Perhaps scholarship, as a basis for strong teaching content, is one notable contributor to good teaching. High "peer assessments," if only as components of summary college rankings, attract good students, who help educate each other. Indeed, the theory that scholarly reputation drives student quality, and that scholarship and student quality together enhance teaching, is common at top research universities.

Emory's rankings on the key dimensions of assessment in "America's Best Colleges, 2005" are 29 for "peer assessment," 24 for "graduation and retention rates," 9 for "faculty resources," 15 for "student selectivity" and "financial resources," and 36 for "alumni giving." We rank low on the single most heavily weighted factor in the rankings, and that factor in turn depends on our peers' assessment of our scholarly prowess. Strictly from the perspective of enhancing the college's reputation, then, improving the graduate school would appear to be a relatively effective, even optimal, policy. The lack of good, comparable teaching data does not allow me to rule out the possibility that graduate school enhancement might entail an offsetting loss in college teaching quality. Given scholarship's contributions to teaching, however, reputational or otherwise, and our varied ways of supporting good teaching here at Emory, I think the members of the college should view the present drive to enhance graduate education and scholarship at Emory as a windfall.