nrc rankings

The NRC rankings appear to have driven me out of blogging retirement. Here are some things I understand about the sociology rankings after reading material from the Chronicle and the report’s Appendix. Corrections welcome.

1. Books are not counted in the publications per faculty member figure. At all.
2. Citations to books are not counted in the citations per faculty member figure. At all.
3. Multi-authored publications are counted once for each author in the publications-per-faculty figure, and their citations are counted once for each author in the citations-per-faculty figure (a sketch of how this plays out follows the list).
4. The average GRE score figure is based on the quantitative GRE only.
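
For what it’s worth, here is a minimal sketch of how those counting rules play out for a made-up three-person department. The names, publication counts, and citation figures are all hypothetical; the only point is that a book disappears from both figures while a co-authored article is credited once per author.

```python
# Hypothetical mini-department illustrating the counting rules above:
# books are ignored entirely, and a co-authored article is credited
# once for every in-department author.
faculty = ["A", "B", "C"]

publications = [
    {"authors": ["A", "B"], "kind": "article", "citations": 40},  # co-authored article
    {"authors": ["C"], "kind": "article", "citations": 5},
    {"authors": ["C"], "kind": "book", "citations": 200},         # book: not counted at all
]

articles = [p for p in publications if p["kind"] == "article"]

# The co-authored article counts once for A and once for B, so the
# department gets credit for 3 publications even though only 2 articles exist.
pub_credits = sum(len(p["authors"]) for p in articles)
cite_credits = sum(p["citations"] * len(p["authors"]) for p in articles)

print("publications per faculty member:", pub_credits / len(faculty))          # 1.0
print("citations per faculty member:", round(cite_credits / len(faculty), 2))  # 28.33
```

Under rules like these, a department whose members routinely co-author in large teams will look far more productive than a department of book writers, no matter how influential the books are.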

Also, if you are wondering about the #1 sociology programs in some key areas:
Most publications per faculty member: University of California-San Francisco
Most citations per faculty member: University of New Hampshire
Average time to Ph.D. for students: Bowling Green State University (3.25 years!)
PhDs with academic jobs: University of Miami
Average GRE score: University of Iowa
Percentage of students completing in 6 years or less: Baylor University
Percentage of new students with external grants: Temple University

Author: jeremy

I am the Ethel and John Lindgren Professor of Sociology and a Faculty Fellow in the Institute for Policy Research at Northwestern University.

7 thoughts on “nrc rankings”

  1. It takes about 5 minutes to calculate the midpoints of those intervals and then order everyone into a rough point-estimate rank (a sketch of that calculation follows the list below). That little quick-and-dirty exercise makes some of the surprises in the ranking a little clearer. Here are a few from the “S” rankings:

    Miami #7
    Nebraska #12
    UC San Francisco #14
    Chicago #18
    Bowling Green #20
    New Hampshire #24
    Indiana #27
    Wisconsin #28
    Northwestern #33
    Baylor #34
    Berkeley #35 (Note: #1 in most recent US News survey)
    Syracuse #36
    UCLA #67
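
    For anyone who wants to replicate that quick-and-dirty exercise, here is a minimal sketch; the rank intervals below are placeholders rather than the actual NRC ranges, so only the method carries over.

    ```python
    # Rough point-estimate ranking from each program's 5th/95th percentile
    # rank range, as described above. The ranges here are hypothetical.
    s_rank_ranges = {
        "Program A": (3, 11),
        "Program B": (1, 29),
        "Program C": (8, 40),
    }

    # Midpoint of each program's rank interval
    midpoints = {prog: (lo + hi) / 2 for prog, (lo, hi) in s_rank_ranges.items()}

    # Sort by midpoint to get the quick-and-dirty ordering
    for place, (prog, mid) in enumerate(sorted(midpoints.items(), key=lambda kv: kv[1]), 1):
        print(f"{place}. {prog} (midpoint rank {mid:.1f})")
    ```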


  2. The broad pattern of the S-outcomes seems like what we’d expect from a combination of bad measures and a failure to capture any good measure of scholarly quality. Bad metrics coupled with the weights from the S-survey will tend to idiosyncratically benefit specific schools in very particular circumstances (e.g., UCSF sociologists with dozens of publications in medical journals, each co-authored with many people), while systematically downgrading schools whose quality is not captured by the measures. So we see the S-rankings negatively affecting schools, especially public schools, whose faculty specialize in books, who have large graduate programs with a lot of heterogeneity, whose students are not all that well funded, who take longer than average, and so on. Hence the fact that Berkeley and Wisconsin end up well down the S-list.

    I wonder whether respondents to the S-survey were asked about the importance of a measure under a general description (“How important do you think faculty research productivity is to program quality?”) or with respect to the actual instrument (“How important do you think the volume of published items, excluding books and without consideration of the venue of publication, is to program quality?”). I’m guessing the former: the weights are calculated from a general consideration and applied to numbers from a particular and probably crummy measure.

    The R-ranks have the benefit of incorporating some influence from a measure of reputation and quality, but because it’s been put through the mangle you can only see it intermittently at work.


  3. The ranks are one thing, but the data are useful for lots of other things. For example, where can we do better, and how? Northwestern had a much higher minority and female student composition than we did. If I knew anyone from Northwestern, I might ask him/her about it at a Scatterplot party. Penn State, to whom we lose students in recruitment pretty often, has great time-to-degree and completion rates. What are they doing? Etc.

    Apart from the competition, there is a chance the data could help departments improve themselves, motivate their deans, and so on.


  4. Not counting books or citations to books is just ridiculous (and I’m an article writer). I think this is the easiest flaw to point out to deans, prospective students, etc. One wonders how the committee putting together the NRC ranking methodology did not catch this major oversight…


  5. These four issues, and especially not counting books at all, seem like fatal flaws. Does anyone know if there is discussion of these problems among scholars or blogs in anthropology or political science? These problems would seem just as relevant for rankings in these disciplines. Someone should write an article for The Chronicle of Higher Education about this.

