two major lessons from today

1. Reputational rankings: maybe not so bad after all.

2. Uncertainty: if one is going to provide multiple sets of rankings and confidence intervals as a way of gesturing toward the uncertainty in the evaluation process, one might also consider modeling the uncertainty in how books should be weighted relative to articles, rather than simply counting a book as one article and leaving it at that. (A rough sketch of what I mean is below.)
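
To make that concrete, here is a minimal sketch of the idea in Python, with entirely made-up numbers: the two toy departments, their publication counts, and the 1-to-8 range of article-equivalents per book are all hypothetical. Rather than fixing one book = one article, treat the book weight as uncertain, draw it from a distribution, and see how much the resulting productivity comparison moves around.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-faculty publication counts for two toy departments.
# Each row is one faculty member: (articles, books) over the evaluation window.
dept_a = np.array([[12, 0], [8, 1], [5, 2]])   # article-heavy
dept_b = np.array([[3, 3], [4, 2], [2, 4]])    # book-heavy

def productivity(dept, book_weight):
    """Mean per-faculty output, counting one book as `book_weight` articles."""
    articles, books = dept[:, 0], dept[:, 1]
    return (articles + book_weight * books).mean()

# Instead of fixing book_weight = 1 (one book = one article), draw it from a
# distribution that expresses uncertainty about how books should be weighted,
# here (arbitrarily) uniform between 1 and 8 article-equivalents.
weights = rng.uniform(1, 8, size=10_000)

scores_a = np.array([productivity(dept_a, w) for w in weights])
scores_b = np.array([productivity(dept_b, w) for w in weights])

# How often does each toy department come out ahead once the book weight
# is treated as uncertain rather than fixed?
print("P(dept A > dept B) =", (scores_a > scores_b).mean())
print("dept A, 5th-95th percentile:", np.percentile(scores_a, [5, 95]))
print("dept B, 5th-95th percentile:", np.percentile(scores_b, [5, 95]))
```

In this made-up example the two departments trade places depending on where the book weight falls (the article-heavy department wins only when a book counts for less than about 2.7 articles), which is exactly the sort of uncertainty that the reported confidence intervals ought to reflect.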

More could be said. Well, I suppose I should also say that, in case anyone looks at the NRC spreadsheet in detail and wonders, Northwestern University does indeed provide “Instruction in Statistics.” And did so in 2006! So I’m not sure how we came to be tallied as not offering it.

Author: jeremy

I am the Ethel and John Lindgren Professor of Sociology and a Faculty Fellow in the Institute for Policy Research at Northwestern University.

11 thoughts on “two major lessons from today”

  1. Jeremy —
    I’ve also been trying to figure out what’s deep in the statistical engine of the ratings, and it looks like the boiler room of a 19th-century gothic science fiction novel. For one, my guess is that a major book = an article in a 3rd-tier journal for the research productivity component, the biggest part of the input. Further: if you read the report, somewhere around p. 60, you see some very revealing language which says, roughly: many of the committee members thought this was nonsense and would have preferred straight reputational ratings; yes, there are some really weird results that you should discount; oh, and by the way, we had a lot of problems just getting the raw data right.

    Claude

  2. Ah, according to the Chronicle, average GRE score for the nonhumanities is based only on the quantitative score, which might explain how Iowa was #2 in terms of entering student scores.

  3. A well-executed reputational survey is honest about what’s being measured and captures a core feature of disciplinary fields. The NRC should have done the S-rankings more or less as they did them, but dropped measures of publications/productivity/citations in the block of variables used to produce it. Then they could have done a straight reputational survey a la the PGR (including a measure of how familiar the rater was with the department). That would leave researchers to model ex post the relationship between measurable features of departments qua organizations and reputation — instead of doing what they did to generate the R rankings (and the publication aspects of the S rankings).

  4. Re GRE choices, so they decided social science was the same as physics?

    Um, I’m too buried in other committee work to go crawling into this, which is just as well, as I’m sure it would not help my productivity on my real research.

    But not counting books at all? Counting them the same as any journal article? For that matter, just counting journal articles. The mind boggles.

    I can understand how your curriculum may have been misrepresented. The data entry portal was very difficult to navigate, and what one was supposed to enter was year-specific. One slip and your chair/DGS could have erased a page of info and then just gone on with another part of the survey. It was awful. Did anyone else serve as the point person? I was chair and had to do all of it for SIU. Let me tell you, it would be very easy for a shirking DGS or chair to have skipped vital parts of the study…
