The 2013/14 US News Rankings

This morning, US News and World Report published their graduate school rankings. However, rather than report rankings based on the data they collected last fall, they decided (for the first time in history) to average data collected in 2008 and 2012 to generate many of the lists, including sociology’s.

Peer rankings move little, but even littler when they’re created using an average of old and new. Omar and I spent the morning determining the actual 2013/14 ranks – or at least a close approximation, assuming equal sizes of the two samples – by considering the new scores in light of the old ones.
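For the curious, the arithmetic behind our approximation is simple. Here is a minimal sketch in Python (the department names and scores are made up for illustration; like our calculation, it assumes equal sample sizes in the two rounds):

```python
# If the reported score is the mean of equally sized 2008 and 2012 samples,
# then reported = (old + new) / 2, which rearranges to new = 2 * reported - old.

def implied_new_score(reported_avg, old_score):
    """Recover the implied 2013 peer score from the two-round average."""
    return 2 * reported_avg - old_score

# Illustrative values only, not actual USNWR data: (reported avg, 2009 score)
departments = {"Dept A": (4.4, 4.5), "Dept B": (4.1, 3.9)}
for name, (avg, old) in departments.items():
    # USNWR reports peer scores rounded to the nearest tenth
    print(name, round(implied_new_score(avg, old), 1))
```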

Here are single-year 2013/14 rankings and scores, based on what we assume was collected this last round:

[Table: single-year 2013/14 rankings and scores]

There isn’t a lot of movement in the top 10 – although our method has Berkeley drop from #1 to #4 – but we’re sure that most readers will find something of interest in the data we used (including 2009 ranks and scores, the averages reported for 2013, the movement between 2009 and our “actual” 2013 rankings, and the discrepancy between USNWR 2013 rank and our calculation). Maybe someone in Florida paid off USNWR?

22 Comments

  1. Posted March 12, 2013 at 10:32 am | Permalink

    It’s not cooking the books, it’s simply catering to those prospective students planning to start graduate school in 2008.


  2. Posted March 12, 2013 at 11:35 am | Permalink

    The rankings provide even more evidence of economics being a winner-take-all market. At the top, it is the only social science with any 5 scores (and it has four of them). At the bottom, it has the highest proportion of departments with prestige ratings lower than 2 (40%, compared to 15% in sociology or 27% in political science).


    • Posted March 13, 2013 at 8:26 pm | Permalink

      My experience with an interdisciplinary grant panel(s) would suggest that economists are distinguished in the same way when it comes to scoring grants. They do not mess around with the middle of the scale. There’s probably a proof somewhere that if you are going to participate in a many-person evaluative exercise, the rational actor/sociopath would only use the extreme points of the scale.


  3. Posted March 12, 2013 at 2:45 pm | Permalink

    Jessica: Thanks for doing this, even though it confirms the glacial pace of change in our business.


  4. Posted March 12, 2013 at 10:22 pm | Permalink

    I did the survey this time, as a grad director. It was complicated, trying to remember where all my old friends and enemies work now, which places poached my students or hired them, rumors and gossip about tenure cases, couples that split up, and program quality – for god-knows how many schools. Reputation is complicated – you have to get it just right.


    • Posted March 13, 2013 at 6:36 am | Permalink

      Perhaps now, you only have to get it half right, then average it with the last results.


      • Posted March 13, 2013 at 7:11 am | Permalink

        That’s true – translating the decay function of a lingering grudge into a Likert scale is tricky.


  5. bjrisman
    Posted March 13, 2013 at 12:05 pm | Permalink

    Anyone have any idea how to get data like this for the specialty area rankings?


    • Posted March 13, 2013 at 12:27 pm | Permalink

      Specialty area rankings are different, so I don’t think any approximation is possible.

      From US News: “Schools in the specialty rankings, which are based solely on nominations from school officials, are numerically ranked in descending order based on the number of nominations they received as long as the school/program received seven or more nominations in that specialty area. This means that schools ranked at the bottom of each specialty ranking have received seven nominations.”

      I also don’t think these were averaged across the two years – they appear to be based only on nominations from this last round – but it’s unclear.


  6. Posted March 13, 2013 at 1:59 pm | Permalink

    There is a theory, prominent among people at schools that look better in the subfield rankings than the total rankings, that the subfield reputational rankings are more meaningful because they reflect more specific knowledge.

    Another way to look at the specialty rankings, though, is as a measure of breadth of excellence (in reputation): in how many specialties is each of the top programs ranked? Here is that list (e.g., Berkeley was ranked in 6 out of 7 possible subfields):

    6: Berkeley
    5: Michigan, Stanford
    4: Harvard, Princeton, Wisconsin
    3: UCLA
    2: Chicago, Indiana, Massachusetts, Maryland, Northwestern, NYU, Penn, UNC
    1: Arizona, Brown, Emory, Iowa, Penn State, Rutgers, Texas, UCSB, USC, Yale, Washington, Cornell

    This is the ranking for the excellent student who prioritizes reputation and doesn’t know what to specialize in.


  7. Posted March 14, 2013 at 8:16 am | Permalink

    On the methodology page, USN&WR say that they surveyed 117 departments with a response rate of 31%. So 36 people filled out this survey.

    My department, UNC, has a 4.5. Let’s assume that half the respondents gave us a 4 and half a 5. There might have been some 3s thrown in, but USN&WR tosses the top and bottom two scores.

    If we assume that this is a random sample of sociology elites and that we can compute normal OLS standard errors, this gives us a 95% CI of 4.31-4.68. With rounding to the nearest tenth, that could put us ranked anywhere between 2nd and 11th. In the middle of the pack, I bet a department’s CI includes about 20 other departments.
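    In case anyone wants to check me, here’s a minimal sketch of that calculation (assuming exactly half 4s and half 5s from n = 36 raters, and a normal quantile, which is why it differs slightly from my interval above):

    ```python
    import math

    # 36 hypothetical raters: half give a 4, half give a 5 (mean = 4.5)
    scores = [4] * 18 + [5] * 18
    n = len(scores)
    mean = sum(scores) / n

    # Sample standard deviation and the usual normal-approximation 95% CI
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    se = math.sqrt(var / n)
    lo, hi = mean - 1.96 * se, mean + 1.96 * se
    print(f"mean = {mean:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")  # about (4.33, 4.67)
    ```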


    • Posted March 14, 2013 at 8:30 am | Permalink

      I would love to know which departments actually participated in the survey.

      Maybe one of the reasons that they averaged the two data sets was the fall-off in response rates. If I remember correctly, in 2008/9, sociology was about average with 43% (but I could be transposing those numbers in my head and it was actually 34%). The other disciplines have experienced similar drops.


      • Posted March 14, 2013 at 9:29 am | Permalink

        2008 Methodology: “The surveys asked about Ph.D. programs in criminology (response rate: 90 percent), economics (34 percent), English (31 percent), history (23 percent), political science (37 percent), psychology (25 percent), and sociology (43 percent).”

        2012 Methodology: “The surveys asked about Ph.D. programs in criminology (response rate: 90 percent) [Criminology was not re-surveyed.], economics (25 percent), English (21 percent), history (19 percent), political science (30 percent), psychology (16 percent), and sociology (31 percent).”

        I guess the rise of cell-phone-only departments is really hurting their response rate.


      • Posted March 20, 2013 at 10:49 pm | Permalink

        In our department (Boston College), neither the Chair nor the Graduate Program Director received the survey, and we just heard from another school that they did not receive the survey either, so I wonder if there was some problem with the mailing.

        But in general, the response rates also declined between 2004 and 2008:

        Response rates for 2004, 2008, and 2012:
        Economics: 38%, 34%, 25%
        English: 39%, 31%, 21%
        History: 33%, 23%, 19%
        Political science: 40%, 37%, 30%
        Psychology: 23%, 25%, 16%
        Sociology: 50%, 43%, 31%


    • Posted March 14, 2013 at 10:25 am | Permalink

      Yikes. If I had known the N was so small I wouldn’t have revealed that I did it. Who’s gonna protect my confidentiality?

      But: Looks from the preamble as if Omar and Jessica assumed equal sample sizes.


      • Posted March 14, 2013 at 10:32 am | Permalink

        But now that we know you represent 3% of the USN&WR voters, you won’t have to pay for drinks at the ASAs ever again.


      • Posted March 14, 2013 at 11:38 am | Permalink

        Alas, we did assume equal sample sizes. In my rush to get our analyses out there, and incredulous over the averaging, I didn’t stop to check the response rates. The shame!


      • Posted March 14, 2013 at 1:12 pm | Permalink

        If my calculations are correct, weighting the scores doesn’t change much, especially if you round to tenths like USNWR does.
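        Here’s a sketch of what I mean, using the two sociology response rates (43% and 31%) as relative weights – which assumes the same number of surveys went out both years – with made-up scores:

        ```python
        # Back out the new score when the two rounds have unequal sample sizes:
        # reported = (n_old * old + n_new * new) / (n_old + n_new)
        # =>  new = (reported * (n_old + n_new) - n_old * old) / n_new

        def implied_new_score_weighted(reported, old, n_old, n_new):
            return (reported * (n_old + n_new) - n_old * old) / n_new

        # Illustrative: reported average 4.4, 2009 score 4.5,
        # response rates 43 and 31 standing in for relative sample sizes
        equal = 2 * 4.4 - 4.5
        weighted = implied_new_score_weighted(4.4, 4.5, 43, 31)
        print(round(equal, 1), round(weighted, 1))  # both 4.3 after rounding to tenths
        ```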


    • Posted March 14, 2013 at 12:40 pm | Permalink

      Don Tomaskovic-Devey points out that I am an idiot. From the first paragraph of the methods section, “Each school offering a doctoral program was sent two surveys (with the exception of criminology, where each school received four).” So n≈62, and scores are more like +/- .15, which is still the difference between 39th and 28th place.


    • Posted March 14, 2013 at 7:15 pm | Permalink

      That approach would be much more consistent in style with the ranges the NRC presented in its most recent round. (People LOVED that strategy!) Those CIs, as you point out, get wider the further down the list you go (with roughly the bottom half all overlapping).


  8. Posted March 14, 2013 at 10:49 am | Permalink

    The rank-order correlation (I’m old fashioned) between the USN list (new, unadjusted) and the current OrgTheory poll results is .96 (.92 for the top 20).
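    For anyone who wants to replicate this against their own lists, a quick sketch (the rank vectors here are hypothetical, not the actual lists):

    ```python
    from scipy.stats import spearmanr

    # Hypothetical ranks for the same ten departments on two lists
    usn_ranks = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    poll_ranks = [2, 1, 3, 5, 4, 6, 8, 7, 10, 9]

    rho, pval = spearmanr(usn_ranks, poll_ranks)
    print(f"Spearman's rho = {rho:.2f}")  # 0.95 for these made-up ranks
    ```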


    • Posted March 18, 2013 at 3:06 pm | Permalink

      Something that few people have talked about (that I’ve seen) is the change in the rating (rather than the ranking) over time. Consider these lists (limited to top 30ish)…

      Departments increasing: Irvine, Penn State, UT-Austin, Duke (obviously!), Penn, UCLA, Stanford, Princeton

      Departments decreasing: Berkeley, U. of Washington, Maryland

      (BTW, all of these departments moved .2 in either direction.)


3 Trackbacks

  1. [...] by way of additional measures. Omar Lizardo and Jessica Collett have already pointed out that U.S. News decided to cook the rankings by averaging the results from this year’s survey with the previous two rounds. They provide [...]


  2. [...] the results of the current poll by assuming equal weighting for both the new and old poll results (explanation). For the Departments that moved from ranked to unranked in the 2013 rankings I assumed a [...]


  3. By USNWR’s Small N problem | orgtheory.net on March 14, 2013 at 11:32 am

    [...] been looking a little more closely at the U.S. News and World Report rankings. Over at Scatterplot, Neal Caren points out that U.S. News’s methods page has some details on the survey sample size and response rates. [...]

