The 2013/14 US News Rankings

This morning, US News and World Report published its graduate school rankings. However, rather than report rankings based on the data they collected last fall, they decided (for the first time in history) to average data collected in 2008 and 2012 to generate many of the lists, including sociology.

Peer rankings move little, but even littler when they’re created using an average of old and new. Omar and I spent the morning determining the actual 2013/14 ranks – or at least a close approximation, assuming equal sizes of the two samples – by considering the new scores in light of the old ones.

Here are single-year 2013/14 rankings and scores, based on what we assume was collected this last round:
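The arithmetic behind our reconstruction is simple: if the published score is an equal-weight average of the old and new surveys, the new score can be backed out directly. A minimal sketch, assuming equal sample sizes in the two rounds (the function name and example numbers are ours, for illustration):

```python
# Recover the single-year 2013/14 peer score from the published average,
# assuming the 2009 and 2013 samples were weighted equally:
#   avg = (old + new) / 2  =>  new = 2 * avg - old
def recover_2013_score(avg_2013_reported: float, score_2009: float) -> float:
    """Back out the single-year 2013 score from the reported two-year average."""
    return 2 * avg_2013_reported - score_2009

# Hypothetical example: a 2009 score of 4.4 and a reported average of 4.5
# imply a single-year 2013 score of 4.6.
print(round(recover_2013_score(4.5, 4.4), 1))
```

If the two samples were not in fact equal in size, the recovered scores would shift accordingly, which is why we call these an approximation.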


There isn’t a lot of movement in the top 10 – although our method has Berkeley drop from #1 to #4 – but we’re sure that most readers will find something of interest in the data we used (including 2009 ranks and scores, the averages reported for 2013, the movement between 2009 and our “actual” 2013 rankings, and the discrepancy between USNWR 2013 rank and our calculation). Maybe someone in Florida paid off USNWR?

25 thoughts on “The 2013/14 US News Rankings”

  1. The rankings provide even more evidence of economics being a winner-take-all market. At the top, it is the only social science with any 5 scores (and it has four of them). At the bottom, it has the highest proportion of departments with prestige ratings lower than 2 (40%, compared to 15% in sociology or 27% in political science).


    1. My experience with an interdisciplinary grant panel(s) would suggest that economists are distinguished in the same way when it comes to scoring grants. They do not mess around with the middle of the scale. There’s probably a proof somewhere that if you are going to participate in a many-person evaluative exercise, the rational actor/sociopath would only use the extreme points of the scale.


  2. I did the survey this time, as a grad director. It was complicated, trying to remember where all my old friends and enemies work now, places that poached my students, or hired them, rumors and gossip about tenure cases, couples that split up, and program quality – for god-knows how many schools. Reputation is complicated – you have to get it just right.


    1. Specialty area rankings are different, so I don’t think any approximation is possible.

      From US News: “Schools in the specialty rankings, which are based solely on nominations from school officials, are numerically ranked in descending order based on the number of nominations they received as long as the school/program received seven or more nominations in that specialty area. This means that schools ranked at the bottom of each specialty ranking have received seven nominations.”

      I also don’t think these were averaged between the two years; they appear to be based only on data/nominations from this last round, but it’s unclear.


  3. There is a theory, prominent among people at schools that look better in the subfield rankings than the total rankings, that the subfield reputational rankings are more meaningful because they reflect more specific knowledge.

    Another way to look at the specialty rankings though is as a measure of breadth of excellence (in reputation): how many specialties are ranked at the top programs? Here is that list (e.g., Berkeley was ranked in 6 out of 7 possible subfields)

    6: Berkeley
    5: Michigan, Stanford
    4: Harvard, Princeton, Wisconsin
    3: UCLA
    2: Chicago, Indiana, Massachusetts, Maryland, Northwestern, NYU, Penn, UNC
    1: Arizona, Brown, Emory, Iowa, Penn State, Rutgers, Texas, UCSB, USC, Yale, Washington, Cornell

    This is the ranking for the excellent student who prioritizes reputation and doesn’t know what to specialize in.


  4. On the methodology page, USN&WR say that they surveyed 117 departments with a response rate of 31%. So 36 people filled out this survey.

    My department, UNC, has a 4.5. Let’s assume that half the respondents gave us a 4 and half a 5. There might have been some 3s thrown in, but USN&WR tosses the top and bottom two scores.

    If we assume that this is a random sample of sociology elites and that we can compute normal OLS standard errors, this gives us a 95% CI of 4.31-4.68. With rounding to the nearest tenth, that could put us ranked somewhere between 2nd and 11th. In the middle of the pack, I bet a department’s CI includes about 20 other departments.
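A rough sketch of that back-of-the-envelope interval, under the stated assumptions (36 respondents, an even split of 4s and 5s; the raw scores aren't published, so this is illustrative only and comes out slightly narrower than the 4.31-4.68 quoted above, depending on the assumed spread):

```python
import math

# Hypothetical score distribution: 18 respondents give a 4, 18 give a 5.
scores = [4] * 18 + [5] * 18
n = len(scores)
mean = sum(scores) / n

# Sample standard deviation and standard error of the mean
sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))
se = sd / math.sqrt(n)

# Normal-approximation 95% confidence interval
lo, hi = mean - 1.96 * se, mean + 1.96 * se
print(f"mean={mean:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```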


    1. I would love to know which of the departments actually participate in the survey.

      Maybe one of the reasons that they averaged the two data sets was the fall-off in response rates. If I remember correctly, in 2008/9, sociology was about average with 43% (but I could be transposing those numbers in my head and it was actually 34%). The other disciplines have experienced similar drops.


      1. 2008 Methodology: “The surveys asked about Ph.D. programs in criminology (response rate: 90 percent), economics (34 percent), English (31 percent), history (23 percent), political science (37 percent), psychology (25 percent), and sociology (43 percent).”

        2012 Methodology: “The surveys asked about Ph.D. programs in criminology (response rate: 90 percent) [Criminology was not re-surveyed.], economics (25 percent), English (21 percent), history (19 percent), political science (30 percent), psychology (16 percent), and sociology (31 percent).”

        I guess the rise of cell phone only departments is really hurting their response rate.


      2. In our department (Boston College), neither the Chair nor the Graduate Program Director received the survey, and we just heard from another school that they did not receive the survey either, so I wonder if there was some problem with the mailing.

        But in general, the response rates also declined between 2004 and 2008:

        Response rates for 2004, 2008, and 2012:
        Economics: 38%, 34%, 25%
        English: 39%, 31%, 21%
        History: 33%, 23%, 19%
        Political science: 40%, 37%, 30%
        Psychology: 23%, 25%, 16%
        Sociology: 50%, 43%, 31%


    2. Yikes. If I had known the N was so small I wouldn’t have revealed that I did it. Who’s gonna protect my confidentiality?

      But: Looks from the preamble as if Omar and Jessica assumed equal sample sizes.


      1. Alas, we did assume equal sample sizes. In my rush to get our analyses out there, and incredulous over the averaging, I didn’t stop to check the response rates. The shame!


    3. Don Tomaskovic-Devey points out that I am an idiot. From the first paragraph of the methods section, “Each school offering a doctoral program was sent two surveys (with the exception of criminology, where each school received four).” So n≈62, and scores are more like +/- .15, which is still the difference between 39th and 28th place.
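The half-width of the interval shrinks with the square root of the sample size, so doubling n from 36 to roughly 62 tightens things noticeably. A quick check under the same equal-split assumption (SD of about 0.5; the exact figure depends on the unpublished score distribution):

```python
import math

# Half-width of a normal-approximation 95% CI, 1.96 * sd / sqrt(n),
# for the two candidate sample sizes discussed above.
sd = 0.5  # assumed spread of scores; the true SD isn't published
for n in (36, 62):
    half_width = 1.96 * sd / math.sqrt(n)
    print(f"n={n}: +/- {half_width:.2f}")
```

This simple version lands near +/- .12 at n=62; the +/- .15 above presumably reflects a somewhat larger assumed spread.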


    4. That approach would be much more consistent in style with the NRC ranges presented in its most recent round. (People LOVED that strategy!) Those CIs, as you point out, became wider the further down the list you went (with roughly the bottom half overlapping).


    1. Something that few people have talked about (that I’ve seen) is the change in the rating (rather than the ranking) over time. Consider these lists (limited to top 30ish)…

      Departments increasing: Irvine, Penn State, UT-Austin, Duke (obviously!), Penn, UCLA, Stanford, Princeton

      Departments decreasing: Berkeley, U. of Washington, Maryland

      (BTW, all of these departments moved .2 in either direction.)

