hmm.

Top 10 sociology programs in terms of quality of graduate students, using the primary measure in the NRC (average quantitative GRE score):

1. UNIVERSITY OF IOWA
2. STANFORD UNIVERSITY
3. YALE UNIVERSITY
4. PRINCETON UNIVERSITY
5. UNIVERSITY OF CALIFORNIA-BERKELEY
6. UNIVERSITY OF NORTH CAROLINA AT CHAPEL HILL
7. UNIVERSITY OF MICHIGAN-ANN ARBOR
8. HARVARD UNIVERSITY
9. COLUMBIA UNIVERSITY IN THE CITY OF NEW YORK
10. NEW YORK UNIVERSITY

The other measure of graduate student quality is percentage of first year students with external fellowships. That top 10:

1. TEMPLE UNIVERSITY
2. UNIVERSITY OF ARIZONA
3. WAYNE STATE UNIVERSITY
4. UNIVERSITY OF NORTH TEXAS
5. HARVARD UNIVERSITY
6. UNIVERSITY OF CALIFORNIA-SAN DIEGO
7. UNIVERSITY OF PENNSYLVANIA
8. PRINCETON UNIVERSITY
9. UNIVERSITY OF CALIFORNIA-LOS ANGELES
10. OKLAHOMA STATE UNIVERSITY MAIN CAMPUS

Author: jeremy

I am the Ethel and John Lindgren Professor of Sociology and a Faculty Fellow in the Institute for Policy Research at Northwestern University.

11 thoughts on “hmm.”

  1. Not sure if this is the source of your “hmmm,” but the second list is likely a function of how people interpreted “external.” My uni is not on this list, even though 100% of our first year students get a fellowship. (And, in fact, all students get 2 years of fellowship in the guaranteed 5-year package.) But, because the fellowships come from the Graduate School, whoever filled out the data sheet probably didn’t count them as “external.”

    Garbage in, garbage out.

  2. I think external was supposed to mean external to the university, e.g., an NSF pre-doc. I haven’t investigated the funding systems of Temple or North Texas.

    I intended a polysemic “hmm.”

  3. I didn’t count fellowships internal to our university as external. So we maybe had one Fulbright in 2006 out of 35-40 active students. If I had counted special fellows (Dean’s fellows, minority fellows, University fellows), it would have been 40% of students. I’m guessing there was a lot of variation in how chairs filled out the surveys — and even in whether they completed some sections of the nightmarish operation.

  4. The instructions for item E8 in the Program Questionnaire reproduced in the NRC’s Guide to Methodology note that: “Financial support is funding provided by your institution or program or by an external funding agency or organization. It does not include personal, spouse, or family support, wages from work unrelated to the program, or loans.”

    That makes it sound like external support is not just funding from outside the department but from outside the institution.

    A separate measure, “Percent Students Receiving Full Support in the First Year (Fall 2005),” captures “all types of support” – both internal and external.

    I’m not sure of the relative weight given to these two measures in the rankings calculations.

  5. Note that international students are not allowed to apply for NSF fellowships (they can apply for the dissertation grant with their advisor as PI, but can’t get first-year funding), so the number of students with external fellowships may also be a measure of the provincialism of the program.

  6. Would it be that difficult to collect about 10 simple stats from chairs as they conduct their reputation rankings? Chairs could then consult the previous year’s statistics as they rank programs. This would allow chairs to make more informed decisions, while avoiding the uselessness of the NRC rankings.

    Would it be that difficult to collect and centralize the following statistics: articles/fac; big 3-5 articles/fac; books/fac; the same for grad students, plus an indicator or two about recent job placements? After they’ve been collected for a while, you could also have 3-year averages. It’s imperfect, but it would probably be a big improvement over what we currently have going on.

    1. Big 3-5? I’ve never heard of a “big 5” before. I don’t even think a “big 3” is consistent with contemporary disciplinary market indicators, although I understand it’s a notion with inertia. There’s a big 2, after which there is a good deal of ambiguity.

      1. If by “big ‘3’ is [not] consistent with contemporary disciplinary market indicators” you mean Social Forces ranks 23rd among sociology journals for impact factor on ISI’s Web of Knowledge, I must say, why’d you have to bring that up?

      2. Big 2-5 can be left for someone else to decide. As you said, there’s ever-changing debate over the status of ARS, Social Forces, and (sometimes) Social Problems articles. The point, however, is that if chairs had more info they might be able to make more informed decisions. Big 5 is probably better than Big 0.
