We had a good discussion of the NRC today. An important factor in the methodology that has not been highlighted in the previous scatterplot discussions is the decision to standardize all metrics by dividing by the number of faculty. The people with the highest numbers of publications, awards, etc., tend to be a relatively small number of highly productive senior people who have had a whole career to produce. If a department consists only of, say, five such people, it will score very high compared to another department with five highly productive senior people that also has a lot of assistant professors. That is, the metric inherently penalizes younger departments and, to a lesser extent, larger departments. If your goal were to score well on the NRC scheme, you would do best with a small faculty of only full-professor stars, complemented by a large adjunct faculty (who don't count in the statistics) to do the undergraduate teaching. Departments are penalized for hiring assistant professors rather than adjunct lecturers. It also appears that there is no premium in the ratings for being strong in a wide variety of subareas of a discipline.
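To make the effect concrete, here is a minimal sketch with entirely hypothetical publication counts: the per-capita normalization described above gives a lower score to a department that contains the same five senior stars plus five productive-but-junior assistant professors, even though its total output is strictly larger.

```python
def per_capita_score(pub_counts):
    """Average publications per faculty member (the NRC-style
    divide-by-faculty normalization described in the text)."""
    return sum(pub_counts) / len(pub_counts)

# Department A: five senior stars only (hypothetical career totals).
dept_a = [120, 110, 100, 95, 90]

# Department B: the same five stars plus five assistant professors
# who have had only a few years in which to publish.
dept_b = dept_a + [8, 6, 5, 4, 3]

score_a = per_capita_score(dept_a)  # 103.0
score_b = per_capita_score(dept_b)  # 54.1

# B has strictly more total output than A, yet its per-capita
# score is roughly half of A's: hiring the junior faculty
# lowered the department's rating.
```

The numbers are invented for illustration, but the direction of the effect follows directly from the arithmetic: adding any faculty member below the current average pulls the per-capita score down.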
One colleague made the point that, at a minimum, one needs to do "apples to apples" comparisons: for example, comparing the publication records of full professors with each other, or the records of people who received their PhDs in the 1980s.
This comes on top of the problems raised in prior discussions: erroneous data, not counting books at all, counting all articles the same regardless of quality, ignoring differences in subfield size that affect citation counts, evaluating graduate students by GREs, and so on. It probably accounts for some of the peculiarities in the regression scores.
We are a large department with a lot of assistant professors, so we are particularly penalized by the regression rating scheme the NRC used, but we recognize that reputation surveys tend to reward sheer size and have tremendous inertia.