I’ve written before about my work through EPC on grading policy. After a year’s worth of consideration, we are presenting a resolution tomorrow for UNC to report grade distributions on transcripts for each class, and to report grade patterns to faculty each semester.
Two colleagues wrote me a detailed and thoughtful message about the proposal, and while I do not agree with their position, I asked and they agreed to have me post it to scatterplot for further discussion. Their message is below the break; my response and further discussion are posted as the first comments on the post.
We are writing in response to Faculty Council Resolution 2010‐3 on “Enhanced Grade Reporting.” We appreciate the work of the committee, but have serious reservations about the proposal. We hope you’ll distribute/forward our email to members of the Faculty Council so that this view can become part of the conversation. Instead of adopting the proposal, we call for the start of a serious conversation about the *purpose* of grades and the factors that may contribute to variations in grade distributions across time and courses.
Like the Achievement Index, reporting course and section grade distributions along with student individual grades will presumably allow viewers of the transcript (employers, award committees) to assess the student’s performance against those of her/his peers. This goal assumes that our primary purpose as educators is to differentiate between students and that grades are the primary markers of this differentiation.
An *alternative view* is that our primary goal is to successfully educate our students in the substance and methods of our respective fields, and grades should be used to mark the performance of the student against predetermined standards of proficiency and learning objectives that instructors painstakingly develop for each course. In this view, the loftiest goals to which we can aspire would be to set appropriately tough standards that are in line with the expectations of our profession, and then work diligently and creatively to help all of our students reach those goals. Achieving these goals would result in a better educated populace but would certainly not eliminate grade compression. In fact, accomplishing this goal would *fairly* result in lots of As. This perspective on grading is called “the mastery method” among education scholars.
“Grade inflation” implies that higher grades are now being awarded for work comparable to that done in the past. Supposedly our standards for performance have declined while student performance has remained relatively stable or declined. We have little doubt that slipping grading standards *could* be *one* of the factors explaining variations in grade distributions across time and courses. This is certainly an issue that deserves additional study, starting with serious conversations, both within and across departments, about the establishment of rigorous learning objectives and fair grading standards.
However, educational standards have surely increased over time as well. For example, how many professors in the 1970s required their students to write well, analyze data, lead discussion sections, AND perform community service? Moreover, there are many other factors besides changing standards that likely contribute to variations in grade distributions across time. Indeed, increases in average grades might result from the development of more sophisticated learning and teaching tools, stronger economic incentives among students to earn good grades, or improvements in the proficiency of teachers to convey expectations and imbue their students with necessary skills.
Increases in average grades may also reflect powerful selection processes. With improvements in the distribution of information on the content of courses and the teaching style of specific instructors, students are now better able to select courses that fit their interests and strengths, allowing them to perform better. In light of these changes, it is quite likely that actual student performance and proficiency have increased over time, possibly at a faster pace than increases in learning objectives and grading standards. In this sense, it would be inaccurate to conceive of increasing grade averages as true “grade inflation,” and we would conceive of different remedies for the issue, assuming that increasing grade averages, by themselves, constitute a problem in need of remedy.
Similarly, variations in grades across courses and sections, either in the cross-section or across time, are likely a product of a wide range of factors. For example, the strongest students were choosing very different majors twenty years ago than they are today, and there has been a similar shuffling of students between majors in the intervening years. Larger increases in grades in some disciplines than in others might just reflect the fact that selection processes have resulted in stronger increases in the quality of students who choose different majors. Similarly, variations in professional emphasis on pedagogy might result in the adoption of stronger teaching methods in some fields than in others, and we would expect variations in student performance to emerge as a result, even if all disciplines and courses have similarly rigorous standards. Even changes in grade distributions for individual faculty members could reflect the adoption of improved teaching techniques and the strengthening ability of individual instructors to inspire their students. Indeed, these individual-level improvements in teaching performance are exactly what we expect from our faculty development efforts.
As with any policy rooted in incomplete information, there is a strong chance that the proposed actions would do more harm than good. Encouraging the comparison of an individual’s grade relative to the distribution of grades for the course, either through some kind of Achievement Index or through the steps currently on the table, will necessarily punish the students who do well in classes in which their classmates also succeeded. As intended, it will devalue the grades of those students who habitually seek out the instructors who construct overly easy courses and dole out easy As to undeserving students. But it would also punish those students who take courses from excellent instructors who strive to help all of their students achieve lofty goals. It would also increase status competition among students and make them less interested in working together, undermining the hallmarks of participatory learning and the teaching of cooperation and teamwork. At the same time, the steps in the current proposal will embolden faculty members whose courses have what might be deemed sufficiently low grade distributions, giving them no incentive to consider whether the fact that so few students earn high marks in their classes might reflect their inability or unwillingness to assist students in achieving excellence. We are very concerned about these consequences.
All of this suggests that the factors behind any increase in average grades and variations across disciplines, courses, and sections are exceedingly complex and cannot be easily equated with a grade-inflation phenomenon or reduced to a simple slipping-standards argument. All of these factors deserve additional attention if we are to develop effective remedies for any flaws in the current grading system. Rather than going forward with Faculty Council Resolution 2010‐3 on “Enhanced Grade Reporting,” we recommend rigorous study of the factors affecting changes in grade distributions, as well as increased conversation about pedagogical goals and the *purpose* of grades in the attainment of those goals.
Sherryl Kleinman, Professor
Department of Sociology
Kyle Crowder, Professor
Department of Sociology