Course evaluations are broken. The current system by which students assess professors at Duke offers all the relevant stakeholders—students, professors and University administrators—precious little value. Specifically, we feel the timing, venue, content and perceived importance of course evaluations are all lacking. To this end, we propose a number of improvements to the current system.
First, we feel the timing and venue for course evaluations are flawed. Conducting evaluations in class near the last day of the semester is almost an invitation for students to fill them out haphazardly. At that point, students are likely to be thinking about grades rather than course material, and eager simply to get the evaluations over with. Evaluations should instead be conducted online, perhaps during the middle of the semester. To ensure participation, access to grades on ACES should be made contingent on completing the evaluations.
The content of the evaluations can also be improved. Most of the current evaluation form consists of numerical scales. Is it really reasonable or helpful to have students rate a professor on a scale of one to five on questions like “How much did this class contribute to your progress on developing skills in oral expression?” Dry questions lead to lazy and highly subjective answers, which probably reflect a student’s like or dislike of the professor more than the question at hand.
Instead, course evaluations should focus on qualitative metrics. One possibility is to have a committee of students and faculty create a list of the 20 most significant qualities in a professor, such as “provides good feedback.” Course evaluations could then ask students to pick from the list the three best and worst qualities of each professor. This would provide even the best professors with useful feedback, and allow students to pick professors who meet their needs. It would also allow administrators to make more informed promotion decisions. For example, a good candidate for director of undergraduate studies would need to have received plenty of “approachability” feedback.
Similarly, the comments section on the evaluations should be hard-hitting and public. Reading comments offers key insights into a professor’s strengths and weaknesses that might not be gleaned from numbers alone. If the most salient or frequently received comments were viewable online, both students and faculty would start caring about evaluations much more.
Finally, the University must actually demonstrate its commitment to the course evaluations. Currently, almost half of professors opt out of publicly displaying evaluations for some or all of their courses. Other professors do not even bother to turn in the evaluation forms. This is unacceptable. Only when the University begins to take course evaluations seriously can it expect students to do the same.
A separate but important problem is the documented correlation between positive course evaluations and generally high grades. This cannot be addressed solely through improvements to course evaluations; it must instead be dealt with through a University initiative to make grading distributions more consistent across all undergraduate classes. Professors should certainly not be punished for fair but harsh grading.
All told, course evaluations are virtually meaningless now, but they can be an incredibly powerful tool for assessment and feedback. A major overhaul is overdue.