By now, as you well know, there is a very large literature on student evaluations of teaching (SETs). A lot of the research points to the validity and reliability of these instruments in terms of measuring very specific areas of a student’s experience in a completed class. Some writers continue to argue that they are a worthless exercise, citing evidence that evaluations handed out after the first day of a class will yield strikingly similar results to surveys conducted at the end of the semester, that they are a measure of the entertainment-value of a class, not any value added in terms of student learning, or that they can be easily influenced (just hand out doughnuts with the questionnaires). I have come to accept three basic realities about the use of SETs: (1) on a broad level, they help to identify outliers – a class which seems to have been extremely successful or highly troubled; (2) they should not be used as the only evaluation of teaching (peer evaluation and a study of course syllabi are strongly recommended as well); and (3) SETs are not a substitute for an assessment of student learning in a course. But, when read carefully, they can tell you something about your teaching on a very specific level. The question is how to read them to get that information out, and on that score, there is very little literature. So here’s a first attempt at this question, a kind of “SETs for Beginners.”
Because of college rules, we get our teaching evaluations after grades are turned in. At some point in mid-January and mid-June, then, once our hard-working administrative assistants have tabulated and organized the data, we learn that our SETs are ready to be picked up. First decision: do you rush in to get them, play it cool, like a cat circling a particularly lovely bowl of kibble before pouncing, or pretend that they aren’t there until, sure enough, you have forgotten about them? I usually take the middle route on this, but, in any case, I certainly won’t pick them up on a day when the auto shop called to tell me that the problem’s in the drive train, the best journal just rejected my article, and I poured curdled milk on my granola. Another hit that day, I just don’t need.
When I do finally make the move, quickly and resolutely – no backtracking allowed – I take them to my office, put them on my desk, and pretend that they aren’t there while I take a look at my email, which I already checked 90 seconds earlier. Enough, already. I open the folders and read, rapidly, the overall numbers: not what I hoped for, better than it could have been, whatever… Then I put them away for at least a day or two. I don’t think I’m ready to take them on board just yet, whether the numbers are good, bad, or indifferent. I go back to the email, the article, the gym, until I’ve absorbed the larger quantitative landscape and feel mentally prepared to explore the terrain a bit more carefully.
When I do return to the SETs, I give myself the time (and space) to read them carefully (and privately). I don’t pay much attention to any individual numbers – those have been summarized for me, but I read the comments with care… and a mixture of interest, confusion, skepticism, and wonder. How is it that the student who wrote “Volk is probably the most disorganized professor I’ve ever encountered” attended the same class as the one who commented, “This was a marvel of organization and precision”? What is one to make of such clearly cancelling comments? So here are my tricks for trying to give my SETs the kind of close reading that I think they merit:
- Do not dwell on the angry outliers. Advice easier given than observed. I have read enough teaching evaluations, my own as well as others’, to know that there are some students who just didn’t like your class and have not figured out any helpful or gracious way to say that. The fact that these are (hopefully) most often a tiny minority and are directly contradicted by the great majority of other comments doesn’t seem to decrease their impact, or to stop us from obsessing about them. (I can still quote, verbatim, comments that were flamed my way in 1987!) These bitter communiqués probably serve a purpose for the student, but they really don’t help you think at all usefully about your teaching. Let ‘em go.
- Evaluate the “cancellers”. What do you do when three students thought you were able to organize and facilitate discussions with a high degree of skill and three thought you couldn’t organize a discussion to save your life? These are harder to deal with and can add to the cynicism of those who think that the whole SET adventure is a waste of time. For the “cancellers,” I try to figure out a bit more about them to see if they represent some legitimate (i.e., widespread) concern about the class or not. Is one side of the debate generally supported by the numbers? (Do I score lower in the discussion-oriented questions than in other areas, lower than in previous iterations of the course, or lower than I would have really wanted?) What does the information that I know about the student evaluator tell me that is useful? (I will trust comments from seniors who are assessing the class on the basis of others they have taken more than comments from first-years; does it appear that there is a striking gender difference in terms of how students respond to specific questions; do the readings work for majors but not for non-majors? We collect information about student respondents because this can help us understand what is going on in our classes on a more granular level.) And, if none of the above helps me think about why something I have done works for some and not others, I make a note to myself to explicitly ask students the next time I offer the class: Please, tell me if you don’t think these (discussions, paper assignments, readings, etc.) are working so that I can consider other approaches.
- Focus on those areas that seem to be generating the greatest concern from students. Are they having a hard time trying to figure out how the assignments relate to the reading? Do a considerable number worry that they aren’t getting timely or useful feedback from you? Is there widespread upset that classes run too long and students don’t have enough time to get to their next class? For each of the areas where I find a concern that has reached a “critical mass” level and is not just an angry-outlier grievance, I consider what I really think about the criticism and whether, as a teacher with some experience under my belt, I find it legitimate. For example, I will pay no attention to students who complain about the early hour of my class. Getting work back on time depends on the size of the class and what I have promised: in a 50-person class, if I say I’ll return work within two weeks, and do so, then I won’t think much about students who complain that I only returned their work two weeks after they turned it in.
Other issues force me to think more about how I teach and what impact that has on student learning. What of students who protest that “there’s too much work for a 100-level class”? I got a lot of those comments the last time around, and it made me think: why is it that students think a 100-level class should be less work than a 300-level class? Do we, the faculty, think that a 100-level class should be less work than a senior seminar? Certainly, upper-level classes will be more “difficult” than 100-level classes (i.e., they demand that the students have acquired significant prior knowledge and skills needed to engage at a higher level), but should there be any less “work” involved in the entry-level class? I’ve had fun thinking about that.
But, ultimately, when student comments suggest a “real” area of concern, i.e., that I have done something in the class that negatively impacted how students were able to learn, then I need to regard that issue with the seriousness it deserves. I need to think about how I might correct the problem, and, often, the best way to do that is to talk to my colleagues. That has served me well every time, and it points to the ultimate utility of SETs for the individual faculty member on a formative level: they can help us design our teaching to more effectively promote student learning.
Finally, since Oberlin really does attract faculty who care about their students and the challenges of student learning (as well as their own scholarship and many other things), my guess is that your evaluations are generally good, and you should take great satisfaction in that (see: “Do not dwell on the angry outliers”). I have never failed to find some comments on my own evaluations that remind me yet again how perceptive our students can be and how fortunate I am to be here.
Let us know what you think about your own student teaching evaluations. How do you read them? Add your comments below and join the conversation.