
Evaluation Time!

Steven Volk, December 6, 2015

The debate over the value of Student Evaluations of Teaching (SETs) is a long one, which I have reported on a number of times (see here and here, among others). As we move into the last week of the semester, I’d like to suggest two additional approaches to end-of-semester evaluations that can help both you and your students think about the learning that occurred in your classes. I’ll also include a “guide” I wrote in 2010 for reading your SETs when they are returned to you after grades are in.

What Helped Your Learning?

Students enter the “Natio Germanica Bononiae,” University of Bologna (15th century). Public domain. Wikimedia

SETs are largely about how students experienced your course, and so the questions focus on issues of organization, pacing, clarity, grading, etc. As numerous articles have pointed out, Student Evaluations of Teaching don’t tell you about student learning, and they provide very little information to suggest what it is you are doing to support (or hinder) the learning that goes on in the class. Linda Shadiow and Maryellen Weimer, writing in Faculty Focus on Nov. 23, suggest a series of questions that can help foreground student learning. They offer fairly simple sentence stems for students to complete. For example,

  • It most helped my learning of the content when…because…
  • It would have helped my learning of the content if…because…
  • The assignment that contributed most to my learning was…because…
  • The reading that contributed the most to my learning was…because…
  • The kinds of homework problems that contributed most to my learning were…because…
  • The approach I took to my own learning that contributed the most for me was…because…
  • The biggest obstacle for me in my learning the material was…because…
  • A resource I know about that you might consider using is…because…
  • I was most willing to take risks with learning new material when…because…
  • During the first day, I remember thinking…because…
  • What I think I will remember five years from now is…because…

Shadiow and Weimer recommend that faculty complete the same sentences as well. Having (almost) completed the semester, we probably have a good idea which assignments worked from our point of view and which didn’t; which readings brought out the most in discussion and which didn’t; which homework assignments stretched student learning and which drew basically “meh” responses. Comparing our answers with our students’ can be revealing (or, perhaps, horrifying!).

These questions can be added to the bottom of the SETs you will be handing out, or included on an additional sheet which, still anonymously, can be returned directly to you rather than being tabulated by the department administrative assistants or becoming part of your official file. Once you have let some time pass (see “SETs for Beginners,” below), you can look at them, particularly before preparing classes for next semester.

Student Self-Assessment

Cours de philosophie à Paris, Grandes chroniques de France; Castres, bibliothèque municipale – late 14th century. Public domain. Wikimedia

David Gooblar, in his “Pedagogy Unbound” blog for the Chronicle of Higher Education, recently wrote a very useful post on student self-assessment. [Note: For a good introduction to student self-assessment, see Heidi Andrade & Anna Valtcheva, “Promoting Learning and Achievement Through Self-Assessment,” Theory Into Practice 48:1 (2009): 12-19.] Ask your students to reflect on the learning strategies they used over the course of the semester, and to consider their own habits of thinking. “Explain that the act of reflection is itself a valuable learning strategy,” he writes. Ask them how they studied for tests or what they did to prepare for their assignments: what worked for them and what didn’t? The answers you get may be quite basic, but the more often we ask students to reflect on their own learning, the more practiced at it they will become.

Gooblar calls attention to the work on metacognition undertaken by Kimberly D. Tanner, a professor of biology at San Francisco State University, particularly the “retrospective postassessments” she uses. Tanner asks students to think about what they now know about the subject (in general terms) compared to what they knew at the start of the semester. (For example: “Before this course, I thought neoliberalism was _____. Now, I think neoliberalism is _____.”) You can ask students to write about the specific ways they have changed their thinking about the topics you covered. Or you can have them write a letter to a future student of the course, reflecting on its high and low points, and telling incoming students what they wish they had known going into the class and what they wish they had done differently over the course of the semester.

Since I have my students set out their learning goals in a short paper at the start of the class – which I collect and keep – I return these to them at the end of the semester and have them reflect, one final time, on which of these goals they feel they achieved and which they didn’t, what they did to reach their goals, and what they will do differently in their next classes. Whatever method you choose, the end of the semester is a good time to encourage students to reflect on the journey they have undertaken with you and how they are different at the end of the trip.

“SETs for Beginners” (first written Feb. 7, 2010 and slightly updated here)


A meeting of doctors at the University of Paris. From the “Chants royaux” manuscript, Bibliothèque Nationale, Paris.

By now, as you well know, there is a very large literature on student evaluations of teaching (SETs). A lot of the research points to the validity and reliability of these instruments in terms of measuring very specific areas of a student’s experience in a completed class. Some writers continue to argue that they are a worthless exercise, citing evidence that evaluations handed out after the first day of a class will yield strikingly similar results to surveys conducted at the end of the semester, that they are a measure of the entertainment-value of a class, not any value added in terms of student learning, or that they can be easily influenced (just hand out doughnuts with the questionnaires). I have come to accept three basic realities about the use of SETs:

(1) on a broad level, they help to identify outliers – a class which seems to have been extremely successful or highly troubled; (2) they should not be used as the only evaluation of teaching (informed peer evaluation following a standard observation protocol and an examination of course syllabi by experts in the field are strongly recommended as well); and (3) SETs are not a substitute for an assessment of student learning in a course. But, when read carefully, they can tell you something about your teaching on a very specific level. The question is how to read them to get that specific information, and on that score, there is very little literature.

So here’s a first attempt at this question, a kind of “SETs for Beginners.”

Because of college rules, which are likely similar everywhere, we receive our teaching evaluations back only after grades are turned in. (You should consult with your department chairs for information as to how and when to hand out SETs, whether your students can complete them online or only in class, and how they are to be collected if you do them in-class.) At some point by mid-January or mid-June, after our hard-working administrative assistants have tabulated and organized the data, we are informed that our SETs are ready to be picked up!

First decision: do you rush in to get them, play it cool, like a cat walking around a particularly lovely kibble before pouncing, or pretend that they aren’t there until, sure enough, you have forgotten all about them? I usually take the middle route on this, but, in any case, I certainly won’t pick them up on a day when the auto shop called to tell me that the problem’s in the drive train or the best journal just rejected my article. Another hit that day, I just don’t need.

When I do finally make the move, I take them to my office, put them on my desk, and pretend that they aren’t there while I take a look at my email, which I already checked 90 seconds earlier. Enough, already. I open the folders and read, rapidly, the overall numbers: not what I hoped for, better than it could have been, whatever… Then I put them away for at least a day or two. I don’t think I’m ready to take them on board just yet, whether the numbers are good, bad, or indifferent. I go back to the email, the article, the gym, until I’ve absorbed the larger quantitative landscape and feel mentally prepared to explore the terrain a bit more carefully.

When I do return to the SETs, I give myself the time (and space) to read them carefully (and privately). I don’t pay much attention to any individual numbers – those have been summarized for me, but I read the comments with care… and a mixture of interest, confusion, skepticism, and wonder. How is it that the student who wrote “Volk is probably the most disorganized professor I’ve ever encountered” attended the same class as the one who commented, “This was a marvel of organization and precision”? What is one to make of such clearly cancelling comments? So here are my tricks for trying to give my SETs the kind of close reading that I think they merit:

  • Do not dwell on the angry outliers. That’s advice more easily given than taken. I have read enough teaching evaluations, my own as well as others’, to know that there are some students who just didn’t like our classes and have not figured out any helpful or gracious way to say so. The fact that these are (hopefully) a tiny minority, directly contradicted by the great majority of other comments, doesn’t seem to lessen their impact or keep us from obsessing about them. (I can still quote, verbatim, comments that were written in 1987!) These bitter communiqués probably serve a purpose for the student, but they really don’t help you think at all usefully about your teaching. Let ’em go.
  • Evaluate the “cancellers.” What do you do when three students thought you were able to organize and facilitate discussions with a high degree of skill and three thought you couldn’t organize a discussion to save your life? These are harder to deal with and can add to the cynicism of those who think the whole SET adventure is a waste of time. For the “cancellers,” I try to figure out a bit more about them to see whether they represent some legitimate (i.e., widespread) concern about the class or not. Is one side of the debate generally supported by the numbers? Do I score lower in the discussion-oriented questions than in other areas, lower than in previous iterations of the course, or lower than I would have really wanted? Does the demographic information that I know about the student evaluator add context that is useful and that I should take on board? I am more likely to trust comments from seniors than from first-years. Does it appear that there is a striking gender or racial difference in how students respond to specific questions? That is extremely important information to learn from, and it is why we collect demographic information from our student respondents. A careful reading of this information can help us understand what is going on in our classes on a more granular level.
Brian Carson, Backyard Flowers in Black and White, No. 2. Flickr Creative Commons

And, if none of the above helps me think about why something I have done works for some and not others, I make a note to myself to ask students explicitly about it the next time I offer the class: Please, tell me if you don’t think these (discussions, paper assignments, readings, etc.) are working so that I can consider other approaches.
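For those who tabulate their own numbers, the kind of granular, by-group reading described above can be sketched in a few lines. This is a minimal illustration with invented data; the demographic categories and the 1-5 rating scale are assumptions for the example, not a description of any actual SET form.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical responses to one SET item ("facilitates discussion"),
# each paired with a demographic field collected on the form.
responses = [
    ("first-year", 3), ("first-year", 2), ("first-year", 3),
    ("senior", 5), ("senior", 4), ("senior", 5),
]

# Group the scores by the field we want to inspect.
by_group = defaultdict(list)
for group, score in responses:
    by_group[group].append(score)

# A large gap between group means may signal that an approach is
# working for some students and not for others.
for group, scores in sorted(by_group.items()):
    print(f"{group}: mean {mean(scores):.2f} (n={len(scores)})")
```

The same two-step pattern (group, then summarize) works for any demographic field the form collects, and for spotting the gender or racial differences mentioned above.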

  • Focus on those areas that seem to be generating the most concern from students. Are they having a hard time figuring out how the assignments relate to the reading? Do a considerable number worry that they aren’t getting timely or useful feedback from you? Is there widespread upset that classes run too long and students don’t have enough time to get to their next class? For each area where I find a concern that has reached “critical mass” and is not just an angry-outlier grievance, I consider what I think about the criticism and whether, given my own goals in the course, I find it legitimate. For example, I will pay no attention to students who complain about the early hour of my class. Getting work back on time depends on the size of the class and what I have promised: in a 50-person class, if I say I’ll return work within two weeks, and do so, then I won’t think much about students who complain that I only returned their work two weeks after they turned it in.

Other issues force me to think more about how I teach and what impact that has on student learning. What of students who protest that “there’s too much work for a 100-level class”? I get a lot of those comments, and it makes me think: why do students think a 100-level class should involve less work than a 300-level class? Do we, the faculty, think that a 100-level class should be less work than a senior seminar? Certainly, upper-level classes will be more “difficult” than 100-level classes (i.e., they demand that the students have acquired significant prior knowledge and skills needed to engage at a higher level), but should there be any less work involved in the entry-level class? I, for one, don’t think so – and so I won’t change that aspect of the course.

But, ultimately, when student comments point to a real area of concern, to something I am doing in the class that negatively impacts how students are able to learn, then I need to regard that issue with the seriousness it deserves. I will think about how I might correct the problem, and, often, the best way to do that is to talk to my colleagues and find someone in my department or outside who can read my SETs with me. That has served me well every time, and it points to the ultimate utility of SETs for the individual faculty member on a formative level: they can help us design our teaching to more effectively promote student learning.

  • Finally, since Oberlin really does attract faculty who care about their students and the challenges of student learning, then my guess is that your evaluations are generally good, and you need to take great satisfaction in that (see: “Don’t dwell on the angry outliers”). I have never failed to find some comments on my own evaluations that remind me yet again about how perceptive our students can be and how fortunate I am to be here.

Can We Remove the Risk from Adopting New Teaching Approaches?

Steve Volk, February 15, 2015

Last week I wrote about preparing students for active learning. This week I want to present one recommendation for helping interested faculty prepare more active-learning designs for their classrooms. I should start by saying that faculty assuredly don’t need advice from me on how to construct remarkable, active learning environments, since this kind of approach happens in classrooms around the campus on a daily basis. I plan to showcase some examples as “Articles of the Week” entries very soon. Rather, my worry is that some faculty will hesitate to adopt such approaches out of concern for how they might be received by students.

Roger, Risk Management, James Hotel Lobby, NYC (CC)

And that’s not an idle concern. The literature seems to suggest that faculty might be evaluated more negatively in active learning contexts than in more traditional lecture courses. The Center for Teaching Excellence at Cornell cautions, in a rather understated fashion, that “Some students may not accept new learning activities with complete ease.” A 2011 study by Amy E. Covill [“College Students’ Perceptions of the Traditional Lecture Method,” College Student Journal 45:1 (March 2011)] goes further, finding that “many students may resist, and even be hostile toward, teachers’ attempts to use active learning methods.” Eric Mazur, the Harvard physics professor who has become something of a celebrity in the field of peer instruction and active learning, commented that his approach draws “a lot of student resistance.” He adds, “You should see some of the vitriolic e-mails I get. The generic complaint is that they have to do all the learning themselves. Rather than lecturing, I’m making them prepare themselves for class—and in class, rather than telling them things, I’m asking them questions. They’d much rather sit there and listen and take notes.”

While there is not a lot of reliable research on the subject, in one careful study of a large, introductory biology course (“A Delicate Balance: Integrating Active Learning into a Large Lecture Course”), the authors found that when comparing “traditional” (mostly lecture) courses with more active courses, “student evaluations of the instructors (on items such as overall teaching ability, knowledge of subject, respect and concern for students, how much learned, the course overall) were significantly and substantially higher in the traditional than in the active section” (my emphasis).
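To make concrete what a “significantly and substantially higher” rating gap looks like, here is a small sketch that compares mean ratings between two sections and computes an effect size. The numbers are invented for illustration; this is not the cited study’s data or its statistical method.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical overall-rating scores (1-5 scale) from two sections
# of the same course; illustrative values only.
traditional = [4.5, 4.7, 4.3, 4.6, 4.8, 4.4, 4.6, 4.5]
active      = [3.9, 4.1, 3.8, 4.0, 4.2, 3.7, 4.0, 3.9]

# Raw difference in mean ratings between the two sections.
diff = mean(traditional) - mean(active)

# Cohen's d: the mean difference scaled by the pooled standard
# deviation; values above ~0.8 are conventionally called "large."
pooled_sd = sqrt((stdev(traditional) ** 2 + stdev(active) ** 2) / 2)
cohens_d = diff / pooled_sd

print(f"mean difference: {diff:.2f}")
print(f"Cohen's d: {cohens_d:.2f}")
```

The point of scaling by the pooled standard deviation is that a half-point gap on a five-point scale means much more when ratings cluster tightly than when they are spread out.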

CBE Life Sciences Education

Junior Faculty, Risk-Taking, and Pedagogy

For junior faculty in particular, the risks associated with adopting more active learning techniques and moving away from standard lectures can be considerable. Many, perhaps most, will move ahead with such pedagogies regardless, because they feel comfortable with them and have found that they produce the deepest learning for their students. Some may not want to go there because they simply don’t feel comfortable using such teaching approaches. A few might be cautioned by their departments to “go slow,” waiting until after a tenure decision before shaking their students’ apple carts too forcefully. And some are sufficiently worried about their students’ reactions that they will choose to wait the 7 years until they feel less vulnerable.

Whatever the situation, it seems that a case can be made for creating a “risk-free” zone for junior faculty who are interested in introducing more active learning techniques into the mix of their teaching. This is not to say that such faculty will no longer be responsible for what goes on in their classes, a free pass of sorts equivalent to the student demand that no one should fail the course. In fact, if anything, faculty will be required to be more intentional about their pedagogic choices and to assess the results of their methods. What it will mean is that evaluation of the course will be untethered from the traditional Student Evaluations of Teaching (SETs).


Here’s how such a proposal could work. I encourage others to chime in to clarify and improve it.

The Proposal

  1. Each semester or year (the choice between them depending on available resources), pre-tenure faculty will be allowed to designate one course as an “innovative pedagogy” class. Instructors would prepare a brief (2-3 page) prospectus of the basic pedagogic innovations they plan to employ in the course, what informs their approach (citing some of the literature that supports the approach), some examples of how this pedagogy would look in action (perhaps a description of one week of classes), and how they intend to assess the impact of their approach on student learning in the class. Interested faculty would be able to get advice and feedback at regularly scheduled workshops organized by CTIE.
  2. Proposals would be approved by department/program chairs, who, in turn, would send their approval to the dean’s office and to the director of CTIE to allow further consultation and formative observation if requested.
  3. Instructors would be expected to consult with CTIE (or other faculty recommended by CTIE) over the course of the semester.
  4. At the end of the semester, faculty would assess their courses along the lines traced out in their original (or revised) proposal and would also distribute standard SET forms to their students. These would be collected and stored in the stipulated fashion, and would go to the faculty member when grades were turned in. But they would only be sent to the College Faculty Council if so requested by the faculty member.
  5. In lieu of, or together with, the standard SET forms, the faculty member would prepare a short narrative evaluation of the course including the original design proposal, any changes made, the instructor’s evaluation of student learning and engagement in the course based on their own assessment materials, and any recommendations for changes to the course design in the future.
There are, no doubt, many issues with the proposal and many ways it could be strengthened. But encouraging junior faculty to experiment with their teaching approaches in an informed, but not unduly risky, fashion seems worth exploring further.

Students Evaluating Teaching – The Unending Conversation

The New York Times Magazine for September 21, 2008 is the “College Issue,” with a cover title, “It’s All About Teaching.” Among the articles is “Judgment Day” by Mark Oppenheimer. Oppenheimer points out that although there have been over 2,000 studies on the value of student teaching evaluations (i.e., those evaluations which all students are required to fill out at the end of our classes), the research on their utility is still mixed. The article calls attention to the way the evaluations are subject to particular gender/race biases, how they can reward “entertainment value” over good teaching, how it is difficult to compare ratings in the sciences/math (i.e., very vertical curricula) with those in the humanities and social sciences, etc. Oppenheimer closes his article (spoiler alert!) with the following: “When students in the 1960s demanded more say in academic governance, they could not have predicted that their children would play so outsize a role in deciding which professors were fit to teach them. Once there was a student revolution, which then begat a consumer revolution, and along with more variety in the food court and dorm rooms wired for cable, it brought the curious phenomenon of students grading their graders. Whether students are learning more, it’s hard to say. But whatever they believe, they’re asked to say it.”

What do you think? SETs are required at Oberlin, and we have put a fair amount of time into trying to make them more reliable and uniform. Are there better ways to evaluate teaching? What would you like to see (other than superlative comments from students on ALL your classes)?