“You can’t measure what we teach.”

Inside Higher Ed – Dec. 4, 2008

“You can’t measure what we teach.”

“The results [of what our students gain in the classroom] won’t be known for 10 years.”

“You’re just going to use the information to evaluate us.”

Those are just a few of the responses that Orin L. Grossman, academic vice president at Fairfield University, said he has heard from faculty members — especially in the humanities — who resist the notion that they and their colleges must find ways to measure how, and how much, their students learn in the classroom. “Their view tends to be that we should simply trust the faculty, and that the role of the administration is to keep scrutiny of them at arm’s length,” Grossman said.

His comments came Wednesday during a session on assessing student outcomes in the humanities at the annual meeting of the New England Association of Schools and Colleges, the regional accrediting agency for that part of the country. The meeting took place in Boston.

The session featured a panel of three humanists with views that were widely divergent in some ways: Grossman, a Gershwin scholar and senior academic administrator who believes higher education needs to get with the program on accountability for student learning outcomes; Ellen McCullough-Lovell, president of Marlboro College, which uses several measures of student learning but has an educational philosophy that makes its brand of assessment virtually impossible to transfer to other colleges; and David Scobey, a historian at Bates College who acknowledged being a humanities professor who has “said every one of those whining comments” that Grossman recalled, “and believes them.”

Despite those diverging starting points, the discussion revealed quite a bit more common ground than any of the panelists probably would have predicted. Let’s be clear: Where they ended up was hardly a breakthrough on the scale of solving the Middle East puzzle.

But there was general agreement among them that:

  • Any effort to try to measure learning in the humanities through what McCullough-Lovell deemed “[Margaret] Spellings-type assessment” — defined as tests or other types of measures that could be easily compared across colleges and neatly sum up many of the learning outcomes one would seek in humanities students — was doomed to fail, and should.
  • It might be possible, and could be valuable, for humanists to reach broad agreement on the skills, abilities, and knowledge they might seek to instill in their students, and that agreement on those goals might be a starting point for identifying effective ways to measure how well students have mastered those outcomes.
  • It is incumbent on humanities professors and academics generally to decide for themselves how to assess whether their students are learning, less to satisfy external calls for accountability than because it is the right thing for academics, as professionals who care about their students, to do.

“It’s in our hands — nobody is forcing us into overly prescriptive models or any one particular way at this point, and it’s our responsibility to respond to the public’s interest [in learning what value they’re getting for their tuition and tax dollars] by doing it ourselves,” said Grossman. “But the longer it’s delayed,” he warned, “the more over time the public will start saying, ‘What is really going on?’ and start pushing for the kinds of measures that nobody really wants.”

Wednesday’s session was part of the New England accreditor’s assessment forum, an event that has been attached for the better part of a decade to the annual meeting of the association’s Commission on Institutions of Higher Education. That fact alone, noted Barbara Brittingham, the commission’s president and director, challenges the frequent assertion by critics that colleges aren’t paying attention to how effectively they are educating their students.

But it is also true that the idea that colleges must measure the extent and depth of student learning is far from a fully embraced concept in higher education, and at this meeting. That is far more true, the University of New Hampshire’s Bruce Mallory said in introducing the panel, in the humanities, which are characterized by qualitative and analytic approaches, than in the sciences, which are “characterized by objective measurement, have more bounded notions of truth and fact, and for which the way we represent those bounded notions of truth and fact have been more quantitative.”

Fairfield’s Grossman, after provoking laughs with his litany of the humanists’ standard explanations for why measuring student learning is impossible in their domain (the ones that began this article), expressed frustration at the tendency of faculty members, “in extremis,” to pull out the mother of all reasons why they shouldn’t be assessed: academic freedom. He said he had taken to urging faculty members who define academic freedom to mean complete autonomy to re-read the American Association of University Professors’ 1940 statement on the concept, to realize that academic freedom was not a free pass from professional responsibility.

“It is not some kind of iron curtain faculty can draw around themselves to protect themselves from scrutiny or accountability,” Mallory, New Hampshire’s provost and executive vice president, said in reiterating Grossman’s argument.

McCullough-Lovell, the Marlboro president, distanced herself from her faculty members most skeptical about assessment, who — she said — believe that most assessment is “antithetical to the humanities,” which is designed to develop the almost unmeasurable skill of “discerning judgment.” She also cited the multiple ways that the tiny (330 student) Vermont institution measures its students’ learning: through commonly used measurements like the National Survey of Student Engagement, participation in experiments like the Wabash National Study of Liberal Arts Education, and requirements like the Clear Writing Program, which demands that all students submit a portfolio within their first two semesters to prove that they can, well, write clearly.

But McCullough-Lovell also said that she was not convinced that the type of assessment in which Marlboro engages would apply almost anywhere else, given that so much of it takes place in one-on-one settings between professors and students. If assessment is, as some believe, about trying to find ways to compare colleges and hold institutions accountable, Marlboro’s version of assessment probably wouldn’t qualify. “I think we could describe it, but what I’m worried about is how transferable it is to other places,” she said. Her implicit question: Would that disqualify it from some definitions of valid assessment?

Though he described himself at the start as “a bit of a skeptic and a Luddite” on the question of assessment, David Scobey, who directs Bates’s Harward Center for Community Partnerships, did not fall neatly into the pigeonhole of humanistic faculty member who rebuffs any effort to hold colleges accountable.

He agreed with the assertion, frequently put forward by the Spellings Commission and by many other observers of higher education, that “the public needs all kinds of good information about colleges, and we have obfuscated it.”

But on the matter of measuring student learning, especially in the humanities, he expressed reservations. Partly that grew from his nuanced and complex definition of what the humanities seek to impart to students, from the ability to engage in “meaning-making,” to a degree of “cosmopolitanism,” to a reflexive ability to assess themselves and the quality of their own learning. Those and many other “outcomes” of the humanities are difficult if not impossible to measure in “any form of high-stakes knowledge,” Scobey said, “even rich high-stakes knowledge like the Collegiate Learning Assessment,” which has become the test du jour in many circles.

Ultimately, “the question ‘How well are we doing educating our students in the humanities?’ is much closer to ‘How good is our marriage?’ than it is to ‘How good is this hotel’s service?’ ” he said. In other words, it tends more toward the subjective than the objective, is better assessed over the long term than in snapshots, and is difficult to compare, among other things.

The question probably can be answered, but with “thick description” rather than concise data, Scobey said.

Despite those reservations, the panelists seemed to agree that the days were past when humanists, or colleges generally, could say, “ ‘It’s a little too complex and nuanced for you to understand — just trust us, and write us your checks,’ ” as Mallory put it.

As the issues of cost and affordability continue to mount, Grossman said, “the public will be asking more critically than in the past, ‘What are we getting for our money?’ ” If the answers aren’t forthcoming, politicians or others will offer their own prescriptions for how to gauge that.

But right now, he said, it is still in the hands of professionals in higher education to define for themselves what their students ought to be learning and how that might be measured. “Professors don’t want a model that will trivialize the humanities. Well, what do humanities professors think is important? What do we want them to know, what do we want them to learn? They have power to shape this analysis as they like, as they wish.” For now.
