Cultural organizations in general and state humanities councils in particular have long struggled with the question of how to evaluate what they do. Is a good head-count at an event sufficient evidence of success? Is a high level of engagement during a post-performance talk-back adequate proof that there’s thinking going on? How can certain program concepts be packaged for funders so that they will be perceived as compelling and important? Sometimes defining and measuring internal change seems so difficult that one wonders whether it is worth trying to evaluate any project that aims to present intellectual content to a group.
And yet we persist; grant makers haven’t stopped caring about measurable goals and descriptions of meaningful impact, and staff members at organizations like mine haven’t stopped trying to define success and describe why we know we’re successful, or not.
Increasingly, we aim to define the specific changes our programs effect in participants, and the more practically that change can be described, the better. A concrete example is in order: the Massachusetts Foundation for the Humanities’ evaluation efforts for its flagship program, the Clemente Course in the Humanities, for which fellow Public Humanist Jack Cheng teaches art history.
The Clemente Course is unique among MFH offerings and has been the most successful of all its programs at attracting donations, probably because it provides a universally recognized gain (college-level courses and transferable college credits) for underserved adults in poor Massachusetts communities. Interns from the Center for Public Policy at the University of Massachusetts Amherst designed the longitudinal evaluation study, and MFH pays a consultant to conduct phone interviews with current and former Clemente students, interpret the data, and keep up with the changing yearly tasks involved in tracking people’s lives after they graduate from the program. The study seeks both qualitative and quantitative information. MFH hopes to demonstrate that a significant percentage of Clemente Course graduates go on to enroll in college and that the course’s positive effects are lasting, both for the students and for others in their lives.
Compared to the other kinds of programs we launch and support, none of which approaches the depth or expense of the Clemente Course, that one is the easy case to evaluate. Even so, I can attest that developing a specific evaluation tool we believe will yield meaningful hard data for grant proposals seeking outside funding has been laborious, time-consuming, and expensive. Increasingly, we have learned that what we often think of as the self-evident value of the humanities is lost on those in a position to fund it.
Our organization, like many other cultural organizations and institutions, is up against the idea that public opportunities to deepen one’s engagement with ideas are non-essential frills in a world full of people with intense basic needs.
As a result, we face mounting pressure to devise and support programs that will yield easy-to-digest results for people in a position to give money. And while it’s certainly not a bad thing to think hard about defining a measurable change (which is what all evaluation tries to do), certain other aspects of the humanities (say, the pleasure principle) become secondary concerns. I have yet to add a “joy quotient” to a logic model table, or to ask in a survey, “On a scale of 1 to 10, how much happiness did this program inspire in you?”, or to ask our own grant application review committee to rate the likelihood that a given program will promote a sense of well-being and happiness in participants and audience members.
But I really can’t end on a note that dismisses the rigor and legitimacy of evaluation efforts. It is a worthy exercise to ask oneself, “What change do we want this project to promote?” and “How can we measure that change and present it persuasively?”
One of the main problems with the scenario I’ve described is that philanthropic organizations with large sums of money to distribute want the kind of data that only expensive, professional evaluation processes can yield. I, for one, would welcome the chance to get better at designing evaluation tools, and I’d also like to be met with a general acceptance of the premise that efforts to create public learning opportunities and to promote wisdom and pleasure are self-evidently valuable and worthy of support.
–Hayley Wood, Program Officer, Massachusetts Foundation for the Humanities