What do evaluations actually do? Well, anything, really. When I did evaluations at the state history museum, I developed simple instruments to test exhibit graphics on middle schoolers, to gather audience feedback on which exhibit titles had attraction power, and to demonstrate that additions to a recurring exhibit accomplished their goals. Evaluations can range from simple tests of label copy and design to complex studies that reveal whether a program reached its outcome goals, or how visitors acted on their museum experience months afterward. In general, the idea is that these are tools to help program developers incorporate visitor pre-knowledge, expectations, and behavior into the development process, with the goal of making exhibits accessible, meaningful, and surprising.
Elizabeth Wood of IUPUI and The Children’s Museum of Indianapolis (again, not a history museum) establishes some basic guidelines for program evaluation. “A successful evaluation,” she notes, “is built on clear, thoughtful, and focused questions that can support improvement, use, and application of new ideas.” (13) Clarity is the key. “When the purpose of the evaluation process is clear, it will provide the evaluator with a good sense of what information can answer your questions, and helps frame the scope of the project as a whole. Knowing the scope of your evaluation project will help you get a sense of the resources (time, money, and people) needed.” (13)
Wood describes the scope of small, medium, and large evaluations, noting that staff time, budget, outside experts, space requirements, time for analysis, and immediacy of results all depend on those simple questions at the beginning: what do you want to know, and how will you use the information you gather?
Instead of laying out Wood’s entire article, I’m going to turn to this troublesome Civil War in the Community of Nations exhibit. How would evaluations work for this exhibit and the museum that hosts it?
So… I intend to run three evaluations during the life of this exhibit. The first will gauge how accessible the topic is to target audiences. The second will test preliminary exhibit designs for visitor connection. The third will assess visitor outcomes.
The front-end evaluation will be medium scale and consist of focus groups, built on structured and semi-structured questions, with members of target audiences. The groups will seek out accessible entry points into an academic thesis, and will take place at the museum or an outside venue. The evaluation will require two staff members (a facilitator and a note taker), who will collect identity and behavior data that the museum has identified as desirable. Groups will be presented with the interpretive theme, a précis of the story, and a series of ideas, human stories, and objects. Reactions will be recorded and opinions solicited, based on the IPOP schema. A gift shop premium will be offered to participants. This front-end evaluation will direct curatorial decisions on story selection and artifact and image lists, and suggest strategies to creatively shape exhibit themes, opportunities for visitor engagement, and likely moments of intellectual, cognitive, or somatic revelation.
The formative evaluation will be a prototype review conducted with museum visitors in an evaluation incubator in the museum building, using semi-structured and unstructured inquiry and observation. It will test the attraction power of an element, the clarity of a text label, the engagement power of an artifact, or a designed “flip” moment. Over two weekends and two weekdays, visitors will be invited to view a low-cost prototype consisting of printer-produced images, graphics, and text; artifacts or facsimiles; and physical elements constructed from foam core and butcher paper. Two staff members will be required to conduct the evaluation, and will collect identity and behavior data that the museum has identified as desirable. Reactions will be recorded and opinions solicited. A gift shop premium will be offered to participants. This small evaluation will confirm the developers’ methods, or warn against ineffectual designs so that adjustments may be made.
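To give a sense of what the formative data actually yields, here is a minimal sketch of the standard timing-and-tracking arithmetic: attraction power as the share of passing visitors who stop at a prototype element, and holding time as the average dwell among those who stop. The observation records and element names are hypothetical, not drawn from the exhibit plan above.

```python
# Hypothetical timing-and-tracking records from a prototype review.
# Each record notes which element a visitor passed, whether they
# stopped, and how long they stayed (in seconds).
observations = [
    {"element": "flip_panel", "stopped": True,  "seconds": 45},
    {"element": "flip_panel", "stopped": False, "seconds": 0},
    {"element": "flip_panel", "stopped": True,  "seconds": 20},
    {"element": "label_A",    "stopped": True,  "seconds": 10},
    {"element": "label_A",    "stopped": False, "seconds": 0},
]

def attraction_power(records, element):
    """Fraction of observed passersby who stopped at the element."""
    passes = [r for r in records if r["element"] == element]
    stops = [r for r in passes if r["stopped"]]
    return len(stops) / len(passes) if passes else 0.0

def mean_holding_time(records, element):
    """Average dwell time among visitors who stopped at the element."""
    times = [r["seconds"] for r in records
             if r["element"] == element and r["stopped"]]
    return sum(times) / len(times) if times else 0.0

print(attraction_power(observations, "flip_panel"))   # 2 of 3 passersby stopped
print(mean_holding_time(observations, "flip_panel"))  # (45 + 20) / 2 = 32.5
```

Even at this scale, comparing numbers like these across two or three prototype variants is usually enough to warn developers off an ineffectual design before fabrication begins.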
The final summative evaluation will be a large study of visitors through structured interviews immediately upon exiting the exhibit. This evaluation will assess—based on pre-determined indicators—the engagement value of the exhibit. What did visitors find meaningful? Were they flipped? Did the IPOP scheme prove effective? It will be designed by a professional evaluator and conducted by three to five staff members over three separate data collection periods (weekends, weekdays, seasonal/holiday hours). The professional evaluator will collect, code, and analyze the data. Results will confirm, or question, the effectiveness of the IPOP [or whatever] scheme and of the various engagement and connection methods, and make recommendations for future use. Results will also establish a baseline for assessment of successful and predictable standards of engagement, meaningfulness, or learning that may be compared to other exhibits at the museum. Finally, did the exhibit advance the museum’s interpretive goals?
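Once the evaluator has coded the exit-interview answers, the first pass of analysis is simple tallying. A minimal sketch, assuming each visitor’s “most meaningful moment” answer has been coded into one of the four IPOP preference dimensions (ideas, people, objects, physical); the batch of coded answers below is invented for illustration.

```python
from collections import Counter

# Hypothetical exit-interview answers, each coded by the evaluator
# into one of the four IPOP preference dimensions.
coded_answers = [
    "ideas", "objects", "people", "ideas", "physical",
    "objects", "ideas", "people", "objects", "objects",
]

tally = Counter(coded_answers)
total = len(coded_answers)

# Share of responses falling into each dimension. A strong skew toward
# one dimension flags which engagement strategies actually landed.
shares = {dim: tally[dim] / total
          for dim in ("ideas", "people", "objects", "physical")}

for dim, share in shares.items():
    print(f"{dim:>8}: {share:.0%}")
```

A real summative study would cross-tabulate these shares against the identity and behavior data the museum collects, but the baseline comparison across exhibits starts with counts this plain.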
This is a rough sketch and will be developed in conjunction with an actual professional evaluator.
See: Wood, Elizabeth. “Defining the Scope of Your Evaluation.” Journal of Museum Education 40, no. 1 (2015): 13–19.
Diamond, Judy. Practical Evaluation Guide: Tools for Museums and Other Informal Educational Settings. 2nd ed. Nashville: American Association for State and Local History, 2009.
Nelson, Amy Grack, and Sarah Cohn. “Data Collection Methods for Evaluating Museum Programs and Exhibitions.” Journal of Museum Education 40, no. 1 (2015): 27–36.
Trainer, Laureen. “Evaluation Resources to Help with the Next Step.” Journal of Museum Education 40, no. 1 (2015): 86–91.