The AEA Summer Institute: A First-Timer's Impression

Daniel Snook is currently doing a practicum in program evaluation at Community Evaluation Solutions.
My first trip to the American Evaluation Association's (AEA) summer institute was eye-opening. I was aware of evaluation as a practice and as a useful tool for program development and improvement before I arrived, but I was not aware of the full breadth of Evaluation with a capital 'E' (i.e., evaluation as a field). Suffice it to say, I now know just how much I don't know about the incredibly multi-faceted field of evaluation.
My background is in psychology; specifically, I'm a PhD student studying Community Psychology at Georgia State University. At the beginning of the conference I was feeling a bit like an incognito psychologist – I didn't want anyone to realize that I hadn't been doing evaluation work for years and that I didn't identify as an 'Evaluator' per se, at least not yet. Community Psychology is essentially the study of how communities impact individuals, and most programs (including my own) provide some training in social program evaluation. However, learning about a suite of techniques in theory and learning about them in practice are very different things. Occasionally the academic world and the real world come crashing together – that was my experience at the AEA summer institute. It was simultaneously awesome and discomfiting, and it was a great place to learn a lot very quickly! Here are some highlights of what I learned:
1. Theory matters – no matter how applied your work is.
I came into the first session I attended, Program Theory, led by Dr. Stuart Donaldson, thinking I would find myself in comfortable (read: 'academic') territory. One of the first activities we did challenged that idea almost immediately. Dr. Donaldson asked each of the groups of audience members to evaluate the room in which we were sitting. Each table and its members were then asked to assume roles, for instance, as teams of interior decorators, information technology specialists, or, in our case, firefighters. The number of different ideas about what makes a room 'good' or 'successful' from these various perspectives was staggering. As you might expect, the lesson became quite clear: your approach to and expectations of a situation (i.e., your theory) significantly impact your practice, even if you don't explicitly realize it.
2. Avoid “The Curse of Knowledge.”
In her keynote presentation, veteran evaluator Kylie Hutchinson described some of the basics of effectively communicating evaluation results to stakeholders. A prime mistake presenters make, she says, whether in evaluation or otherwise, is assuming their audience knows what they know. This is, of course, more of an implicit than explicit assumption; evaluators are consciously aware that their clients do not know every detail of the evaluation, yet their presentations often don't reflect that. Meeting your audience at their starting point – doing things as simple as laying off the jargon or explaining acronyms – can keep them from zoning out or rolling their eyes. It's also tempting to include every detail about your evaluation just because each detail is important to you. But your audience hasn't been working on the evaluation at that level of detail, so spare them all of your knowledge and focus only on what you know is important to them. If you can say it more simply, then do so.
3. Good presentations are good; bad presentations are terrible.
The AEA summer institute left me with the distinct impression that evaluators are very good presenters. They're practically oriented, which means they want to get to the bottom of what makes a program successful (or unsuccessful) and then get to the point in telling the client about it. Evaluators also set a very high bar for data visualization, and they have strong tools for incorporating theory into practice (e.g., logic models) and techniques for transforming vague goals into tangible, measurable specifics. However, I also learned a bit about how NOT to present. I've mentioned jargon once already, but I have to reiterate that deliberately using jargon, whether to obscure the fact that you've got nothing substantive to say or to create a veneer of professionalism, wastes everyone's time. Finally, to be frank, not all content is worth presenting. If you're on the fence about whether the ideas in your presentation are worth sharing, they're probably not.
4. Invest in your evaluations early.
The readiness is all, and that holds especially true for evaluation. Several of the sessions I attended discussed the importance of being prepared for your evaluation from start to finish to set yourself up for success. Sheila Robinson's excellent session on strategies for evaluation planning encouraged me to ask the right questions – the why, what, and how of conducting an evaluation – early on (i.e., before beginning an evaluation). In other sessions, presenters made it clear that whether you're using pilot studies or cognitive interviewing, it's well worth your time to explore your proposed theory of change, your indicators, and whatever else you can before they're set in stone. Acting with intentionality at the outset of an evaluation pays big dividends when it's time to present findings.
5. Always mix your methods.
One thing that I already knew as a researcher, but that was reinforced powerfully at the AEA summer institute, is that it’s always best to use both qualitative and quantitative methods of measurement. Listening to speaker after speaker, it became increasingly apparent that quantitative or qualitative measures alone are simply inadequate for telling the whole truth. Quantitative methods are critical for providing the evidence in ‘evidence-based’. That’s not to say qualitative data cannot be considered evidence, but it is by its very nature more subjective, and requires ‘quantifying’ to become less open to interpretation. Quantitative data will satisfy the number crunchers in the room as well as make your findings look and feel more robust. But it isn’t quite enough. Qualitative data brings the ‘human’ element to the human sciences in ways that quantitative data cannot, because it enables researchers to (often literally) take the perspective of the participant. That makes for insightful narratives and stories that not only bring the results of an evaluation to life, but also give a human voice and face to the ‘hard’ data that’s been brought to the table. At the AEA summer institute, all the best presentations of evaluation results discussed both quantitative and qualitative elements.