This is an Eval Central archive copy, find the original at communityevaluationsolutions.com.
Today I am joined by Jenn Ballentine of Highland Nonprofit Consulting to talk about, what else, evaluation in the time of COVID-19. Granted, my last blog was a bit of a rant, so today I would like to strike a more positive and helpful tone.
To tell you the truth, some of the conversation around data collection during the pandemic has me a little squirmy because it has felt kind of opportunistic. I don’t think rushing out to survey people when they are really worried and anxious feels helpful, or frankly ethical.
But we are evaluators, so we do believe evaluation is important and we just can't stop doing what we do. I am a community psychologist and Jenn is a public health professional. We believe in a public health approach to prevention and in systems-level change. If this pandemic teaches us anything, it is that we are all connected. Systems-level change is needed now more than ever to correct the inequities in our society, so evident in the disproportionate impact of COVID-19 on communities of color.
Adaptations in the Time of Crisis
Sanjeev Sridharan recently wrote a thoughtful and poignant piece called Adaptions and Nimbleness in the Time of Crisis: Some Questions for Evaluators. In it, he observes that both program implementers and evaluators must now think about how to adapt. He raises a set of questions for evaluators to consider, and I urge you to read the article for yourself.
Today we would like to address the nonprofit and program implementers and provide some practical and feasible tips, inspired by some of the issues he raises.
Jenn and I are evaluating a federally funded teen pregnancy prevention program that, for the last year, has been implemented at five community-based centers for teenage boys and girls. I also serve as the evaluator for several Drug-Free Coalitions and alcohol and substance abuse prevention programs, all of which have a school component. Jenn serves as the evaluator for school-based sexual health education programs facilitated by a statewide training and advocacy organization.
As was to be expected, nearly all programming, and thus data collection, stopped in mid-March. This left Jenn and me wondering what the heck we were going to evaluate beyond the data we had already collected this year.
The technical assistance from funders, for the most part, included four specific questions:
- What were your intended enrollment numbers, what are your actual numbers, and what are the reasons for any differences?
- What is the status of your programming and how has that changed?
- How has data collection changed (number of pretests/number of posttests), and how were participants affected (e.g., missed content, sessions provided out of order)?
- How will the program use Continuous Quality Improvement (CQI) strategies to document and learn from the events?
What is missing here is the story, as Sridharan points out in his first question: "What are exemplars of good evaluation stories related to the adaptiveness/nimbleness of specific interventions?" Yes, we need to understand changes regarding what was planned versus what was done, but we need the why in order to tell the entire story. When did programs have to close their doors, and why did they make that decision? What happened to staff, and why? And as a result of the situation, were program staff able to pivot, and if so, in what way? For example, did program staff decide to shift from in-person to online delivery?
One of my clients has shifted, rather nimbly I might say, to online meetings with their youth advisory committee. They are taking notes about their discussions and developing interventions that they can deliver online via social media. Similarly, another client that trains health and physical education teachers to implement comprehensive sex education offered to facilitate virtual lessons for one new district in an effort to ensure that students received this valuable information.
Another of Sridharan’s questions is “Are there examples of evaluations that have taken a developmental approach to enhance the coordination at this time of the crisis?” He observes that the “pandemic has highlighted the need to better understand the connections between the intervention and its underlying systemic contexts/supportive structures.” During a time of crisis, coordination can be improved, perhaps accelerated, or could also break down altogether.
Some school systems for example, have enlisted bus drivers, community volunteers and even local law enforcement to deliver food to students eligible through the National School Lunch Program. Some have expanded food distribution beyond those eligible through these federal programs. Other school systems have maintained the status quo, requiring families and guardians to drive to school with the eligible children present to collect the food distribution. Those without transportation, or without a large enough car to transport the whole family, were out of luck.
There are a lot more gems to unpack (like the dynamics of vulnerability), but we will end with this question posed by Sridharan: “Can a focus on a minimal set of components needed to produce change help enhance a focus on meeting the needs of the disadvantaged given limited resources?”
For our teen pregnancy prevention program, we can't even imagine where to start on a minimal set of components for this implementation fidelity evaluation. How do you deliver an evidence-based, comprehensive teen pregnancy prevention program virtually? Is it even ethical to do so with parents or siblings in the same or the next room? What about students without internet access, laptops, or other devices, or households where devices must be shared by multiple youth?
We are pretty sure that six months from now, funders will be asking nonprofits what happened. What is the program implementer to do? We think it's critically important to document the changes programs made and the various ways in which the disruption affected their organization and the people they serve. But program staff are busy people, especially during times of crisis. Evaluators can help the nonprofits they serve by working with staff to document the changes they made and why they made them. Evaluators need to stress how this information will be useful when reporting to funders, partners, board members, and others. The learning that comes from this process can help the organization plan for future disruptions.
We developed a guide to help with this process. Just let me know you want the guide and I will send it to you. Depending on the needs of your client and their situation, these questions can be adapted in a variety of ways. You might want to change the order, eliminate some questions, or add others. Do let us know what you think and whether you find it useful. Stay safe and well!