Defining evaluation purpose. Writing evaluation questions. Deploying data collection tools. These topics can all seem abstract on their own. To put the pieces in context, we’re offering our How We Evaluated series to show you what real-world evaluation looks like in practice.
This post explores how we at Three Hive Consulting worked with REACH Edmonton Council and other agencies to evaluate a unique initiative called Bridging Together. You’ll see how we developed and carried out an evaluation plan that yielded actionable information.
The initiative: Bridging Together
With funding from Immigration, Refugees and Citizenship Canada, REACH Edmonton Council acted as the backbone organization for this collective of youth-serving non-profits. Each of the partner organizations already offered programming for immigrant and refugee youth outside of school hours. Their after-school and summer programs varied in focus, but common elements included academics, sports, life skills, culture and recreation. The partner organizations met regularly to share resources and solutions to common problems, and REACH arranged relevant training opportunities.
Intended outcomes
Bridging Together aimed to enhance outcomes for immigrant and refugee children and youth, their families, and the partner organizations.
Immigrants and refugees face many well-documented challenges when arriving in Canada, including linguistic, cultural and environmental differences and challenges related to physical and mental health, socialization, education and justice. While many show resilience and integrate well into Canadian society, a significant number do not fare so well. Through out-of-school time programming, partner organizations intended to help children and youth develop healthy relationships, improve self-efficacy, become involved in community, improve academic performance, and perhaps most importantly, have fun.
Developing the evaluation plan
Convening thirteen organizations to work toward a common goal is no small task. Having them agree on intended outcomes and evaluation processes went more smoothly than expected. We held a large group session to begin defining the evaluation’s purpose, use and focus areas. At this session, we drafted four focus areas and posed several questions to attendees:
- What would you like to know about your program?
- What has worked before with evaluations you have been involved in?
- What is your one piece of advice for how to make this a successful evaluation?
- What difference should we see in a child or youth after participating in your program?
This stakeholder engagement process showed a need for a data collection approach that acknowledged commonalities while accommodating the uniqueness of different programs. We confirmed four common focus areas:
- Program description and participation
- Child, youth and family outcomes
- Collaboration
- Social return on investment
The social return on investment (SROI) was a non-negotiable requirement. It is not a method we would have suggested, but as evaluators we know that sometimes we just have to do what we’re told. We’ll reflect on that SROI below.
Partners reviewed and made suggestions on draft versions of the evaluation plan until we arrived at a final version to guide the next two years.
Adapting data collection approaches
We mentioned above that it was important to partners that the evaluation reflected their individual programs. There was quite a bit of variation to address: one organization delivered its programming entirely in French, another provided free sports leagues for children in grades four through six, and others delivered more of a “homework club” program. Some organizations offered multiple programs through Bridging Together. Participant ages ranged from six to 24. In the first year, 390 children and youth participated.
Our methods, obviously, needed to accommodate different program activities, different languages, different reading levels, and very different logistics. So here’s what we did:
- Interactive, arts-based feedback sessions with youth in summer programs
- Program experience surveys for older children and youth
- Self-efficacy surveys for older children and youth
- Video-recorded, small group interviews with children at sports leagues
- Parent/caregiver program experience surveys
- Interviews with organization staff
- Administrative data analysis
- Social return on investment, requiring detailed funding and spending information from all organizations
Project ethics
We’re big fans of ARECCI, a project ethics review process we can access in Alberta. We made sure to include an ARECCI project ethics review in our proposal to REACH, and incorporated their suggestions into our processes.
Collecting data
We expected challenges in implementing the data collection approaches above. In our monthly status updates, we tracked what we had done, what we planned to do next, what risks emerged and how we were mitigating them.
Completing the summer feedback sessions required some support from sub-contractors. Our plan was to schedule these sessions, where we would also support the survey administration for older children and youth, as close to the end of each summer program as possible. Not surprisingly, many programs ended in the same week, so deploying evaluation assistants to all sites was tricky, but we were able to accommodate the programs that agreed to participate.
Collecting data from this many sites also required support from program staff and volunteers. Contacting some organizations was easy; others’ capacity was so stretched that returning phone calls and emails did not always happen. Most were quite willing to support survey administration, with guidance provided. We did find, though, that sometimes younger children were completing surveys intended for older children and youth.
Getting parents and caregivers to complete surveys was challenging for some programs, and smooth for others. To make it easier for parents and caregivers to complete surveys, we provided both an online option and paper surveys, and kept the survey as short as possible while collecting the meaningful data we needed. Overall, our sample size for parents and caregivers was lower than we had hoped for—that’s a challenge many working in non-profit evaluation will be familiar with.
The SROI calculation required detailed information about program inputs and spending. Most partner organizations were running multiple programs, some of which had funding from the same sources. Many programs also relied on funding from other sources, volunteers, and subsidized facility rentals. We were fortunate to have support from REACH to create a spreadsheet for organizations to identify all financial and in-kind resources needed to run their Bridging Together program and all associated spending. Completing that spreadsheet represented a great deal of time for partner organizations.
Sharing findings
We produced a few different reports throughout this contract. The major products were comprehensive written reports for Year 1 and Year 2. Each yearly report addressed the first two focus areas (program description and participation, and child, youth and family outcomes) plus one additional focus area. Following the preparation of the draft reports, we attended meetings with partners to review findings and gather their perspectives and suggestions.
These comprehensive reports addressed Bridging Together as a whole, but we also wanted to provide individual organizations with results that they could use to inform program changes, organizational reporting and further advocacy. We therefore provided short summaries of results for each partner organization.
Informing our practice
As evaluators, we learn from every project we undertake. The Bridging Together project spanned two years and showed us the importance of strong working relationships with clients and stakeholders. It also showed us how valuable a convener or coordinator is in collective impact projects—we would have needed to invest more resources in project management if REACH had not so capably undertaken that role.
This project also demonstrated how vital data management practices are when working with multiple sites across multiple timepoints. A good spreadsheet or other tool to track which data has been received from which site supports sound project management.
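As a minimal sketch of what such a tracking tool could look like (the site names, methods and statuses below are hypothetical, not the project’s actual data), a few lines of Python with pandas can flag exactly which pieces are still outstanding:

```python
# A minimal, hypothetical data-receipt tracker across sites and timepoints.
# Site names, methods and statuses are invented for illustration only.
import pandas as pd

records = [
    {"site": "Site A", "timepoint": "Year 1", "method": "youth survey",    "received": True},
    {"site": "Site A", "timepoint": "Year 1", "method": "parent survey",   "received": False},
    {"site": "Site B", "timepoint": "Year 1", "method": "youth survey",    "received": True},
    {"site": "Site B", "timepoint": "Year 2", "method": "staff interview", "received": False},
]
tracker = pd.DataFrame(records)

# What is still outstanding, and from whom? Useful for targeted follow-up.
outstanding = tracker[~tracker["received"]]
print(outstanding[["site", "timepoint", "method"]])

# A quick per-site completion rate for monthly status updates.
print(tracker.groupby("site")["received"].mean())
```

A shared spreadsheet with the same columns works just as well; the point is having one place that answers, at a glance, what has arrived and what is still outstanding.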
We’ve always been pretty flexible, but this project reinforced how important it is to be able to adapt processes to fit different contexts. For example, our youth feedback sessions looked different across sites. In some, we used classrooms with structured space; in others, we set up in a hallway and had children and youth move through a sort of drawing and writing gauntlet. One method, the mini-interview, was used for just one program because it was simply the only feasible way to collect data from busy kids running on and off the field. Seeing how this variation in methods led to a richer knowledge product confirmed for us that adaptability is key in real-world evaluation.
And finally, the SROI. The calculation showed that for every dollar invested, Bridging Together created at least $3.30 in returned social value. This figure is powerful in reporting and future funding applications. Obtaining the data to inform this calculation was A LOT of work for partner organizations. Many organizations’ accounting systems were not set up to track costs for individual programs; the work required to set up overhead calculations and other bookkeeping details for many different programs cannot often be accommodated through non-profit administrative allocations. We have always viewed this method with skepticism, and questioned the need for it at all. The value of improving outcomes for children and youth has been well documented. We already know that investing in children saves money later. We approached this project with the view that requiring resource-limited programs to undertake this complex and imprecise calculation is an undue burden and does not yield new findings; that view hasn’t changed.
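For readers new to the method, the final ratio is simple arithmetic once the hard, contested work of valuing inputs and outcomes is done. The sketch below uses entirely invented figures (not the Bridging Together numbers) to show the general shape of an SROI calculation, including the deadweight and attribution adjustments that make the underlying data so onerous to assemble:

```python
# Hypothetical SROI sketch; every figure is invented for illustration and is
# not drawn from the Bridging Together evaluation. The difficult work happens
# upstream: assigning dollar proxies to outcomes and estimating deadweight
# (what would have happened anyway) and attribution (others' contributions).
total_inputs = 100_000  # cash funding plus valued in-kind support and volunteer time

outcomes = [
    # (gross proxy value, deadweight rate, attribution rate)
    (150_000, 0.25, 0.20),  # e.g., improved school engagement
    (250_000, 0.30, 0.25),  # e.g., reduced need for later supports
]

social_value = sum(
    value * (1 - deadweight) * (1 - attribution)
    for value, deadweight, attribution in outcomes
)

sroi_ratio = social_value / total_inputs
print(f"Estimated social value created per dollar invested: ${sroi_ratio:.2f}")
```

Each of those inputs had to be assembled by the partner organizations themselves, which is exactly where the burden described above comes from.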
Client perspective
How has this evaluation been applied at REACH? Evaluation use is a topic many contracted evaluators wonder about. Is the report just living on a server somewhere, never to be consulted again? Or have the findings and recommendations been used to drive program changes, to advocate for funding, to share a story of impact?
Overall, REACH is dedicated to evaluating its work. “We know that nothing is perfect and evaluation results help to inform the project as it unfolds and influences decisions,” notes Project Manager Lisa Kardosh. “When I’m focused on the day-to-day details, it’s easy to forget how many lives the collaborative is reaching. For me, evaluation helps to keep things in perspective.”
For REACH, early results were useful in ongoing planning. “The interim report gave the collaborative a chance to assess if we were on the right track or not, and thankfully for the most part we were,” says Kardosh. “One benefit of the report was that it helped to shed light on gaps that were popping up, like more training being needed, so we could address it.”
Interim reporting also yielded an early opportunity to demonstrate the value of the program to the funder. “It was useful to share the Year 1 results with our funder so that they could see that their investment was making a difference.”
REACH and the Bridging Together partners have used the final evaluation report for advocacy and communication. “We’ve shared the Year 2 results quite broadly among our networks,” says Kardosh. “Having an unbiased third-party report to show our success is so important to be able to justify the worth of this collaborative to the funder we had, as well as potential future funders.”
Watch for more in our How We Evaluated series.