This is an Eval Central archive copy; find the original at evalacademy.com.
Understanding what you need to know. Designing your approach to finding the answers. Easy enough in theory, but real-world examples can show you how to apply theory to practice. That’s why we’re sharing this series on how we evaluated.
In this post, I describe how Three Hive Consulting supported the evaluation of a home health monitoring project for patients with chronic conditions.
The Project: Home Health Monitoring in Primary Care
This project rolled out in the summer of 2020, in the throes of the Covid-19 pandemic. At that time, there was enough experience of the pandemic to know that limiting in-person interactions was the key to reducing the spread. The world was working to find ways to deliver services remotely, and healthcare was no exception.
Patients with chronic conditions often require more frequent medical care and are at greater risk of developing severe complications or needing hospitalization with Covid-19 infection. Monitoring symptoms and delivering care virtually was an obvious need.
In this project, patients with chronic conditions were provided with free monitoring kits for 90 days. The kits included blood pressure cuffs, pulse oximeters, thermometers, scales, and tablets. Patients input their information into tablets loaded with a platform enabling the reporting of results to primary care nurses. Primary care nurses then checked the platform regularly for alerts for clinically significant results. They called the patients directly or consulted with family physicians when appropriate. Many concerns could be addressed by nurses over the phone, while others required the patient to attend either a virtual or in-person visit with their family physician.
This home health monitoring project aimed to:
Improve patients’ access to care
Decrease patients’ risk of exposure to Covid-19
Detect worsening symptoms earlier
Optimize physicians’ time
Improve patients’ ability to self-manage
Reduce hospital admissions and emergency department visits
Maintain or improve patients’ health-related quality of life
Developing the Evaluation Plan
This project involved many partners. Primary care in this region (central Alberta, Canada) is delivered by family physicians and supported by Primary Care Networks (PCNs). Three Central Zone PCNs received funding and in-kind resources from several partners, including Health City, Alberta Innovates, and Boehringer-Ingelheim. Each of these partners had input into the evaluation plan.
With limited resources available to evaluate the first phase of this potentially multi-year project, partners agreed to four focus areas:
1. Project reach (using document review and administrative data)
Did the project achieve its aim of enrolling at least 30 patients?
Who are the participating patients?
What challenges were encountered in patient recruitment?
What was helpful or successful in patient recruitment?
Was PCN and provider participation maintained through Phase 1?
2. Provider experience (using surveys at two points in time)
What challenges did providers encounter in working with the home health monitoring (HHM) model?
What worked well for providers?
3. Patient experience (using surveys at program discharge)
What challenges did patients encounter in working with the HHM model?
What worked well for patients?
What suggestions did patients have for improvement?
4. Patient health outcomes (using the EQ-5D-5L at intake and discharge, patient surveys at discharge, and clinical data)
How, if at all, did patient-reported quality of life change over the duration of the project?
To what extent were clinically significant results identified through the HHM platform?
How quickly were patient results reviewed?
To what extent did patients utilize other health services during their participation in the project?
We designed the evaluation plan to minimize the workload on busy providers and take advantage of information already documented for clinical care; asking nurses or physicians to track additional data elements was not feasible, particularly during the pandemic. Patient health outcome data was limited to what patients could reliably report themselves and the clinical information captured in the home health monitoring platform.
The evaluation plan included mid-term reporting of provider experiences and final reporting of all available data. The evaluation findings would need to provide information to inform decisions about spread and scale, as well as point to a more robust evaluation approach supporting the expansion of this model of care after the initial phase.
“It’s safe to say this is a new way of doing business,” says Central Zone PCN Committee Operations Lead Jodi Thesenvitz. “We need to see if this is going to be a legitimate model of doing work going forward.”
In a year of shifting priorities, collecting data went fairly well but did not yield the sample sizes we intended. All data collection was undertaken online to ensure safety and convenience, but we struggled with uptake.
Invitations and links to complete the EQ-5D-5L and patient experience survey were embedded within the home health monitoring platform, and so were seen by all participating patients. Of the 37 patients enrolled, only 11 completed the EQ-5D-5L at both intake and discharge, leaving us with results that were not entirely reliable. Similarly, only 17 completed the patient experience survey at discharge.
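For readers who like to see the arithmetic, the response rates above can be worked out in a few lines. This is purely an illustration using the Phase 1 numbers reported here; the variable names are our own.

```python
# Response-rate calculation using the Phase 1 numbers reported above.
enrolled = 37                 # patients enrolled in Phase 1
eq5d5l_both = 11              # completed EQ-5D-5L at both intake and discharge
experience_discharge = 17     # completed the patient experience survey at discharge

eq5d5l_rate = eq5d5l_both / enrolled
experience_rate = experience_discharge / enrolled

print(f"EQ-5D-5L (both time points): {eq5d5l_rate:.0%}")    # 30%
print(f"Patient experience survey:   {experience_rate:.0%}")  # 46%
```

At roughly 30% and 46%, both rates fall well short of what would be needed for confident conclusions from a group of 37, which is why the findings were treated as indicative rather than definitive.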
Low response rates are always a cue for evaluators to re-examine their recruitment processes. Upon reflection, the language we used in the home health monitoring platform to invite patients to complete both survey tools was not as compelling as it could have been. In the second phase, we’ve adjusted the invitation to provide more information about the importance of patient feedback and used plainer language to describe the EQ-5D-5L, which is a very simple tool with a rather technical name.
We’ve also increased engagement with providers, letting them know early about the role of evaluation in the project and their role in supporting that evaluation.
The evaluation findings have already been helpful to the project partners. “It helped validate the expansion of the initiative,” according to Health City CEO Reg Joseph. “One of our key goals is to scale the initiatives that make sense to scale, in that they drive health adoption of innovation, improve health outcomes and drive economic growth in the health sector. To do this often requires policy change that itself requires evaluation and data.”
The findings have been helpful for PCNs and physicians considering joining the second phase of the project. The evaluation report was “used as a communication tool,” said Thesenvitz. “It was a source of evidence for those considering participation in the next wave. It legitimized and explained the effort.”
The second phase of the project allows us to expand our evaluation approach. With a larger group of patients and access to system-level health data such as emergency department visits, hospital admissions, and physician visits, we will be able to build the body of evidence needed to make decisions not just at the operational level, but at policy and funding levels, too.
We hope this example has helped illustrate what evaluation of a virtual health initiative, or even just program evaluation in general, looks like in practice. For another tool to apply evaluation theory to practice, see our free resource on applying the JCSEE Program Evaluation Standards in practice.