The next domain of competence is technical practice.
2. Technical Practice competencies focus on the strategic, methodological, and interpretive decisions required to conduct an evaluation.
And the first competency in this domain is:
2.1 Clarifies the purpose and scope of the evaluation.
Let’s begin at the beginning. That may seem trite, but I think it’s such a common saying because so often, people want to start somewhere other than the beginning. I cannot tell you how many times I’ve been consulted about an evaluation and the person seeking my advice starts with something like:
- I have a set of indicators and I need to do an evaluation using them.
- I want to do an evaluation of my program but I can’t figure out how to make it into a randomized controlled trial (RCT) because the program has already been run.
- I need your help to create a survey to evaluate my program.
- I need to do a developmental evaluation [or whatever the latest trend in evaluation is at the time] of my program.
These are all examples of not beginning at the beginning. Many people seem to think that an evaluation requires a specific method (e.g., a survey) or a specific design (e.g., an RCT). Or they think that whatever the latest trend in evaluation is must be the best approach, because it’s new. Or they already have data and they want to use it.[1]

But where an evaluation needs to start is with its purpose. Why, exactly, do you want an evaluation? What will you use the findings of the evaluation for? These are the types of questions that I will ask (usually preceded by me saying “Let’s back up a second!”), because the purpose of the evaluation will guide the choice of approach, design, and methods. For example, if you are interested in an evaluation that will help you determine the extent to which you’ve achieved your goals, and none of your current indicators relate to your goals, then starting with “I have a set of indicators and I need to do an evaluation using them” is not going to get you where you want to be. Similarly, if you want an evaluation that will help surface unanticipated consequences (and I tend to think that evaluations should usually be on the lookout for them), then having a set of pre-defined indicators is not going to be what you need (after all, to create an indicator, you have to have been anticipating that it might be affected by the program!). And if the purpose of your evaluation is not a developmental one, then developmental evaluation might not be the best approach for you. So clarifying the purpose (or purposes) of an evaluation is something that I do at the start of every evaluation – and something that I check in on during the evaluation, both to see if what we are doing in the evaluation is helping to meet its purpose and to see if the purpose changes (or new purposes emerge) along the way.
Clarifying the scope of an evaluation is also really important, and something that I struggle with. I am an infinitely curious person and I want to know all the things! But there just isn’t enough time or resources to look at every possible thing in any given evaluation, so it’s important to be able to clarify what the scope of any given project is. Like purpose, it’s important to clarify the scope of the evaluation with your client at the start, and to keep tabs on it throughout the evaluation. If you don’t have a clear scope, it’s very easy to fall into the trap of the dreaded “scope creep” – where extra things get added to the project that weren’t initially agreed to, and then either the costs go up or the timeline gets extended. That’s not to say that the scope can’t change during an evaluation, just that any changes to scope should be made mindfully and in agreement between the client and the evaluator.
Working in a large organization like I do, I also find it useful to understand the scope of other departments that do work similar to evaluation (like quality improvement and performance management). This is helpful in ensuring that we aren’t duplicating the efforts of other teams, and also that we aren’t stepping on anyone else’s toes. I’ve also had the experience of taking on work that really should have been done by another team (i.e., the dreaded scope creep!), and had we not figured this out by clarifying scope, it would have really impaired our ability to deliver the work that we actually needed to do.
My team and I have done some work on clarifying what the scope of evaluation is relative to these other groups and I was about to say “and that’s a topic for another blog posting”, but then I remembered that I’m presenting a webinar (based on a conference presentation I gave last year) on that in a couple of weeks! So here’s my shameless plug: if you want to hear me pontificate on the similarities, differences, and overlaps between evaluation and other related approaches to assessing programs and services, register for my webinar, hosted by the Canadian Evaluation Society’s BC Chapter on Friday, September 13 (that’s right, Friday the 13th!) at 12 pm Pacific Time.
Footnotes
[1] I just noticed that I’ve written about this before, more than 4 years ago! Past Beth would be sad to hear that I’m still experiencing this!