This post is part of a series providing quick overviews of important topics in research and evaluation. Each post in the series includes at least 3 cartoons from my archives and at least 3 links to recommended resources. I only give quotes here, and I recommend that you follow the links below each quote for more detailed information.
An impact evaluation provides information about the impacts produced by an intervention – positive and negative, intended and unintended, direct and indirect. This means that an impact evaluation must establish what caused the observed changes (the ‘impacts’), a process referred to as causal attribution (also known as causal inference).
If an impact evaluation fails to systematically undertake causal attribution, there is a greater risk that it will produce incorrect findings and lead to incorrect decisions: for example, scaling up a programme that is actually ineffective or effective only in certain limited situations, or exiting a programme that could be made to work if limiting factors were addressed.
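To make the idea of causal attribution concrete, here is a minimal Python sketch. It is purely illustrative (all numbers and names are hypothetical, not from any of the resources quoted here): a control group stands in for the counterfactual, i.e. what would have happened to participants without the intervention.

```python
# Hypothetical endline outcome scores from a randomised design.
# The control group approximates the counterfactual: what participants'
# outcomes would have looked like without the intervention.

treatment_outcomes = [68, 63, 70, 66, 61]  # received the intervention
control_outcomes = [58, 55, 61, 57, 54]    # randomly assigned to no intervention

def mean(xs):
    return sum(xs) / len(xs)

# Causal attribution: the impact is the difference between observed
# outcomes and the (estimated) counterfactual, not the raw outcome level.
estimated_impact = mean(treatment_outcomes) - mean(control_outcomes)
print(f"Estimated impact: {estimated_impact:.1f} points")
```

Without the comparison, one might credit the programme with participants' entire outcome level, which is exactly the kind of incorrect finding the quote warns about.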
An impact evaluation should only be undertaken when its intended use can be clearly identified and when it is likely to be able to produce useful findings, taking into account the availability of resources and the timing of decisions about the programme or policy under investigation. A formal evaluability assessment (EA) might first need to be conducted to assess these aspects.
Formative impact evaluations are undertaken to inform decisions about changes to a programme or policy. While many formative evaluations focus on processes, impact evaluations can be used formatively if an intervention is ongoing. For example, the findings of an impact evaluation can be used to improve implementation of a programme for the next intake of participants.
Summative impact evaluations are undertaken to inform decisions about whether to continue, discontinue, replicate or scale up an intervention. Ideally, a summative impact evaluation not only produces findings about ‘what works’ but also provides information about what is needed to make the intervention work for different groups in different settings, which can then be used to inform decisions.
A key question in the assessment of programmes and projects is that of attribution: to what extent are observed results due to programme activities rather than other factors? What we want to know is whether or not the programme has made a difference—whether or not it has added value. Experimental or quasi-experimental designs that might answer these questions are often not feasible or not practical. In such cases, contribution analysis can help managers come to reasonably robust conclusions about the contribution being made by programmes to observed results.
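Before turning to contribution analysis, it may help to see what one of those quasi-experimental designs actually computes. The sketch below (again with made-up data) is a bare-bones difference-in-differences: it nets out the change a comparison group experienced, so that background trends are not attributed to the programme.

```python
# Hypothetical before/after scores for a difference-in-differences estimate,
# a common quasi-experimental design when random assignment is not possible.

programme_before, programme_after = [52, 48, 55, 50, 47], [68, 63, 70, 66, 61]
comparison_before, comparison_after = [51, 49, 54, 50, 48], [58, 55, 61, 57, 54]

def mean(xs):
    return sum(xs) / len(xs)

# Change observed among participants (includes background trends).
programme_change = mean(programme_after) - mean(programme_before)

# Change among non-participants: an estimate of those background trends.
background_change = mean(comparison_after) - mean(comparison_before)

# Difference-in-differences: the change attributable to the programme.
did_estimate = programme_change - background_change
print(f"Estimated programme effect: {did_estimate:.1f} points")
```

When even this kind of comparison group is unavailable, contribution analysis offers the alternative route the quote describes.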
Contribution analysis explores attribution through assessing the contribution a programme is making to observed results. It sets out to verify the theory of change behind a programme and, at the same time, takes into consideration other influencing factors. Causality is inferred from the following evidence:
1. The programme is based on a reasoned theory of change: the assumptions behind why the programme is expected to work are sound and plausible, and are agreed upon by at least some of the key players.
2. The activities of the programme were implemented.
3. The theory of change is verified by evidence: the chain of expected results occurred.
4. Other factors influencing the programme were assessed and were either shown not to have made a significant contribution or, if they did, the relative contribution was recognised.
Contribution analysis is useful in situations where the programme is not experimental (there is little or no scope for varying how the programme is implemented) and the programme has been funded on the basis of a theory of change. Many managers and evaluators assessing the performance of programmes face this situation. Kotvojs (2006) describes one way of using contribution analysis in a development context, “as a means to consider progress towards outputs and intermediate and end outcomes” (p. 1).
John Mayne’s 2008 ILAC Brief: Contribution analysis: An approach to exploring cause and effect
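As a rough way of making the four evidence conditions listed above operational, here is one more illustrative sketch. The structure and field names are my own, not Mayne's: it simply records each condition with its supporting evidence and flags any weak links in the contribution story.

```python
# Illustrative contribution-analysis evidence checklist, loosely following
# Mayne's four conditions. All entries are hypothetical examples.

conditions = [
    {"condition": "Plausible, agreed theory of change",
     "evidence": "Logic model reviewed and endorsed by key stakeholders",
     "verified": True},
    {"condition": "Activities implemented as planned",
     "evidence": "Monitoring data show all training sessions delivered",
     "verified": True},
    {"condition": "Expected chain of results observed",
     "evidence": "Intermediate outcomes improved after each activity",
     "verified": True},
    {"condition": "Other influencing factors assessed",
     "evidence": "Concurrent policy change examined; contribution judged minor",
     "verified": False},  # still gathering evidence on external factors
]

weak_links = [c["condition"] for c in conditions if not c["verified"]]
if weak_links:
    print("Contribution story is not yet robust; weak links:")
    for condition in weak_links:
        print(f"  - {condition}")
else:
    print("All four conditions supported: a credible contribution claim.")
```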