My notes from the third and final part of the American Evaluation Association (AEA) eStudy course being facilitated by Jonny Morell.
- Jonny answered my question from last week: what is the difference between conceptual use and metaphorical use?
- there are fuzzy boundaries
- if you think about chaotic systems – “strange attractors” (a.k.a., chaotic attractors)
- you can do the math to plot a fractal – that is a technical meaning of the word
- a conceptual meaning of the word – I know it’s not random; I can’t tell you from one instance to the next where it will be, but I can tell you where it won’t be. You aren’t using the mathematics, but you are using the concept. Conceptual use is still grounded in the technical. (See the sketch after this list.)
- metaphorical use – a step further away – we have this concept of chaos, which means it’s unpredictable. Conceptual use means you have to “stay close to the mathematical, without doing the math”.
- he thinks that if you take complex behaviour seriously, you’ll do better program design and evaluation
- but not trying to convert everyone to complexity thinking for everything all the time
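To make the technical/conceptual distinction concrete, here is a minimal sketch (my own illustration in plain Python, not from the course) using the logistic map, a textbook chaotic system: you can literally do the math, the values stay inside a known region (you can say where the system won’t be), but you can’t usefully say where it will be from one step to the next without actually iterating.

```python
# A minimal sketch (plain Python, no external libraries) of the "technical"
# meaning of chaos, as contrasted with conceptual/metaphorical use.
# The logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4 is a textbook
# chaotic system: every value stays inside [0, 1] (we can say where the
# system WON'T be), but the next value is effectively unpredictable
# without actually doing the iteration.

def logistic_trajectory(x0, r=4.0, steps=20):
    """Iterate the logistic map from x0 and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

if __name__ == "__main__":
    traj = logistic_trajectory(0.2)
    print("All values bounded in [0, 1]:", all(0.0 <= x <= 1.0 for x in traj))
    print("But step-to-step values look erratic:")
    print([round(x, 3) for x in traj])
```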
Unintended Consequences
- he tends to think that unintended consequences are usually negative – any change perturbs a system, and it only takes a few parts going wrong to mess the whole system up; it’s harder to make things work than to make things not work, so if you perturb a system, it’s more likely that bad things will come out of it (see the rough probability sketch at the end of this list)
- he’s heard this from many people with “broad and deep experience” whose work he respects
- “Programs like to optimize highly correlated outcomes within a system. This is likely to result in problems as other parts of the system adapt to change. Change perturbs systems. Functioning systems require many parts to “fit”, but only a few to cause dysfunction.”
- he recently read about some work that shows this might not be true! But he wants to read more about it.
- there are always unintended consequences – and if they are good or bad is an important question!
- an example of unintended consequences provided by an audience member: a medical school was started at a northern university to encourage more physicians to work in the north, but saw unanticipated consequences:
- positive: changes in the community (e.g., more rentals, excitement about the work being done at the university, a change in the community’s culture – a symphony was started)
- negative: other programs felt snubbed
- Jonny wrote a book a while ago about how to evaluate unintended consequences (Evaluation in the Face of Uncertainty: Anticipating Surprise and Responding to the Inevitable)
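Here is the rough probability sketch referred to above (my own back-of-the-envelope numbers, not from the course): if a functioning system needs many parts to keep fitting together, and a perturbation disturbs each part independently with even a modest probability, the chance that everything still works drops off quickly – one way to see why surprises skew negative.

```python
# A rough back-of-the-envelope sketch (my own numbers, not from the course)
# of "many parts to fit, only a few to cause dysfunction": if a perturbation
# independently disturbs each interdependent part with a modest probability,
# the chance that the whole system still works drops off quickly.

def prob_system_still_works(n_parts, p_part_disturbed):
    """Probability that every part survives the perturbation, assuming independence."""
    return (1 - p_part_disturbed) ** n_parts

if __name__ == "__main__":
    for n in (5, 10, 20):
        print(f"{n:2d} interdependent parts, 10% chance each is disturbed: "
              f"P(system still works) = {prob_system_still_works(n, 0.10):.2f}")
```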
Small Changes
- “because of sensitive dependence, it may be impossible to specify an outcome chain”
- e.g., sometimes programs evolve because of small things – the program had time to do something that wasn’t in the original scope, or the board agreed that something outside the original scope still fit within the mandate
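A minimal sketch of what sensitive dependence looks like (again my own illustration, using the logistic map rather than anything from the course): two trajectories that start a tiny distance apart become completely different within a few dozen steps, which is the analogy for why small, unplanned events can make a fixed outcome chain impossible to specify in advance.

```python
# A minimal sketch of sensitive dependence (my own illustration, using the
# logistic map): two trajectories that start 0.000001 apart end up completely
# different within a few dozen steps, so knowing the starting point "almost
# exactly" doesn't let you specify the chain of later states.

def logistic(x, r=4.0):
    return r * x * (1 - x)

def show_divergence(x0=0.2, perturbation=1e-6, steps=40):
    a, b = x0, x0 + perturbation
    for n in range(1, steps + 1):
        a, b = logistic(a), logistic(b)
        if n % 10 == 0:  # report every 10 steps
            print(f"step {n:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.6f}")

if __name__ == "__main__":
    show_divergence()
```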
Unpredictability
A neat example of how difficult it is to predict the future is shown in this letter from Rumsfeld to G.W. Bush.
- “the commonly accepted view of logic models and program theory may be less and less correct as time goes on”
- there is debate over whether there are “degrees of complexity” (or whether something is either complex or it is not)
- some think that even if you start with a simple system that can be reasonably represented by a logic model, over time it will transition to complex behaviour (he doesn’t believe there are “degrees” of complexity, so it’s not that a simple system smoothly transitions to a complex one)
Network Effects Among Programs
- imagine you have one universe where:
- two programs: one on malaria prevention and another promoting girls’ education –> increased civic skills
- and another universe where:
- you have those two programs, but also other programs with goals around crop yields and road building – and all the programs interact with each other. E.g., if people are healthier (no malaria) and well fed (better crop yields), they can work harder and increase economic development, which can feed back into the other programs, etc.
- he thinks that this interconnected universe can have bigger effects over time
- effectiveness can build over time with networked programs (whereas non-networked programs would just have the effect of the program and that’s it) – see the toy simulation after this list
- challenge: how do you evaluate this when programs (and evaluations) are generally funded for single programs (or at least within a single organization), but not across multiple programs in different areas
- but there can be some programs that can spur change in all kinds of other areas of the system (e.g., ensuring everyone has a base level of education –> increased civic engagement, increased health, increased economic development, etc.)
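Here is the toy simulation mentioned above (entirely my own illustration – the programs, growth rate, and spillover coefficient are invented): isolated programs whose effects simply add up versus networked programs whose outcomes feed back into one another. Even a small positive spillover lets the networked universe pull ahead over time.

```python
# A toy simulation (entirely my own illustration; programs, growth rates, and
# the spillover coefficient are invented) of the two universes: isolated
# programs whose effects just add up, versus networked programs whose
# outcomes feed back into one another.

def simulate(years=10, own_growth=0.10, spillover=0.05):
    programs = ["malaria_prevention", "girls_education", "crop_yields", "road_building"]
    isolated = {p: 1.0 for p in programs}
    networked = {p: 1.0 for p in programs}

    for _ in range(years):
        # Isolated universe: each program improves its own outcome by 10% a year.
        for p in programs:
            isolated[p] *= 1 + own_growth
        # Networked universe: same 10%, plus a spillover proportional to the
        # average level of the other programs (healthier, better-fed,
        # better-connected people reinforce every other outcome).
        snapshot = dict(networked)
        for p in programs:
            others = [v for q, v in snapshot.items() if q != p]
            networked[p] = snapshot[p] * (1 + own_growth) + spillover * sum(others) / len(others)

    return isolated, networked

if __name__ == "__main__":
    isolated, networked = simulate()
    for p in isolated:
        print(f"{p:20s} isolated={isolated[p]:.2f}  networked={networked[p]:.2f}")
```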
Joint Optimization of Unrelated Outcomes
- e.g., a program to try to decrease incidence and prevalence of HIV/AIDS
- increase service –> decrease incidence and prevalence of HIV/AIDS
- increase quality of service –> decrease incidence and prevalence of HIV/AIDS
- decrease incidence and prevalence of HIV/AIDS –> better quality of life
- this is a fine program model
- all these outcomes are correlated
- you pour a lot of money into this program – lots of people make career choices, intellectual capital goes there
- so what happens to other things in the system?
- fewer people, less money, etc. going to women’s health and other health services
- so perhaps we see improvements in HIV/AIDS outcomes, but then you see worse outcomes in other areas of health
- so instead of doing that, let’s jointly optimize unrelated outcomes
- e.g., instead of trying to optimize just HIV/AIDS outcomes, try to optimize health overall
- of course, it’s hard to convince people of this – how do you decide how much each different group gets?
- another example: you can drill people on reading to get them to do well on a test, but what if that makes them hate reading? Try to jointly optimize so that they do well enough on the test but also love reading (see the toy example after this list)
- have you ever seen HUGE logic models – lots of elements and lots of arrows?
- when you look at these, do you really think they are going to be correct? there’s lots of stuff that we don’t really know for sure; there are feedback loops that may or may not be true (feedback loops do tend to …)
- the famous diagram of the counterinsurgency situation in Afghanistan – you look at it and think that it can’t possibly be right as a whole – things like sensitive dependence, emergence, non-linear effects of feedback loops, etc. aren’t accounted for here
- it’s OK to have these big complex models, but it’s not OK to think that the whole model is true (even if you have data on every arrow within the model – because it doesn’t account for how complex systems behave). You can use the big model to look at pieces of it and think about how they relate to other parts of the model
- he has a blog posting on “a pitch for sparse models” – if things happen in the “input” and “activity” realm, things will happen in the outputs/outcomes realm
- he thinks that people can’t really specify the relationships in the level of detail that we usually see in big logic models (and he thinks it’s egotistical to think that we can do that).
- but it’s not very satisfying to stakeholders to say “we can’t tell you anything about intermediate outcomes”
- evaluators are complicit – we make these big models and stakeholders like it (and he says he is as guilty as anyone else at doing this)
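Here is the toy example of joint optimization mentioned in the reading-drill bullet above (my own invented functional forms and numbers, not from the course): optimizing test scores alone pushes drilling as high as it will go, while jointly optimizing scores and enjoyment of reading lands on far less drilling.

```python
# A toy illustration (my own invented functional forms and numbers) of joint
# optimization. "Hours of reading drill" raises test scores with diminishing
# returns but erodes enjoyment of reading; optimizing the score alone maxes
# out the drilling, while a joint objective lands on much less.

def test_score(hours):
    return 100 * (1 - 0.9 ** hours)        # diminishing returns on drilling

def enjoyment(hours):
    return max(0.0, 100 - 8 * hours)       # enjoyment drops as drilling piles up

def best_hours(objective, candidates=range(0, 21)):
    return max(candidates, key=objective)

if __name__ == "__main__":
    score_only = best_hours(test_score)
    joint = best_hours(lambda h: test_score(h) + enjoyment(h))
    for label, h in (("optimize score only", score_only), ("joint optimization", joint)):
        print(f"{label:20s} hours={h:2d}  score={test_score(h):5.1f}  enjoyment={enjoyment(h):5.1f}")
```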
Attractors
- if you push something out of place and there is an attractor present, it will go back
- e.g., rain that falls all ends up in the river; push a pendulum and ultimately it will end up back in the middle; planetary motion – gravity holds planets in their orbits; kids like playgrounds, so that’s where they’ll end up; animals go to the waterhole (see the pendulum sketch after this list)
- “explains why causal paths can vary but outcomes remain constant”
- attractors are useful because:
- lets you conceptualize change in terms of shape and stability
- insight about program behaviour outside of stakeholder beliefs
- promotes technological perspective: what will happen, not why
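The pendulum sketch referred to above (my own illustration, not from the course): a damped pendulum started from several different pushes always settles back at the bottom – different causal paths, same final outcome, which is the sense in which an attractor “explains why causal paths can vary but outcomes remain constant”.

```python
# A minimal sketch (my own illustration) of a fixed-point attractor: a damped
# pendulum started from several different pushes always settles at the bottom.
# Different causal paths, same final outcome.

import math

def settle(theta0, omega0=0.0, damping=0.5, g_over_l=9.8, dt=0.01, steps=5000):
    """Integrate a damped pendulum (semi-implicit Euler) and return the final angle."""
    theta, omega = theta0, omega0
    for _ in range(steps):
        alpha = -g_over_l * math.sin(theta) - damping * omega
        omega += alpha * dt
        theta += omega * dt
    return theta

if __name__ == "__main__":
    for start in (0.5, 1.5, -2.0, 3.0):
        print(f"start angle {start:+.1f} rad -> settles at {settle(start):+.4f} rad")
```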
How do you decide if you should use complexity thinking in a given evaluation?
- more work to incorporate complexity into an evaluation (than, for example, basing an evaluation on a simple logic model)
- the evaluator – and the evaluation customer – should think about whether the value that is added by doing so is worth the extra work
For Further Reading
Jonny provided an extensive reading list. Here are some that caught my eye and I’m planning to check out:
- Gates, E. F. (2016). Making sense of the emerging conversation in evaluation about systems thinking and complexity science. Evaluation and Program Planning, 59, 62-73. (PubMed)
- Lawlor, J. A., & McGirr, S. (2017). Agent-based modeling as a tool for program design and evaluation. Evaluation and Program Planning, 65, 131-138. (PubMed)
- Morell, J. A. (2010). Evaluation in the Face of Uncertainty: Anticipating Surprise and Responding to the Inevitable. New York: Guilford.
- Morell, J. A. (2019). Revealing Implicit Assumptions: Why, Where, and How? https://www.crs.org/sites/default/files/report_revealing_assumptions.pdf
- Walton, M. (2016). Expert views on applying complexity theory in evaluation: Opportunities and barriers. Evaluation. (Sage)
- Williams, B., & Imam, I. (2007). Systems Concepts in Evaluation. Point Reyes, CA: EdgePress of Inverness. (online pdf)
Image Sources
- Blue ropes photo posted on Flickr by Joe Lodge with a Creative Commons license.
- Green and yellow tubes photo posted on Flickr by alex de carvalho with a Creative Commons license.
- Spiky thing photo posted on Flickr by Manel Torralba with a Creative Commons license.