I promised Jara I wouldn’t overthink this one and just get this off Twitter and into a blog post, and I’m really trying hard to live up to that. (I might have overthought “not overthinking it” though.) I also know that with how fast things are moving now, there’s a balance to be struck between deliberateness and irrelevance. The post I’m writing now isn’t the one I would have written last week, nor is it the one I’d write a month from now. It’s the one I’m writing in this moment, sitting on my couch in my apartment where I’ve been mostly alone for the past two and a half weeks, since the global pandemic came crashing down on these shores.
It’s an interesting time to be an evaluator. There’s a real and present (and needed?) challenge to the relevance of our work. I was speaking to a friend the other day who is working with teams to develop logic models for their programs. I asked her if they were including COVID-19 in their logic models and she said yes, under the “assumptions” section (i.e., “things we assume to be true about our context that would affect our program”). I asked her if they were also including the potential for future global disasters like COVID-19, whose manifestations are as yet unknown. She said no, they were just trying to wrap their heads around what was already happening.
Honestly, can’t blame them. I’m trying to do that too. Program logic models are about diagramming knowns, not unknowns. They are representations of what we believe is happening and why. But they hinge on the assumption that the relevant causality of our contexts is predictable, describable, and repeatable, and that’s just not always true. And, worse than untrue, it’s often not useful. Evaluating in complexity means evaluating in uncertainty. In a space where we fundamentally can’t be sure exactly what is happening, why, or what will happen next. Where knowing what has already happened does not give us predictive confidence over the future. This is the nature of complexity—irreducible uncertainty. Assumptions of certainty don’t apply.
Uncertainty can give rise to anxiety and fear, but also surprise and delight. We wouldn’t gather with friends if we knew exactly what was going to happen each time, what conversations would emerge, what unexpected jokes would make us laugh, what poignant turns there might be. We gather with friends to be immersed in the complexity of human dynamics, confident in our collective ability to navigate whatever happens in service of greater joy and connection. We look forward to what is unknown.
Certainty and uncertainty also came up in our recent Eval Cafe episode with Nora Murphy Johnson and Andy Johnson of Inspire to Change, and Chris Corrigan of Harvest Moon Consultants. We talked about how evaluators are often hired and expected to bring certainty, but in actuality, when we are at our best, we bring more uncertainty. We bring it with the questions that we ask, with the invitation to look more deeply into what is being done, why, and to what effect. Our role then is not to eliminate the uncertainty, but to accompany people well as they move into and through it.
Why is it so hard though? That’s a question I ask a lot. I don’t believe it’s just that uncertainty and complexity are inherently more difficult to work in. They have challenges, but they also have a rhythm and principles. “Human beings are built for complexity” (a phrase I can attribute to Chris). We navigate complexity all the time—our lives are improvisations from the moment we wake up to the moment we fall asleep each day. We raise children. We form societies. We host gatherings. We learn and create languages. We make art. We are literally adapted for these things.
So why is it so hard to work this way? Why is it that folks who teach developmental evaluation (which is just a form of evaluating according to the principles of complexity) have to warn us, truthfully, that we will need to constantly help our clients and stakeholders “stay the course” and not panic and retreat back into the familiarity of formative and summative evaluation? Why is it so hard for many of us (myself included) to even understand what developmental evaluation is and do it, consistently and coherently?
There isn’t a single answer to that question (beyond “it depends”, the ultimate single answer to any question), but there’s one part of the answer that has been on my mind lately, and it speaks to the idea that “injustice is rooted in uncertainty”, a hypothesis that Chris offered on the podcast.
Here’s what I tweeted about it (slightly edited for clarity and links added for reference):
Here is my take: the conversation about complexity is inextricable from the one about justice and equity.
The point on which my whole practice has turned lately is the understanding that the aversion to complexity exists not just because complexity is challenging, but because settler colonialism and white supremacist culture reject complexity. Complexity isn’t just “hard” inherently. It’s specifically hard for particular ways of being and acting. It’s hard to consolidate power in complexity, because complexity requires diversity and collaboration, acceptance of partial and multiple truths, and openness to ongoing change.
Applications of complexity thinking aren’t inherently just. A complexity-informed approach can be used to colonize better, exploit better, oppress better. As @jdeancoffey pointed out, complexity work must intentionally serve equity, not just be assumed to: “And I would add @MQuinnP that we need to do DE in pursuit of equity, liberation and justice. We have to be for something.”
And still I see a resonance between the obstruction and devaluation of the ways of knowing that serve us in complexity and the epistemic injustice and epistemicide levied at Indigenous ways of knowing, which also tend to be relational and contextual. Insistence on certainty at a level inappropriate to the context is both irrational and, when combined with institutionalized inequity capable of enacting and sustaining harm through violence or neglect on specific groups of people, unjust.
In more practical terms, what this means for me as an evaluator is that I have a responsibility to be competent in both certainty and uncertainty, discerning when each is appropriate, and supporting the people I work with in navigating both. Creating certainty is not my job. AND the ongoing questions at all times in my work are, “Whose way of knowing is being centered here? Why? Is it named or presented as the default? In what contexts does that way of knowing operate?” These are @equitableeval questions, and I need them to evaluate in complexity.
I would add not only to evaluate in complexity but to understand complexity and the ways in which it may or may not be grounded in or be in service of equity, liberation and justice – probably need to add healing.
I also followed up with a response to myself: while it might be difficult to consolidate power in complexity, it’s not impossible to do so. Lots of bad things also operate in complexity. Racism, colonialism, exploitative capitalism: these have all thrived and continue to thrive within the parameters of the complexity of human social organization. Complexity can be navigated to many different ends, hence the need for an explicit ethic or axiology of justice to inform where we are headed, and why, and how. As Jara said, we have to be for something.
Human beings aren’t innately good or bad, we’re just human. We’re self-determining and we exist in contexts that influence us and are influenced by us. If it were just a case of deciding we want to work in complexity and skilling up in how to do so, it wouldn’t be so hard. There’s a paradigm shift involved, but it’s doable. There are complementary new practices and techniques to learn, but they’re available. Any muscle takes time and effort to strengthen; every craft can be practiced and honed. But it’s harder than it needs to be because we’re also reinforced in so many systemic ways not to work in complexity, every time there is a demand for certainty that’s disconnected from the actual context being operated in.
There are RFPs and funder expectations that equate to “buying success,” because we only want to fund what is ‘guaranteed’ to work with a ‘proven track record’ (the language of certainty). There is the under-resourcing and over-burdening of the social sector, where the demands for responsibility and accountability are not remotely matched by the necessary support, leading to fear, aversion to risk, and being punished for not delivering exactly what is asked for. There are the ways that the epistemologies of certainty and their accompanying methods (e.g., randomized controlled trials) are accepted and positioned as more credible, valid, and valuable than the epistemologies and methods of complexity and uncertainty (even in casual language like, “well, ideally we’d have harder data on this, but for now this is what we’re seeing on the ground”—anyone obsessing over exponential charts lately knows that the most useful data is the data you have to work with, and that numbers don’t bring certainty, just a different kind of perspective best complemented with other kinds of data, like stories about what other people and countries are doing to cope and manage in highly emergent circumstances). There’s a whole fantastic article from Tanya Beer about all the institutionalized impediments to working in complexity within evaluation itself and within funding organizations. On a grander scale, there’s straight-up capitalism (economic stability, remember?). And white supremacy culture. These are all barriers to just and equitable evaluation as much as to complex evaluation.
The argument here isn’t that complexity and complexity-informed approaches are superior or universally appropriate. I don’t want a complexity-informed approach to making a vaccine. I want the vaccine figured out according to the known good practices we have in that space because that’s a context where certainty, predictability, and established practices apply. And I want to know that we are prepared to have adaptive, responsive, uncertainty-appropriate practices for how we’re going to keep ourselves together as a species over the coming months, which is a very different situation to grapple with. And I want that adaptivity and responsivity to be consciously, explicitly, and fundamentally in service of justice and equity.
And for my fellow white and settler evaluators (and anyone else who wants to), for us to keep asking, “What and whose ways of knowing are being reflected in this evaluation work? Why? Whose values and worldviews are determining what ‘counts’ as valid knowledge? And appropriate ways of generating and using that knowledge?”
And for those of us operating in complexity, to be asking those questions as well, and also asking what we are navigating complexity in service of, and how that shows up in our work.
And for us to look to the work that has been and is being done particularly by people of colour and Indigenous evaluators working in community, in relationship, and in complexity (and who are persistently erased in the evaluation community, particularly when they are women, as has been thoroughly researched and described by Vidhya Shanker and referenced in this recent AEA365 post), not to invent or reinvent evaluating in complexity but to see how it is already being done, listen, and learn.