13 September 2014

Non-profit evaluation (and LEARNING!)

For this summer’s edition of the Stanford Social Innovation Review, my friend Karina Kloos and I wrote a piece about nonprofit evaluation called “Lost in Translation”.

This week I was at a wonderful residential with the Clore Social Leadership team, and I was reminded of the need for a middle ground of dialogue in nonprofit evaluation communication. The reality that too many people do little to no evaluation of how or where they give their money (the ice bucket challenge campaign, which I joined in as well, being one extreme of that) is countered by my fear of the other extreme: that quantitative data is king and all donation decisions should be made on quantitative comparisons.

A trend in assessment/evaluation conversations has moved towards Randomized Controlled Trials (RCTs), the extreme antithesis of blind giving. I have seen a number of academics speak about this trend, many in Oxford, with PowerPoints showing that XYZ intervention (say, giving out water filters for free) has been proven to be more effective than ZYX intervention (say, charging people for water filters). I think it’s AMAZING that these studies are being done, even more wonderful that they are being shared so people can learn, and absolutely fantastic that they are being combined and collectively learned from in things like the Cochrane Collaboration (which my brilliant friend and evaluation expert, Michael Cooke, told me about at our last Brainfood Discussions event). Yet when I’ve asked the academics presenting data that “proves” one way of working is more effective than another, they usually cannot answer my questions about the context of the evaluations: “Were equal numbers of filters given away to both groups in the study? Was this rural or urban? Was education provided as well? How was the reduction in water-borne illness tested? Were these areas where water filters were commonly used already?”

Though I love that data is increasingly becoming available and can be used effectively as a tool in the evaluation and decision-making process, I worry when the data is talked about as a black-and-white answer to very complex problems. In the “giving water filters away” example above, there is so much more to the “how” than the data shows. For instance, what happens when the nonprofits giving these filters away run out of money, change priorities, or move to a new area? What might have been a successful project in the short term might look very different a few years later. Or, when repeated, the intervention could lack the educational component that made it more successful than another in the first place.

As I realized the piece Karina and I wrote is now behind the SSIR paywall, I thought I’d summarize some of our thoughts here. The impetus for our piece was a Stanford University study, which Karina co-founded, that examined the evaluation discourse online and identified 400 key influencers in the conversation about nonprofit evaluation. Only a very small number of those influencers were implementing nonprofit organizations. Additionally, the vocabulary and evaluation methodologies discussed broke into three categories: managerial (the language of business, i.e. returns, investment), scientific (the language of science, i.e. randomized control trials, measurement, data), and associational (the traditional language of the social sector, i.e. mission, values, empowerment, justice), with the first two groups growing in their influence on the sector. It was clear from the study that the key influencers driving the current conversation around evaluation were no longer the nonprofits themselves.

Our piece concluded by exploring five ways we think nonprofits can take back control of the conversation around evaluation of their work, including:

Talk about purpose: “Our view is that all nonprofits should have a clearly defined theory for how they will create change that connects their strategies and programs to the results that they anticipate.” We contend that if nonprofits don’t clearly state their definition of “success” in their work, they leave it to their funders to decide, and by leaving that open to interpretation they may end up drifting from their original mission.

Talk about people: Though the data presented by the academics I discussed above is valid, the stories behind the “hows” of the work were missing, as were the lives of the people being affected. We need to find a middle ground that uses both. “Qualitative assessments that draw on conversations with people are often more consistent with how nonprofits operate, and they are also a methodologically valid form of evaluation.”

Talk about the big picture: Too often nonprofits get so focused on their own work (sometimes to the point of being competitive with other groups) that they overlook how their work fits into the wider ecosystem of change in which they are operating. “In the Stanford study, the influence of competitive, market-based thinking was evident in the prevalence of terms such as ‘value proposition’ (used by 55 percent of entities in the sample) and ‘expected return’ (45 percent).” Trying to pin down “impact” as the specific change attributable to ONE organization is nearly impossible in very complex and changing ecosystems, and focusing only on one organization’s change without seeking out what is needed across a more collaborative system can leave wide gaps, produce inconsistent or false interpretations of causation, and, more importantly, provide less effective support towards any given mission.

Talk about challenges: Too often evaluations are conducted as a “requirement” of a funder, or to produce material for an annual report to donors, leading to overly positive reports. “Assessments shouldn’t be about proving if something worked or not, but rather understanding the context of successes as well as failures.” In this way, global nonprofits can work to share the truth about development work: it’s complex! Nonprofit “theories” of how to create change don’t always work, but through constant evaluation and shifting, we can improve…

Talk about learning: … and learn. A key concern we shared was that evaluation conducted solely for the sake of “others” (funders, etc.) leads to a waste of valuable learning. If evaluations are not designed to help inform improvements so that projects and organizations can better achieve their mission in the future, then we’re also wasting valuable and scarce resources. We question the term “monitoring and evaluation” and suggest that perhaps “the more appropriate term is ‘learning and evaluation.’ In fact, the bottom-line question in any process of nonprofit evaluation should be, ‘What are we learning from this evaluation and how can that be used to help improve our collective work?'”

I’d love to hear your thoughts – drop me a note or reach out via this blog!