“Good” evidence, “bad” evidence: Who decides? Reflections on data, philanthropy, and power

Could it be that most of the evaluations that funders ask nonprofits to conduct are not worthwhile? This was one of the questions explored at the Philanthropic Foundations Canada conference last week. The biennial conference brings together funders from across the country to discuss changes and trends in Canadian philanthropy, and Thursday’s morning plenary panel focused on Powered by Data’s favourite topic: the possibilities and pitfalls of evidence-based philanthropy. For the panel, Caroline Fiennes, Founder of Giving Evidence, was joined by Dr. Janet Smylie, a Cree-Métis physician, health researcher, and Director of the Well Living House, and by Katharine Bambrick, CEO of the Ontario Trillium Foundation.

Most nonprofit staff are not trained researchers, but funders expect them to produce research

Caroline Fiennes opened the discussion by illustrating how nonprofit programs can sometimes give the illusion of impact when correlation is confused with causation. For instance, it’s common for nonprofits to evaluate a particular outcome of interest (say, reading level) pre- and post-intervention, without evaluating a matched control group. This means that natural changes over time (e.g. a child becoming better at reading simply because they are getting older) can be mistaken for an intervention’s impact (e.g. a child becoming better at reading because of an after-school program). It’s a classic novice’s mistake: Statistics 101.
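
To see why the missing control group matters, consider a minimal simulation sketch (in Python, with entirely hypothetical numbers that are not drawn from the panel): the program below has zero true effect, and every child simply improves with age, yet a naive pre/post comparison still reports a sizeable “gain.”

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reading scores: 100 children in an after-school program,
# 100 in a matched control group. Everyone improves ~5 points over the
# year simply by getting older (maturation); the program adds nothing.
pre_treated = rng.normal(100, 10, 100)
pre_control = rng.normal(100, 10, 100)
post_treated = pre_treated + rng.normal(5, 2, 100)
post_control = pre_control + rng.normal(5, 2, 100)

# A naive pre/post evaluation of the program alone "finds" ~5 points
# of impact, even though the program's true effect is zero.
print("Naive pre/post gain:", (post_treated - pre_treated).mean())

# Comparing against the matched control group reveals the truth: the
# difference-in-differences is close to zero.
did = (post_treated - pre_treated).mean() - (post_control - pre_control).mean()
print("Difference-in-differences:", did)
```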

Fiennes discussed how effective philanthropy requires some degree of literacy around research methods. After all, the monitoring and evaluation processes required of most grantees are, ultimately, research. Despite the importance of good methodology, most nonprofit service providers are not trained as researchers, nor should they be expected to be. Fiennes pointed out the consequences of this lack of research capacity amongst nonprofits: correlation is confused with causation, sample sizes are too small to determine whether an effect is significant, and evaluations can be biased. And so, she concluded, most of the evaluations that nonprofits conduct are garbage.
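
The sample-size point can be made concrete with a standard power calculation. Here is a sketch under assumed numbers (a modest effect size of 0.3 and the conventional 80% power at a 5% significance level; none of these figures come from the panel):

```python
from statsmodels.stats.power import TTestIndPower

# How many participants per group does a two-group comparison need to
# reliably detect a modest effect (Cohen's d = 0.3) at the conventional
# 80% power and alpha = 0.05?
n_per_group = TTestIndPower().solve_power(effect_size=0.3, power=0.8, alpha=0.05)
print(round(n_per_group))  # roughly 175 per group
```

Many community programs serve far fewer people than that in a year, which is one reason a small-sample evaluation often cannot tell a real effect from noise.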

Fiennes suggested that nonprofits should leave the often challenging and resource-intensive work of measuring impact to researchers. Rather than being asked by funders to conduct evaluations of efficacy, nonprofits should instead look to existing meta-analyses and reviews in the academic literature to inform their interventions. Philanthropy, as a sector, should work towards developing a science of what constitutes effective giving.

“Not everything that can be counted counts, and not everything that counts can be counted”: The social construction of counting

But who ultimately has the power to decide what is considered effective? As part of the plenary panel, Dr. Janet Smylie challenged the audience to think critically about the social contexts in which evidence is produced in the first place, and to consider the ways in which data production can be culturally biased. Although measurement may seem like an objective process, she discussed how, in the context of Indigenous communities, the indices that are most straightforward to measure are often not the ones that reflect the root causes of inequities. Referring back to Fiennes’ earlier example of “failed” literacy programs, she considered the possibility that a “failed” local program may have had a positive impact on other measures: ones that may not be considered important in the eyes of funders, but that may matter to local communities.

Even in fields such as evidence-based medicine, Dr. Smylie noted, the timeframes for measuring impact and outcomes can be arbitrary when set against the longer history of systemic inequities. For instance, a grant to address Indigenous health may span one to five years: a relatively insignificant window in the broader context of Canada’s centuries-long history of colonialism, which gave rise to the health disparities that many Indigenous communities face. In a follow-up session on data for impact, Dr. Smylie elaborated further on the risks of relying on data without considering social contexts: even “quality” research drawn from government datasets can invisibilize, exclude, or undercount populations that aren’t adequately captured in government records, and can underestimate the severity of inequities.

Reflections on data, power, and community autonomy

Returning from the conference, we are reflecting on how the ideas from this panel, as well as the tensions between them, fit within the context of our organization. Data and evidence excite all of us at Powered by Data: our work is grounded in the belief that the right datasets could help transform the social and philanthropic sectors in a positive way. We are confident that data could enable foundations to identify important gaps in funding; help service providers better understand the outcomes of their users; inform how policy decisions are made; and provide evidence to support advocacy around social inequities. We have found value in frameworks like Caroline Fiennes’, which highlight the need for statistical rigour in growing conversations around the role of data in effective philanthropy.

We also recognize the limits posed by a singular approach to the concept of evidence. Dr. Smylie’s talk was a cautionary reminder, for ourselves and for the sector more broadly, that focusing purely on empiricism and a “science” of philanthropy runs the danger of obscuring different ways of knowing, and of ignoring community-based knowledge within the marginalized communities that philanthropy often purports to serve. Philanthropy is a sector operating on a concentration of wealth and whiteness; research indicates that the proportion of people of colour in foundation leadership positions is particularly low. We cannot talk about data-based decision-making in the sector without talking about asymmetries of power between grantmakers, grantees, and beneficiaries, and without acknowledging histories of marginalization amongst the communities affected by funder decision-making.

Our team has discussed how an over-reliance on data in philanthropy risks increasing top-down decision-making by funders and restricting the autonomy of nonprofits and community-based groups. Our long-term vision is not a sector in which service providers are prevented from making decisions based on community knowledge and relationships. In our ongoing policy work around increasing the use of government administrative data, we are doing our best to move forward thoughtfully and deliberately, and to include a diversity of perspectives. We are collaborating with over thirty service providers and advocacy groups to ensure that conversations around power, community needs, and community knowledge are integrated into these policy discussions. We’re also excited that Dr. Smylie will be helping to advise the initiative on issues related to Indigenous data governance.

At the end of the day, we see data as a valuable tool for the sector. But as with any tool, the social contexts in which it is used and interpreted matter. We’re grateful that PFC made space at their conference for considering the complexity that comes with evidence-based decision-making in philanthropy.

Lorraine Chuen