Considering the politics of evidence

Today’s post is from Lanie Stockman, Outcomes and Evaluation Specialist at Good Shepherd Australia New Zealand, asking us to consider the power dynamics behind the ‘results mantra’ in social service provision.

Too long; didn’t read? 5 quick takeaways on the politics of evidence.

Have you heard of the abbreviation tl;dr? I hadn't until attending a recent meeting of social service evaluators. ‘Too long; didn’t read’ has become the ultimate symptom of people’s busyness and their habit of consuming sometimes complex information in 140 characters or fewer, or better yet, in pictures.

In the non-profit sector, the ‘tl;dr’ mindset sits awkwardly alongside the drive to gather and use evidence to ‘continuously improve’ services and demonstrate results to funders. Quite rightly, public discourse about non-profits has shifted from questioning administration costs to questioning impact. See the media hubbub a few months ago about the Shane Warne Foundation, for example.

Much analysis has acknowledged that evidence, in a broad sense, is political: often unseen and unacknowledged sites of power determine what evidence is valuable, how it is to be used and what is considered robust. By rejecting the perceived neutrality of evidence, we can be more critical and selective in generating evidence that reflects client outcomes, and use it to improve accountability to clients, influence funders and demonstrate impact in ways that lessen the drain on resource-scarce non-profits.

If you’re tempted to skip this post (tl;dr) – here are 5 short considerations regarding the politics of evidence in measuring social service outcomes:

1. What cannot be counted still exists.

As with all research, monitoring and evaluation of social service outcomes is underpinned by questions of ‘for whom’, ‘by whom’ and ‘for what purpose’. We also need to question ‘what’ is valued. The privileging of ‘objective’ and ‘empirical’ evidence abounds in the sector, to the extent that experimental evaluation designs such as randomised controlled trials are considered to ‘take the guesswork’ out of social interventions. Many acknowledge that such experimental designs – in which social change is measured scientifically, as if it were a medical drug trial – are inappropriate unless a study explicitly aims to answer questions of causality and attribution. The ethics of these designs are also contested, particularly because they raise questions about who gets access to interventions. What’s less recognised is the work of some feminist and anti-colonial scholars in rejecting epistemologies that value particular types of information – quantitative data, for example – as logical and objective. According to Nicole Westmarland, the privileging of quantitative data is underpinned by the assumption that “if the reliability, objectivity and validity ‘rules’ are followed ‘the truth’ will be discovered”. This points to issues of omission. Scholar Elaine Coburn asks: what are we ignoring if we do not value “multiple, different, usually marginalized methodologies – Black feminist, crip*, queer and so on”?

2. The ‘sales results approach’ cannot be applied wholesale to measuring social change.

As my colleague Susan Maury pointed out in this previous post, prevailing assumptions about private sector efficiencies are applied to social service provision. It is not uncommon for non-profits to be advised to model a private sector approach to measuring social change. But consider: what evidence would determine the success of a product in the market? And what evidence would determine the success of a family violence response? These questions highlight the very different complexities each sector faces in identifying and measuring results. While private sector efficiencies are the oft-touted benchmark, for-profit approaches to evidencing change cannot simply be transplanted onto a different set of activities with completely distinct aims.

3. Non-profit accountabilities are skewed to power holders: funders.

The current emphasis on upward accountability (that is, from non-profit to funder) likely skews non-profits’ efforts towards satisfying funders – those in control of resources. Because non-profits generally serve communities and individuals who are often voiceless** and tend not to control resources, there is arguably less onus on non-profits to be accountable to clients. Yet non-profits have a responsibility to provide clients with opportunities to understand and influence their work. After all, how can service delivery, social advocacy and policy work be effective if not grounded in the views and experiences of the people we aim to support?

4. We’re in danger of reductionism in efforts to demonstrate results.

In addition to service delivery, the work of non-profits often involves bringing about behavioural, social, systemic and policy change. Yet with the impetus to demonstrate ‘results’, we are in danger of reductionism. To justify funders’ investments, organisational efforts are oriented towards quantifying and communicating short-term results rather than potentially fuzzier longer-term changes. What’s more, this demand for upward accountability has spawned a sub-sector of for-profit enterprises that promise tools to measure social outcomes, yet these tools tend to sidestep the complexities of measuring social change.

5. The need for funding impedes honesty about failure.

There is an inherent yet often unspoken tension: funders want robust data about the impacts of social services (as distinct from marketing collateral), yet those same funders control the purse strings. So how genuine can non-profits be about learning from failure, at least publicly? While there are occasional meaningful and open examples of non-profit learning, there may be a reluctance to report and better understand unintended and negative outcomes lest they reflect poorly on the work of an organisation and thus its ability to access the funds it needs to continue operating.

While these considerations may seem unfavourable – especially to funders and the results measurement industry – they present opportunities for collaboration in navigating the politics of evidence. Here are 3 ideas:

1. Non-profits could support funders to understand the importance of privileging individuals’ lived experiences. While not as neat or perhaps as ‘empirical’ as numerical data sets, techniques such as photovoice enable diverse groups of stakeholders to tell change stories. Good Shepherd, MacKillop Family Services and Jesuit Social Services used an interactive process known as digital storytelling for a research project called ‘I Just Want to go to School’, allowing school leavers from out-of-home care to explain their experiences of schooling in their own words.

2. Value and fund post-implementation evaluations, which seek to find out what happens in the long term, sometimes years after programs end. These are crucial to understanding and evidencing which aspects of social interventions are sustained, and why.

3. Don’t reinvent the wheel. Although policy contexts and demographic profiles change, much can be gained from all stakeholders mining their own organisational data, as well as the publicly available documentation of good practice, and applying and sharing the lessons relevant to current practice.

*Please note this term is Coburn’s; I have retained it in the quote in an attempt to further the use of the reclaimed term. According to the Syracuse University Introductory Guide to Disability Language and Empowerment, some disability cultural groups have reclaimed negative terms like “crip”. Yet the use of reclaimed terms is context-dependent, and it must be acknowledged that negative connotations remain outside the communities that seek to reclaim them.

** As Arundhati Roy has said, “there’s really no such thing as the ‘voiceless’. There are only the deliberately silenced, or the preferably unheard”.