Like any campaigning organization, Oxfam has limited funds, and so needs to know whether its investment has paid off. The push from everyone and their dog to pursue a ‘results agenda’ and ‘value for money’ has added further momentum to that effort. That’s fine if you’re doing something that’s easy to measure (say, vaccinating kids, or cash transfers), and where attributing an effect to a particular cause is relatively straightforward, even if sometimes technical and expensive to establish. But what about influencing government policy, where there are dozens of voices and numerous events, and establishing any causal chain is both elusive and (inevitably) disputed? (Did anyone else grind their teeth watching Bono and Bob making poverty history the other night…?)
This matters because Oxfam increasingly sees a big part of its role as working with others to influence government policy, especially in developing countries, through programmes, partnerships and advocacy.
I got involved in a brain-bending conversation about this when trying to help out with a ‘killer fact’ on some smart campaigning by our team in the Philippines. At first glance, the success of the campaign for a ‘People’s Survival Fund’ was ideally suited to the task. Oxfam and partner iCSC (Institute for Climate and Sustainable Cities) commissioned research, and then launched a campaign in July 2010 calling on the government to set up a climate change adaptation fund. We did all the usual stuff – backgrounders for policy makers, popular mobilization, media work, celeb endorsements etc – and (voilà!) a US$25m-a-year People’s Survival Fund (PSF) was passed by the Philippine Congress in June 2012 after a two-year campaign. Result!
But was it value for money? At first glance it seems pretty easy to calculate the return on the money invested in the campaign – it’s just how much cash reaches poor people over a period of time, compared to the amount Oxfam spent on the campaign, corrected to take into account the fact that Oxfam wasn’t the only organization campaigning on the issue, and so shouldn’t take all the credit.
In mathematical terms, it’s even easier: Return to Campaign (RtC) = (A × B × C) / D, where
A = The total new expenditure on climate change adaptation resulting from the PSF.
B= the proportion of that money that reaches poor people.
C = plausible % of attribution to the Oxfam campaign
D = Oxfam’s expenditure
We calculate the values for A, B, C and D as follows:
A: P$1bn a year, taken over, say, a five-year period, making it P$5bn (about US$125m).
B: If the money is equally distributed among all the people in the areas receiving PSF funds, some 45% would go to poor people (based on the 30-60% poverty rates in the relevant areas). But experience suggests that richer people may be more likely to get their hands on the cash. As we are looking for a conservative estimate here, we therefore assume that only 20% of the money would go to poor people.
C: As the main funder, and lead agency in the lobby effort that led to the PSF, it seems reasonable to take half the credit for the victory, so C = 0.5
D: Oxfam’s total expenditure over the three years of the campaign comes to P$7.4m.
So, using P$m as the unit of calculation:
Return to Campaign = (5000 × 0.2 × 0.5) / 7.4 ≈ 68
i.e. over a five-year period, Oxfam’s campaign generated at least 68 times more resources for climate change adaptation than we invested in the campaign – for every $1 we spent, we generated $68 in climate change adaptation for poor people.
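The back-of-envelope sum above can be reproduced in a few lines (a sketch only – the variable names and figures are simply those defined in the post, all in P$ millions):

```python
# Return to Campaign (RtC) = (A * B * C) / D, using the post's figures.
A = 5000.0  # total new adaptation spending via the PSF over five years (P$m)
B = 0.2     # conservative share of that money assumed to reach poor people
C = 0.5     # share of the victory plausibly attributed to Oxfam's campaign
D = 7.4     # Oxfam's campaign expenditure (P$m)

rtc = (A * B * C) / D
print(round(rtc))  # 68
```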
Enter the nagging self-doubt (otherwise known as Claire Hutchings in our monitoring and evaluation team). Every single one of those terms can be challenged:
A: this assumes all the budget is disbursed and that none gets eaten up by overheads – any underspend or overhead costs would obviously reduce the amount available to reach poor people.
B: how do we know if that is a reasonable estimate of the proportion of the PSF that will ultimately reach poor people?
D: but what about all the other money Oxfam has spent globally and within the Philippines on raising awareness of climate change, supporting partners etc – didn’t that play a role in the victory? What about the cost of programming we’ve done in the Philippines and other countries that has contributed to building the Oxfam brand, enabling us to ‘sit at the table’, participate in these conversations, influence etc?
And then we get to C: let’s assume for a moment that we can get an accurate costing of all the resources Oxfam has spent nationally and globally that have contributed to getting this issue on the agenda in the Philippines, and can reach a credible estimate of the proportion of the PSF that will reach poor people. The question remains: how can we credibly attribute a percentage of any decision to the influence of the campaign?
For example, suppose years of global and national campaigns, by Oxfam and others, had got the issue to a tipping point, where only a small nudge was needed to persuade the government. Should the credit go to the patient slog of a multitude of actors, or to the last-minute glory-grabbing campaign (back to Bono and Bob)? A light-touch approach might be to ask people – staff, partners, government officials and, perhaps most importantly, independent experts – to give us an estimate. But such questions risk being pretty leading (‘please attribute a percentage of attribution to the campaign’ is likely to get an inflated estimate), and open to bias. Doing something more rigorous, to investigate the main factors that contributed to Congress’s decision, would be expensive and might still not find the evidence needed to reach credible conclusions. Now, there’s a whole measurement challenge around evaluating campaigns and advocacy efforts, and through our Effectiveness Reviews we’re investing in trialing and refining an impact assessment approach for this work, one that builds from process tracing, to explore what it takes to reach credible conclusions about the contributions of our work to policy change (watch this space).
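One cheap way to show how much the headline number hinges on that attribution judgement is a simple sensitivity check – an illustrative sketch, not anything from the original calculation, holding A, B and D at the post’s figures and varying only the attribution share C:

```python
# How the Return to Campaign ratio moves as the assumed attribution
# share C varies (A, B, D held at the worked example's values, P$m).
A, B, D = 5000.0, 0.2, 7.4

for C in (0.1, 0.25, 0.5, 0.75):
    rtc = (A * B * C) / D
    print(f"C = {C:.2f} -> RtC = {rtc:.0f}")
# C = 0.10 -> RtC = 14
# C = 0.25 -> RtC = 34
# C = 0.50 -> RtC = 68
# C = 0.75 -> RtC = 101
```

Even under the conservative 20% assumption for B, the ratio swings from 14 to 101 depending on how much credit the campaign claims – which is exactly why the choice of C deserves the scrutiny described above.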
Let’s assume (for the moment) that such evaluations would allow us to credibly attribute our influence. The fact is that these evaluations take time and resources. Do we really need to commission an evaluation every time we want to talk about the resources being leveraged through our campaign work? Or can we identify a rule of thumb, with all the necessary caveats and qualifications, that’s ‘good enough’, at least for cases that seem pretty clear-cut?
What would be good enough in this case? Your thoughts, please.