How do we work out the returns to campaigning? Nice example from the Philippines

December 5, 2012

Like any campaigning organization, Oxfam has limited funds, and so needs to know whether its investment has paid off. The push from everyone and their dog to pursue a ‘results agenda’ and ‘value for money’ has added further momentum to that effort. That’s fine if you’re doing something that’s easy to measure (say, vaccinating kids, or cash transfers), and where attributing an effect to a particular cause is relatively straightforward, even if sometimes technical and expensive to establish. But what about influencing government policy, where there are dozens of voices and numerous events, and where establishing any causal chain is both elusive and (inevitably) disputed? (Did anyone else grind their teeth watching Bono and Bob making poverty history the other night…?)

This matters because Oxfam increasingly sees a big part of its role as working with others to influence government policy, especially in developing countries, through programmes, partnerships and advocacy.

I got involved in a brain-bending conversation about this when trying to help out with a ‘killer fact’ on some smart campaigning by our team in the Philippines. At first glance, the success of the campaign for a ‘People’s Survival Fund’ was ideally suited to the task. Oxfam and partner iCSC (Institute for Climate and Sustainable Cities) commissioned research, and then launched a campaign in July 2010 calling on the government to set up a climate change adaptation fund. We did all the usual stuff – backgrounders for policy makers, popular mobilization, media work, celeb endorsements etc. – and (voilà!) a US$25m-a-year People’s Survival Fund (PSF) was passed by the Philippine Congress in June 2012 after a two-year campaign. Result!

But was it value for money? At first glance it seems pretty easy to calculate the return on the money invested in the campaign – it’s just how much cash reaches poor people over a period of time, compared to the amount Oxfam spent on the campaign, corrected to take into account the fact that Oxfam wasn’t the only organization campaigning on the issue, and so shouldn’t take all the credit.

In mathematical terms, it’s even easier: Return to Campaign (RtC) = (A × B × C) / D

Where

A = the total new expenditure on climate change adaptation resulting from the PSF

B = the proportion of that money that reaches poor people

C = the plausible percentage of attribution to the Oxfam campaign

D = Oxfam’s expenditure

We calculate the values for A, B, C and D as follows:

A: P$1bn a year, taken over, say, a five-year period, making it P$5bn (about US$125m).

B: If the money were equally distributed among all the people in the areas receiving PSF funds, some 45% would go to poor people (based on the 30-60% poverty rates in the relevant areas). But experience suggests that richer people may be more likely to get their hands on the cash. As we are looking for a conservative estimate here, we therefore assume that only 20% of the money would go to poor people.

C: As the main funder, and lead agency in the lobbying effort that led to the PSF, it seems reasonable for Oxfam to take half the credit for the victory, so C = 0.5

D: Oxfam’s total expenditure over the three years of the campaign comes to P$7.4m

So, using P$m as the unit of calculation:

Return to Campaign = (5000 × 0.2 × 0.5)/7.4 ≈ 68

i.e. over a five-year period, Oxfam’s campaign generated at least 68 times more resources for climate change adaptation than we invested in the campaign – for every $1 we spent, we generated $68 of climate change adaptation funding for poor people.
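For anyone who wants to reproduce the arithmetic, here is a minimal sketch of the calculation in Python. The function and variable names are purely illustrative (this isn’t an Oxfam tool); the values are the P$m figures above.

```python
# A minimal, illustrative sketch of the back-of-envelope sum above.
# All money values are in P$m.

def return_to_campaign(A, B, C, D):
    """Return to Campaign (RtC) = (A * B * C) / D."""
    return (A * B * C) / D

rtc = return_to_campaign(
    A=5000,  # total new adaptation spending from the PSF (P$1bn/year x 5 years)
    B=0.2,   # conservative share of the money reaching poor people
    C=0.5,   # plausible attribution to the Oxfam campaign
    D=7.4,   # Oxfam's campaign expenditure
)
print(f"RtC = {rtc:.0f}")  # RtC = 68
```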

Enter the nagging self-doubt (otherwise known as Claire Hutchings in our monitoring and evaluation team). Every single one of those terms can be challenged:

A: assumes all the budget is disbursed and that none gets eaten up by overheads – any underspend or overhead costs would obviously reduce the amount available to reach poor people.

B: how do we know if that is a reasonable estimate of the proportion of the PSF that will ultimately reach poor people?

D: but what about all the other money Oxfam has spent globally and within the Philippines on raising awareness of climate change, supporting partners etc – didn’t that play a role in the victory? What about the cost of programming we’ve done in the Philippines and other countries that has contributed to building the Oxfam brand, enabling us to ‘sit at the table’, participate in these conversations, influence etc.?

And then we get to C: let’s assume for a moment that we can get an accurate costing of all the resources Oxfam has spent nationally and globally that have contributed to getting this issue on the agenda in the Philippines, and can reach a credible estimate of the proportion of the PSF that will reach poor people. The question remains: how can we credibly attribute a percentage of any decision to the influence of the campaign?

For example, suppose years of global and national campaigns, by Oxfam and others, had got the issue to a tipping point, where only a small nudge was needed to persuade the government. Should the credit go to the patient slog of a multitude of actors, or to the last-minute, glory-grabbing campaign (back to Bono and Bob)? A light-touch approach might be to ask people – staff, partners, government officials and, perhaps most importantly, independent experts – to give us an estimate. But such questions risk being pretty leading (‘please attribute a percentage of the outcome to the campaign’ is likely to get an inflated estimate) and open to bias. Doing something more rigorous, to investigate the main factors that contributed to Congress’s decision, would be expensive and still might not find the evidence needed to reach credible conclusions. Now there’s a whole measurement challenge around evaluating campaigns and advocacy efforts, and through our Effectiveness Reviews we’re investing in trialling and refining an impact assessment approach for this work, one that builds on process tracing, to explore what it takes to reach credible conclusions about the contributions of our work to policy change (watch this space).
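To make the fragility of that point estimate concrete, here is a quick illustrative sensitivity sweep over the two most contested parameters. The ranges are assumptions for the sketch, loosely suggested by the figures discussed above, not evaluated numbers.

```python
# Illustrative sensitivity sweep over the two most contested parameters.
# Assumed ranges for this sketch: B from the conservative 20% up to the
# 45% population share; C from a 'small nudge' 10% up to the 50% claimed.

A, D = 5000, 7.4  # P$m: new adaptation spending; Oxfam's campaign cost

for B in (0.20, 0.45):
    for C in (0.10, 0.50):
        rtc = (A * B * C) / D
        print(f"B={B:.2f}, C={C:.2f} -> RtC = {rtc:.0f}")

# RtC runs from about 14 (B=0.20, C=0.10) to about 152 (B=0.45, C=0.50):
# an order-of-magnitude spread driven entirely by the two assumptions.
```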

Let’s assume (for the moment) that such evaluations would allow us to credibly attribute our influence. The fact is that these evaluations take time and resources. Do we really need to commission an evaluation every time we want to talk about the resources being leveraged through our campaign work? Or can we identify a rule of thumb, with all the necessary caveats and qualifications, that’s ‘good enough’, at least for cases that seem pretty clear-cut?

What would be good enough in this case? Your thoughts, please.

8 comments

  1. B: I would also take into account the fact that poor people may get more money for climate change adaptation but less of other kinds of support, because it’s important to know where the money for the PSF is taken from. Most probably it will be missing somewhere else.

    But of course I know it’s not possible to take into account all the factors :-)

  2. Great topic to look at Duncan – it’s one I’ve been grappling with for a while. Two things spring to mind:

    1. What happens when your campaigning win doesn’t lead to a commitment of funds? In the campaigning team I am part of, we often campaign for policy change that does not involve a financial commitment. For example, we have been campaigning for a Groceries Code Adjudicator to rein in supermarkets for a long time. More recently we have been pushing for this adjudicator to have the power to fine supermarkets, to help hold them to account for how they treat suppliers. This was announced yesterday, so is a great campaigning win and could really make a difference to the lives of the poor. But assuming we could produce a figure for all the other variables in your equation, we would have no way of identifying A (or consequently B).

    2. I think this is a really good way of starting to identify the VFM of campaigning work. However, it is, as you yourself point out, imperfect. My worry is that the more we try to show these things, the more they will become expected by donors. And that scares me! I am imagining being grilled about why our return was only 50 rather than the 72 we promised in a funding proposal… So I wonder if we should be more honest and admit that it cannot be done (at least not in any scientific, quantifiable sense). But that does not mean that campaigning is not worth doing – far from it. On the other hand, it might be a good idea for us to try to come up with something more viable first, before something less suitable is pushed upon us by others…

  3. I suppose it depends who your audience is, but I’d say that this kind of simple back-of-an-envelope analysis is really interesting and useful, as long as you are clear and upfront about your assumptions (although a health warning – it’s these kinds of challenges which led economists down the path of homo economicus: (over?)simplifying assumptions to try to make sense of complexity).

    I’d be most worried about Jana’s point – this isn’t free money that the government is committing; it’s coming from somewhere else – so you really need to think somehow about where that money might have come from. Good luck with that!

  4. I tend to hew more to Lee’s (the third commenter’s) caution. There is a complexity issue here that the metrics used will miss – or are missing – entirely. Citing the homo economicus example was spot on, because the parallels to ‘campaigning’ – with regard to how economists measure quality of life with their limited tools – are discernible.

    I recognize the exercise’s value – a determination of the impact of campaign investments. If only for the provocation that the initial figures will generate, as an exercise it is really welcome and, I suspect, will even often be necessary. For sure, as a collective discussion with partners, it will be useful, since it tends to surface ideas about things that have not been measured by the methodology. However, if it is used as a major, if not the major, yardstick to assess a campaign’s success – and Duncan’s piece did not elaborate on whether or not other tools will be used alongside the Returns to Campaign equation, and how – on its own I think it will produce odd conclusions, or it might even lead support astray, since, from an incredibly complex set of variables, it might pare – not distill, but whittle – details down to a handful of recognizable but limiting insights. One example: if the Returns measurement has been used regularly over the years, try to total all Oxfam Campaign Returns in the last three years. It might produce strange figures that may, at best, create a paint-by-numbers Mona Lisa that approximates the original image in a highly pixelated manner, as opposed to an actual Hieronymus Bosch, which is chaotic but also blazing with unruly ideas.

    Anyway, I’ve read this only once, so maybe there will be a few more ideas after a second reading.

  5. You need to include the bloated-boasts coefficient. For a campaigning organisation that coefficient is pretty high – usually anywhere between 1/100 and 1/1000. Let us be generous and put the bloated boasts at the lowest end of 1/100. Therefore Return to Campaign = [(5000 × 0.2 × 0.5)/7.4]/100 = 0.68.

  6. The general question of the ROI of lobbying, at least, is pertinent far beyond the aid community, and those with far larger sums of money at stake have an interest in the answer as well. The amount that large corporations with potential benefit/harm from the government invest in lobbying certainly suggests that the collective market intelligence’s answer is that returns are good. Also, campaigning and lobbying are almost certainly subject to strongly diminishing returns, indicating that the marginal ROI of typical small-budget aid initiatives is likely to be higher than that of the bigger-budget lobbying efforts of pharmaceuticals, defense contractors, and energy corps, which still appear positive.
    And here is a paper that also finds evidence that lobbying ROI is very high:
    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1375082
    It seems like this information would apply to aid campaigns.

  7. Duncan: Interesting post and good question that we are also grappling with at CGD. We are piloting a version of expected return to see what makes sense and how we might use this kind of exercise, both for management and communications. The pilot period isn’t yet over, so I don’t want to prejudge where we’ll come out, but my quick reactions from our experience so far are: (a) it’s generated a constructive internal debate already (b) the main value is likely to be in the process of thinking through the steps of research to influence to outcome, rather than the accuracy of any calculations, (c) the common objections to the accuracy of the specific numbers are probably insurmountable, but there may be benefit from using a common process (e.g., we may never really believe that Oxfam or CGD “deserve” 10% credit for some outcome, but if the questions used to come up with that figure are clear and regular, then we could have confidence that Oxfam or CGD’s role in one project with a 25% attribution is different than another with 10%), and (d) yes, I also worry that there is a risk that funders will try to use some process like this to judge projects in a way that will be problematic and skew incentives for researchers to be open and honest, but I think the thoughtful ones are fully aware of the shortcomings. I look forward to seeing where Oxfam eventually comes out.

  8. Great set of comments – thanks to everyone who contributed. The overarching message I take away is to be clear on what such an exercise does/doesn’t do, and be prepared to defend that position against metrics fetishists.

    What it does:
    – gives a fairly convincing account of just how good an investment campaigning can be. As with venture capital, even if the majority of such campaigns fail, you still emerge with real gains overall
    – raises lots of useful questions about how we design and monitor campaigns (as Todd suggests)

    What it doesn’t do:
    – provide a new metric for assessing all, or even most, campaigns, many of which do not end up with a financial result and take place in even messier, more complex systems where establishing attribution is impossible
    – replace the need for a much bigger exercise to ‘count what counts’, in the sense of developing more rigorous qualitative tools to measure impact on the distribution and renegotiation of social, political and economic power.

    Thanks again, everyone, and I’ll keep you posted as this work progresses
