We talk a lot in the aid biz about wanting to achieve long-term impact, but most of the time aid organizations work in a time bubble set by the duration of a project. We seldom go back a decade later to see what happened after we left. Why not?
Everyone has their favourite story of the project that turned into a spectacular social movement (SEWA), or produced a technological innovation (M-PESA), or spun off a flourishing new organization (New Internationalist, Fairtrade Foundation), but this is all cherry-picking. What about something more rigorous: how would you design a piece of research to look at the long-term impacts across all of our work? Some initial thoughts, but I would welcome your suggestions:
One option would be to do something like our own Effectiveness Reviews, but backdated – take a random sample of 20 projects from our portfolio in, say, 2005, and then design the most rigorous possible research to assess their impact.
There will be some serious methodological challenges to doing that, of course. The further back in time you go, the more confounding events and players will have appeared in the interim, diluting attribution like water running into sand. If farming practices are more productive in this village than a neighbour, who’s to say it was down to that particular project you did a decade ago? And anyway, if practices have been successful, other communities will probably have noticed – how do you allow for positive spillovers and ripple effects? And those ripple effects could have spread much wider – to government policy, or changes in attitudes and beliefs.
As always with some MEL question that makes my head hurt, I turned to Oxfam guru Claire Hutchings. She reckons that the practical difficulties are even greater than I’d imagined: ‘even just understanding what the project actually did becomes challenging in closed projects. Staff and partners that worked on the project are usually funded by the restricted funding. Once that ends, they move on, and without them it is incredibly difficult to get at what really happened, to move beyond what the documentation suggests that they would do or did.’
And Claire thinks trying to look for attribution is probably a wild goose chase: ‘I do think we need to manage/ possibly alter expectations about what we can realistically consider through an evaluation once a project has been closed for some time. Trying, for example, to understand the change process itself, and the various contributing factors (of which the project may be one), rather than focusing on understanding/ evidencing/ evaluating a particular project’s contribution to a part of the change process.’
My colleague John Magrath, who acts as Oxfam’s unofficial institutional memory, reckons one answer is to make more use of historians and anthropologists, who are comfortable with a long-term perspective. He also points out that the level of volatility looks very different from a donor perspective than from that of a local organization – donors come and go, but local organizations are often much more stable over time. So partly (echoing Claire’s point), we should just try to understand the origins of their success, and not worry so much about taking some sort of credit for it.
Tim Harford argues in Adapt that successes usually rise phoenix-like from the wreckage of earlier failures, and that applies to the aid business too. How do we know when people have acquired skills in some long defunct and abandoned project and then applied them somewhere else with huge success?
Or the same project can move from initial failure to subsequent success, as Chris Blattman recently wrote about in a US social programme. If your MEL stops with the project, you may never even find out about that eventual success.
The point of doing all this would be to explore how the focus or design of particular projects affects their long-term impact, and so shape the design of the next generation. Any foundations out there interested in taking up the challenge?
The good news is that others are already thinking along these lines – I just got this from VSO’s Janet Clark:
‘As a starting point we have commissioned an evaluation one year after we closed down all our in-country programmes in Sri Lanka. Although this is only a small step towards understanding sustainability, we are also looking at the feasibility of assessing impact over a longer period across other programmes.
The evaluation questions centred on how local partners defined capacity; what contribution VSO volunteers made to developing capacity; alternative explanations for changes in organisational capacity; unanticipated consequences of capacity development; and the extent to which capacity development gains have been sustained. There was also an exploration of how change happened: the key factors in whether or not capacity development was initially successful and subsequently sustained, and what is uniquely and demonstrably effective about capacity development through the placement of international volunteers.
A year after closing our country office, there were some practical logistical challenges in carrying out an evaluation – we received valuable support from former staff, but they were busy with new jobs and projects. Former partners sometimes struggled to remember the details of interventions that dated back ten years, and in some instances key partner staff had moved on. These factors meant that the evidence trail was sometimes weak or lost, but this was not always the case.
We are now trying to identify good practice in this area, so I would be very interested in hearing of others’ experience.’
Over to you
Update: a lot of great ideas and links in the comments section – make sure you read them.