What do we know about the long-term legacy of aid programmes? Very little, so why not go and find out?

May 21, 2015

Why not come back in a decade?

We talk a lot in the aid biz about wanting to achieve long-term impact, but most of the time, aid organizations work in a time bubble set by the duration of a project. We seldom go back a decade later and see what happened after we left. Why not?

Everyone has their favourite story of the project that turned into a spectacular social movement (SEWA) or produced a technological innovation (M-PESA) or spun off a flourishing new organization (New Internationalist, Fairtrade Foundation), but this is all cherry-picking. What about something more rigorous: how would you design a piece of research to look at the long-term impacts across all of our work? Some initial thoughts, but I would welcome your suggestions:

One option would be to do something like our own Effectiveness Reviews, but backdated – take a random sample of 20 projects from our portfolio in, say, 2005, and then design the most rigorous possible research to assess their impact.
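To make the sampling step concrete, here is a minimal sketch in Python of how such a backdated draw might work. The file name and column names are illustrative assumptions on my part, not a real project register:

    import csv
    import random

    # Load a (hypothetical) register of projects that closed in 2005.
    with open("project_register_2005.csv", newline="") as f:
        projects = list(csv.DictReader(f))

    # Fix the seed so the draw is reproducible and auditable -
    # anyone can re-run it and confirm the 20 were not hand-picked.
    random.seed(2005)
    sample = random.sample(projects, k=20)

    for p in sample:
        print(p["project_id"], p["country"], p["title"])

The fixed seed is the point: a random but reproducible sample guards against exactly the cherry-picking problem described above.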

There will be some serious methodological challenges to doing that, of course. The further back in time you go, the more confounding events and players will have appeared in the interim, diluting attribution like water running into sand. If farming practices are more productive in this village than in a neighbouring one, who’s to say it was down to that particular project you did a decade ago? And anyway, if practices have been successful, other communities will probably have noticed – how do you allow for positive spillovers and ripple effects? And those ripple effects could have spread much wider – to government policy, or changes in attitudes and beliefs.

As always with some MEL question that makes my head hurt, I turned to Oxfam guru Claire Hutchings. She reckons that the practical difficulties are even greater than I’d imagined: ‘even just understanding what the project actually did becomes challenging in closed projects. Staff and partners that worked on the project are usually funded by the restricted funding. Once that ends, they move on, and without them it is incredibly difficult to get at what really happened, to move beyond what the documentation suggests that they would do or did.’

In the long run, not all aid projects are dead

And Claire thinks trying to look for attribution is probably a wild goose chase: ‘I do think we need to manage/ possibly alter expectations about what we can realistically consider through an evaluation once a project has been closed for some time.  Trying, for example, to understand the change process itself, and the various contributing factors (of which the project may be one), rather than focusing on understanding/ evidencing/ evaluating a particular project’s contribution to a part of the change process.’

My colleague John Magrath, who acts as Oxfam’s unofficial institutional memory, reckons one answer is to make more use of historians and anthropologists, who are comfortable with a long-term perspective. He also points out that the level of volatility looks very different from a donor perspective to that of a local organization – donors come and go, but local organizations are often much more stable over time, so partly (echoing Claire’s point), we should just try and understand the origins of their success, and not worry so much about taking some sort of credit for it.

Tim Harford argues in Adapt that successes usually rise phoenix-like from the wreckage of earlier failures, and that applies to the aid business too. How do we know when people have acquired skills in some long defunct and abandoned project and then applied them somewhere else with huge success?

Or the same project can move from initial failure to subsequent success, as Chris Blattman recently wrote about in a US social programme. If your MEL stops with the project, you may never even find out about that eventual success.

The point of doing all this would be to explore how the focus or design of particular projects affects their long-term impact, and so shape the design of the next generation. Any foundations out there interested in taking up the challenge?

The good news is that others are already thinking along these lines – I just got this from VSO’s Janet Clark:

‘As a starting point we have commissioned an evaluation one year after we have closed down all our in-country programmes in Sri Lanka. Although this is only a small step in the direction of understanding sustainability, we are also looking at the feasibility of assessing impact over a longer period across other programmes.

But how do you prove it?

The evaluation questions centred around how local partners defined capacity and what contribution VSO volunteers made to developing capacity, alternative explanations for changes in organisational capacity, unanticipated consequences of capacity development, and to what extent capacity development gains have been sustained. There was also an exploration of how change happened, such as what the key factors were in whether or not capacity development was initially successful and subsequently sustained, and what is uniquely and demonstrably effective about capacity development through the placement of international volunteers.

After one year of closing our country office there were some practical logistical challenges of carrying out an evaluation – we received valuable support from former staff but they were busy with new jobs and projects. Former partners sometimes struggled to remember the details of some of the interventions that dated back ten years and in some instances key partner staff had moved on. These factors meant that the evidence trail was sometimes weak or lost but this was not always the case.

We are now trying to identify good practice in this area so I would be very interested in hearing of others’ experience.’

Over to you

Update: a lot of great ideas and links in the comments section – make sure you read them

30 comments

  1. Another option is time- and location-bound evaluations of all development activities. Dessalegn Rahmato has published a short report / book of this type in Ethiopia (reference below). Unfortunately the text is difficult to acquire outside of Ethiopia (Duncan I can send you a copy if you don’t have one). Although that text isn’t as evidence-driven as one might want, it provides an interesting model for inquiry. Dessalegn does an excellent job bringing in the political context to development projects that he explores during a 40-odd year period.

    Rahmato, D. 2007. Development Interventions in Wollaita, 1960s-2000s: A Critical Review. Forum for Social Studies, Monograph No. 4: Addis Ababa.

    In the same area of Ethiopia, Concern is working with a university partner to assess the impact of 15 years of interventions, having shifted their activities to new priority areas, including comparing similar areas that were not targeted in that period. The study is still underway; I am told they hope to finish this year.

    1. Yes, that would be interesting – take a heavily aided province in any given country, and try and work out which aid programmes have had impact, and how. Anyone done that?

      1. I think that was the PaDev approach? http://www.padev.nl/index.htm
        “Instead of looking at the interventions of only one external actor, the PADev method first studies changes in a region over a specified period, and then tries to find out which interventions contributed to which changes.”

  2. Duncan,

    In my mind a plausible answer to your question lies in this observation:

    “After one year of closing our country office there were some practical logistical challenges of carrying out an evaluation – we received valuable support from former staff but they were busy with new jobs and projects. Former partners sometimes struggled to remember the details of some of the interventions that dated back ten years and in some instances key partner staff had moved on. These factors meant that the evidence trail was sometimes weak or lost but this was not always the case.”

    The answer is to change how impact evaluation is done so that it is not an afterthought but an integral part of implementation. The second point is to dispense with the large experimental designs which fail to address flux and the complex ways in which people live their lives. Rigor is not the issue here but the full range of what is being investigated. For example, I have in the past talked of how a community chose to deal with a witchcraft concern before clean water. Accordingly, I would favor small objective investigations that focused on particular issues and how they would influence outcomes. What I know is that often projects fail even before they start – it just takes one terrible start-up meeting to jinx the whole effort!

    1. totally agree Cornelius – real-time evaluations and course corrections are the way forward. This idea is an addition rather than an alternative to that.

  3. Utilizing the DAC definition of impact as long-run effects, the World Bank’s Operations Evaluation Department (OED, now IEG) produced several Impact Evaluation Reports which went back 5-10 years after project completion. Many of these studies had comparison groups, though not identified in a way to necessarily avoid selection bias.

    Some of the reports are mentioned in my short review available here https://ideas.repec.org/p/pra/mprapa/1111.html. Others may be available in World Bank Documents and Reports.

  4. I look at participatory projects implemented from the eighties to the present in the KP province of Pakistan, which had huge bilateral and multilateral funding. Very little of the institutional memory resides in the government, which was the main partner. All the people involved in design and implementation, both in the government and donors, have long gone. Indeed, one donor was presenting a model project based on an evaluation in a seminar, little realising that the next project they had designed dropped all the good points they were mentioning. But what is interesting is that the knowledge and memory continue to live on in the local institutions that had partnered in the project, because these organisations have an area-based approach and continue to work in the area in which the original projects were located. It would be interesting to see what has lived on and the impact of each project, because the knowledge exists and can be collected with great ease.

    1. Thanks Masood. ‘All the people involved in design and implementation both in the government and donors have long gone’ – where have they gone to?

  5. Actually, the issue of project staff moving on is itself a crucial part of the evaluation I think. The (I hate this term, but here goes) capacity building of local staff who work on development projects is a massive impact in many cases. Many of the young people with disabilities I have hired in INGOs have gone on to work in private sector companies and government, and have even started their own local NGOs – the impact of these young people is huge. Local experts who receive quality training and mentoring through working on aid projects then go and work inside government Bureaus of Statistics, Finance departments, etc. Even if the actual project approach isn’t adopted and continued, the ability of these people to continue to work on issues, build in what they’ve learned and do what works on a larger scale shouldn’t be under-estimated.

    1. Thanks Caitlin, but if our long-term impact is through acting as a substitute vocational training provider, what changes to our business model would that imply? If that was your aim, would you come up with the current project/donor/implementer system?

      1. Very true Duncan! Of course this can’t be our only aim, but maybe something to build into impact evaluation – unintended, but nonetheless important impacts.

        I agree with Ross Clarke below – ditch the project. In fact, doesn’t World Vision have a model of 10-year commitment and investment in specific poor communities? I wonder whether they do a follow-up evaluation some period after the investment ends?

  6. Solution – ditch the project, work within longer-term time horizons, and track impact over the period we know meaningful social change takes to occur. If only…

  7. While I totally agree with all the daunting practical and methodological challenges that you/Claire talk about, would it be overly cynical of me to ask whether the challenge is also related to how much our sector really cares about this? There’s no doubt about the personal and professional commitment of many staff to learning, and when that is backed up by unrestricted or flexible funding you can get great examples of efforts to look at this (Plan’s post-intervention studies, for example). But if what our donors – and often internal leaders – hold us to account for are just short-term and often tangible results, and there’s no particular reward or punishment for doing long-term good or bad, it seems these long-term studies will be rare exceptions to standard practice. As you say, though, foundations or other sorts of donors with more freedom to consider the long term should be providing leadership on this, and hats off to those pioneers who just do it because it is right!

    1. Thanks Michael, it feels like there’s some kind of research gap here, between the hive of activity around short term project evaluation, driven by aid funding, and longer term ‘did aid help South Korea’ kinds of questions, driven by academic research funding. Would you agree that it’s the bit in between that is missing?

  8. Duncan this is an important issue. Some thoughts:
    a) It is very helpful to have well-curated information about context, background, baselines (if they exist), past evaluations etc AND details of ‘project alumni’ held in the public domain; for a good example see Rick Davies’ Poverty Alleviation in Ha Tinh collection http://www.mande.co.uk/htpap/hatinh.htm
    b) It may well make sense as suggested to look at collective impact, and arguably at unintended consequences. This case study of donor impact on political change at the grassroots in Vu Quang District of Ha Tinh Province, Viet Nam is an interesting example, and also part of Rick’s collection http://www.mande.co.uk/htpap/docs/Hopkinsreport.pdf
    c) Retrospective methods can, for some variables, be helpful if done well (http://www.ncbi.nlm.nih.gov/pubmed/9351141), and process tracing as done by political scientists (http://polisci.berkeley.edu/sites/default/files/people/u3827/Understanding%20Process%20Tracing.pdf) often looks at long time scales and the relationship between particular events and contextual change
    d) As Claire suggests, one probably needs to think in terms of reducing uncertainty about making a plausible claim about sustainable impacts, rather than proving them to be the result of a particular intervention. Given, as you say, that we know so little, reducing our lack of knowledge is a useful step in the right direction!
    e) Finally it might be more useful to take a positive deviance approach and work back from an outlier or particularly positive outcome (e.g. the decline of HIV/AIDS in Southern Africa) and explore the degree to which projects or aid activities actually did, or did not, contribute. This might contextualise things better and be somewhat easier to do than a more egocentric ‘project-out’ approach, i.e. one that works out from a particular project and desperately seeks some long-term impact.

  9. Duncan,

    I’m glad you raised this issue as it’s one of the few topics in the aid world that makes me smile. DFID have rather courageously funded us to run evaluations of two systemic change programmes in Ethiopia. The evaluations run for 11 and 12 years and look at land reform and industrial development respectively. Both include 5-year ex-post impact assessments. The fact that they are longitudinal makes tracking attribution a lot easier.
    Attribution is seen as a problem in systemic change programmes anyway, but our strategy for overcoming this allows us to look at the simultaneous aid programmes, failures, successes and external shocks which can all lead to change. For example, we’re conducting wide-ranging quant/qual industry-wide censuses repeated across the 12 years. We’ll look at what the programme does, what others do and how its networks facilitate the transmission of change across value chains.
    Anyway, I don’t want to get into a methodological vortex but suffice it to say that it can be done and is being done (I hope!). The baseline report will be out in the next few months and should give an indication of how we can accurately assess the sustainability of impact.

  10. I feel it is worth mentioning the NGO Water for People, which monitors all of its results for at least 10 years after it finishes its efforts, and also aims to eventually embed monitoring systems within local institutions.
    Perhaps the nature of water and sanitation service provision lends itself to long-term monitoring more than more complex initiatives do, but it is still an example to learn from.
    http://reporting.waterforpeople.org/

  11. I think part of the problem is taking a specific intervention as the starting point. To me, to really assess how much a specific intervention has contributed to change (hopefully positive) one would have to start the other way round, i.e. look at what has changed ‘across the board’ (and what has not), in what ways, and why/what the key factors of change are, in the different locations supposed to have been covered by the intervention, and try to discern from there how that specific intervention fitted with the rest (other interventions, local dynamics, broader macro change trends etc.). Longitudinal, cross-sectoral research.

    I’ve just seen, refreshing my browser, that this was already suggested in the first comment. Great minds think alike…

  12. The Inter-American Foundation visits a handful of communities five years after IAF funding ends. Summaries of 13 ex-post assessments are posted on our website (www.iaf.gov), and we’ll post another eight soon. We’re not trying for a rigorous impact evaluation, nor concerned about attributing success to us, per se. The point is to explore with our former grantee partner and community members who were involved in the project what happened, what worked and didn’t work, and why. Methodologically, it helps that the ex-post assessment is able to build on data on a few key indicators collected at the outset of our involvement with the grantee, then twice each year during the project period and summarized in a project history at the end.
    Because IAF grants respond to initiatives conceived and carried out by local organizations, we don’t face the problem of long-gone project staff mentioned by others, and we find many people who remember the project and what it was like to work with the IAF. We often hear that the grantee organization was able to get funding from others after IAF funding ended, and that this owes in part to capabilities strengthened when working with the IAF. The results are quite substantial in some cases, measured in major expansion in export volume of local products, for example. We’ve also found a few failures, and they have also been instructive. Each year, we adjust how we conduct the ex-posts to engage more of our team and facilitate collective learning.

  13. Duncan, thanks for this refreshing blog!

    In my experience one of the key issues is really that staff leave their local NGO (government & international NGOs pay better salaries…) and there is no institutional memory.
    Secondly, it’s our fault as donors because we request our local partners to break down their interventions into artificial blocks of usually three years.
    Another problematic issue until today is the lack of proper data collection. We can find a lot of data graveyards but not necessarily the right information. Useful baselines are still a major problem. So what can be done (with or without an external evaluation)?

    There is, for example, a beautiful collection of small tools that can be applied directly with communities. “Lifelines” are a powerful tool to measure contributions in case you “forgot” to collect your data:

    “Lifeline gives the participants and the facilitators a good idea of the development in a community. Lifelines can also show experiences and the history of people, organizations or communities. Community members are asked in a meeting what times they can remember in their community. Often a significant event is taken as starting point (a drought, a bumper harvest, violent clashes, the building of a road etc.). People name those years that have been best and worst. The best are given 5 points: the highest rating. The worst get 1 point: the lowest rating. Then all other years are given between 1 and 5 points. A graphic description of developments is created. The rating is what makes it different from conventional time-lines. The discussion in the community generates much information (and reflection) on what caused the developments. Community members raise their level of awareness on the situation in the community. The explanations they give are as important to note as the figures themselves…”

    http://www.ngo-ideas.net/tiny_tools/
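    To make the scoring scheme in the quote concrete, here is a minimal sketch of how a lifeline could be charted once a meeting has rated each year. The ratings below are invented for illustration – the tool itself is done on paper with communities, not in code:

        import matplotlib.pyplot as plt

        # Invented example ratings from a community meeting: each remembered
        # year is scored from 1 (worst) to 5 (best), anchored on significant
        # events such as a drought or a bumper harvest.
        ratings = {
            2005: 1,  # drought - the worst year, lowest rating
            2006: 2,
            2007: 3,
            2008: 5,  # bumper harvest - the best year, highest rating
            2009: 4,
            2010: 3,
        }

        years = sorted(ratings)
        scores = [ratings[y] for y in years]

        plt.plot(years, scores, marker="o")
        plt.yticks(range(1, 6))
        plt.xlabel("Year")
        plt.ylabel("Rating (1 = worst, 5 = best)")
        plt.title("Community lifeline")
        plt.show()

    The discussion of why the line rises and falls is the real output; the chart is just the prompt for it.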

  14. Great post, Duncan. Totally agree. Gender sensitisation projects similarly tend to be evaluated six months after the trainings, which tells us little about how egalitarian ideas influence people over the life course.

    My approach was to interview people of all ages, ask if they’d heard of ‘gender’, when they learnt about it, what they thought of that programme etc. I then triangulated this with observations of different programmes. My aim was to find out what kind of sensitisation was most effective and why some people seemed more persuaded than others.

    If you’re interested: http://www.sciencedirect.com/science/article/pii/S0016718514002541

  15. We do have to try and measure the long-term effects of our programmes. However, I find several barriers:

    a) Our programmes are not the only element acting on the field. External forces (global/national economic winds, natural disasters, political and demographic changes…) could be stronger. When measuring, we may find the noise is bigger than the signal.

    b) Aid agencies and the different kinds of donors may have a limited interest in long-term evaluation. NGOs should warn donors: “Thank you for your support. Please kindly wait for ten years to be told about your donation’s effect”. What would happen?

    c) Even companies are not very keen on long-term anything. They plan and operate for the next quarter or for the current fiscal year. This is especially true for bonuses (not only for the top management). Optimizing the current reporting period can very well kill you in the next period, but this is widely ignored by business people.

    d) Did anyone mention Climate Change?

    Long-term thinking is very good. And someone must be the first one to do it, I guess…

  16. Duncan, indeed, daunting practical and methodological challenges if one aspires to longitudinal impact assessments and evaluations using the standard methodological toolbox. I agree with posters above on exploring the range of approaches that are being tested and used by colleagues in the field. See for example FSG’s work on measuring collective impact (http://www.fsg.org/approach-areas/collective-impact). Chris Roche mentioned process tracing, which Oxfam had combined with outcome mapping to document program outcomes (see Oxfam GB’s Effectiveness Review from Tanzania on the Citizen Voice Outcome Indicator) – incidentally, on top of my reading pile this week!

    At CDA, we advocate for the use of systems thinking in evaluations of peacebuilding programs, along with examining both ‘collective impact’ and ‘cumulative impact’ of socio-political and development efforts by multiple local and international actors. Later this year, we will release a report on cumulative impacts of peacebuilding based on analysis of 16 case studies (as wide-ranging as N. Ireland, Solomon Islands and Guatemala). Similarly, we took the cumulative impact angle while examining aid recipients’ perspectives on international assistance during the Listening Project exercises in 20 countries (some had seen 5+ decades of international assistance). These learning processes were by all means purely qualitative, focused on joint learning, analysis and reflection, and not evaluative.

    In 2012, Oxfam invited CDA to do a longitudinal listening exercise in Tamil Nadu, with a focus on “How did Oxfam assistance make a difference in people’s lives during and after the Asian tsunami of 2004?” I was accompanied by staff from the Secretariat and Oxfam India. We engaged in retroactive inquiry about what was actually “done” and “how things were decided” 8 years earlier by 6 different Oxfam affiliates who responded in Tamil Nadu. A fascinating listening exercise, with insights into the long-term effects of Oxfam’s decision to register newly rebuilt houses to women, or to distribute boats to non-fishing castes. A complicated story of long-term impacts – social (i.e. gender roles, women empowered through property titling, which was reinforced by changes in Indian govt policies), intercommunal (i.e. effects on caste relations due to changes in the labor equation after too many boats were distributed), environmental (overfishing), etc. More here: https://www.oxfam.org/en/research/listening-exercise-report-tamil-nadu-southern-india Again, not a research study, but I would have wished someone could explore these topics further and deeper!

    On attribution analysis within longitudinal evaluation… use caution? There is an important distinction between examining the cause of a known effect versus looking at the effect of a known cause (e.g. a change in govt policy on registering property to both men and women, or to women only). In this case, the cause is the policy, and we can study the effects. Oxfam’s contribution was important on that front – people spoke about it 8 years later – about Oxfam modeling and advocating for this. But the “sociopolitical change” river is wide and many other streams flow into it! Tracing from “women perceive a shift in household power dynamics” (effect) to identify the “cause” of this and attribute “credit” is not just harder but perhaps also futile…

  17. In the past couple of years I’ve been back to countries I used to work in 10-15 years ago. Although the HIV projects I worked on (i.e. the 3- or 5-year funded logframes) are long gone, I like to ask while I am there if there is any sign or legacy of what we did back then. Not whether the impact of the specific services we provided endures today (at the end of the day, when you are talking about health services, once you stop providing them the impact peters out pretty quickly) but rather, did the approaches or ideas we introduced catch on, have they informed practice now? It’s particularly important because, working in HIV, back then we were developing what were essentially small-scale, pilot or “demonstration” projects. HIV responses have scaled up massively since that time and these days we are talking about ending the epidemic, so I think that part of the measure of our success should be whether today’s scaled programmes (implemented by other NGOs or by governments) ditched the useless bits and built on the good bits. Duncan’s post makes me think this sort of appraisal should be done more systematically.

  18. I have been looking at education, where the long term is the norm.

    I am writing about refugee education projects and their effect in the home country after return (usually poor), and about schools founded in emergencies which have survived well many years later (surprisingly common).

    I am also looking at emergency interventions which were basically fulfilling the organisation’s agenda rather than that of the parents and children (many peace programmes that ‘helped’ the victims but never touched the perpetrators), and unnecessary psychosocial interventions when the parents said they just wanted to get back to normal, do the official exams and let the children get on with their lives.

    The key in education is that it is always longer than any NGO or UN intervention – we can find the children years later. I will publish a short paper on this shortly.

  19. Hi Duncan,

    This is a really interesting post, and something that is very close to my heart.

    I’ve been involved in emergencies work where the focus is often on the next big disaster and not what happened or was learnt after the last one. KP (formerly NWFP) that Masood refers to was one of the areas that was badly affected by the Kashmir earthquake in 2005, so I wonder if that’s part of the reason why most of the people involved in the design and implementation both in the Government and donors have long gone…? Any ideas Masood?

    That said, the comments in this post have given me hope that things are changing!

    I’m familiar with the journey that VSO is on as Janet and I have been in touch to swap ideas about post-project evaluations.

    In case anyone is interested, EveryChild is currently in the process of closing all of our international programmes through a carefully managed process, and transferring our income and assets to a new global alliance (Family for Every Child), in order to increase the long-term impact on the lives of children. More background to this here: http://www.theguardian.com/global-development-professionals-network/2014/feb/07/power-international-ngos-southern-partners

    We decided we wanted to evaluate the impact of these exits by looking at the situation over time. The report from the first phase of the evaluation, which was carried out last year by INTRAC, is available here: http://www.everychild.org.uk/intrac-responsible-exit-report

    Phase 2 of the research is due to take place in 2016, to see how things are going several years later.

    We debated how long to leave it before doing the follow-up evaluation, and decided that attribution would become difficult after a couple of years, that ethics came into play because there would be minimal ability to respond to negative findings if we discovered that things had gone really badly later down the line, and that we needed to factor in the deadline of asset handover to a new organisation. Therefore it made sense to complete the evaluation in 2016 rather than leaving the 2nd part of the evaluation any later than this. However, it would be absolutely fascinating to return to see how things are going 10 years down the line…

  20. I remember reading in the 1980s a book called “We Just Don’t Know”. Two development professionals who were keen to write a book on development success stories asked around to find out about people’s favourite effective projects so they could find out what worked and why. They then went out and looked for them. And didn’t find much. The conclusion was that We Just Don’t Know what makes things work. Of course that was the naïve bad-hair new wave 1980s. It would be interesting to repeat the exercise now…

  21. I was prompted to reply because this has just been cross-posted on the World Bank People, Spaces, Deliberation blog. It’s such an important issue. I find the best learning comes from work which really digs into what happens over time and what works in making sure there’s a legacy. A really useful article for us was a review of the kind of media support work the BBC was doing in the early 1990s and what had – or had not – been left behind. We’ve changed our strategies since then and no longer believe, for example, that journalism training on its own is a useful thing to do. http://journalism.blogs.southwales.ac.uk/2012/09/27/radio-can-training-journalists-transform-societies/

    But you’re right Duncan, we should have got better at this by now. I remember going to this ODI debate on the “long term effects of project aid” years and years ago and wondering then why so little of this work happens. http://www.odi.org/events/117-long-term-impacts-project-aid-evidence-china

  22. Great post with much to consider. Our organisation recently completed a study of the long term impact of development interventions in the Koshi Hills of Nepal, covering a period of about 30 years. The team used a variety of methods to understand how DFID programming did or did not make a difference in the lives of people living there.

    More details can be found here: http://r4d.dfid.gov.uk/Output/194107/Default.aspx
