Evaluation

What do we know about the long-term legacy of aid programmes? Very little, so why not go and find out?

Duncan Green - May 21, 2015

We talk a lot in the aid biz about wanting to achieve long-term impact, but most of the time, aid organizations work in a time bubble set by the duration of a project. We seldom go back a decade later and see what happened after we left. Why not? Everyone has their favourite story of the project that turned into a spectacular social movement (SEWA) …


Participatory Evaluation, or how to find out what really happened in your project

Duncan Green - October 23, 2014

Trust IDS to hide its light under a bushel of off-putting jargon. It took me a while to get round to reading ‘Using Participatory Process Evaluation to Understand the Dynamics of Change in a Nutrition Education Programme’, by Andrea Cornwall, but I’m glad I did – it’s brilliant. Some highlights: [What’s special about participatory process evaluation?] ‘Conventional impact assessment works on the basis of snapshots in …


Ups and downs in the struggle for accountability – four new real time studies

admin - September 5, 2013

OK, we’ve had real time evaluations, we’ve done transparency and accountability initiatives, so why not combine the two? The thoroughly brilliant International Budget Partnership is doing just that, teaming up with academic researchers to follow in real time the ups and downs of four TAIs in Mexico, Brazil, South Africa and Tanzania. Read the case study summaries (only four pages each, with full versions if you …


How real-time evaluation can sharpen our work in fragile states

admin - September 2, 2013

Pity the poor development worker. All the signs are that their future lies in working in ‘fragile and conflict-affected states’ (FRACAS) – the DRCs, Somalias, Afghanistans and Haitis. As more stable countries grow and lift their people out of poverty, that’s where an increasing percentage of the world’s poor people will live. And (not unconnected) they are the hardest places to get anything done – …


Can impact diaries help us analyse our impact when working in complex environments?

admin - July 8, 2013

One of the problems with working in a complex system is that not only do you never know what is going to happen, but you also aren’t sure which developments, information, feedback etc. will turn out (with hindsight) to be important. In these results-obsessed times, what does that mean for monitoring and evaluation? One answer is to keep what I call an ‘impact diary’, where you …


So what do I take away from the Great Evidence Debate? Final thoughts (for now)

admin - February 7, 2013

The trouble with hosting a massive argument, as this blog recently did on the results agenda (the most-read debate ever on this blog) is that I then have to make sense of it all, if only for my own peace of mind. So I’ve spent a happy few hours digesting 10 pages of original posts and 20 pages of top quality comments (I couldn’t face adding …


Theory’s fine, but what about practice? Oxfam’s MEL chief on the evidence agenda

admin - February 6, 2013

Two Oxfam responses to the evidence debate. First Jennie Richmond, our results czarina (aka Head of Programme Performance and Accountability), wonders what it all means for the daily grind of NGO MEL (monitoring, evaluation and learning). Tomorrow I attempt to wrap up. The results wonkwar of last week was compelling intellectual ping-pong. The bloggers were heavy-hitters and the quality of the comments provided …


The evidence debate continues: Chris Whitty and Stefan Dercon respond from DFID

admin - January 23, 2013

Yesterday Chris Roche and Rosalind Eyben set out their concerns over the results agenda. Today Chris Whitty, DFID’s Director of Research and Evidence and Chief Scientific Adviser, and Stefan Dercon, its Chief Economist, respond. It is common ground that “No-one really believes that it is feasible for external development assistance to consist purely of ‘technical’ interventions.” Neither would anyone argue that power, politics and …


Lant Pritchett v the Randomistas on the nature of evidence – is a wonkwar brewing?

admin - November 21, 2012

Last week I had a lot of conversations about evidence. First, one of the periodic retreats of Oxfam senior managers reviewed our work on livelihoods, humanitarian partnership and gender rights. The talk combined some quantitative work (for example the findings of our new ‘effectiveness reviews’), case studies, and the accumulated wisdom of our big cheeses. But the tacit hierarchy of these different kinds of knowledge worried …


Getting evaluation right: a five point plan

admin - October 25, 2012

Final (for now) evaluationtastic installment on Oxfam’s attempts to do public, warts-and-all evaluations of randomly selected projects. This commentary comes from Dr Jyotsna Puri, Deputy Executive Director and Head of Evaluation at the International Initiative for Impact Evaluation (3ie). Oxfam’s emphasis on quality evaluations is a step in the right direction. Implementing agencies rarely make an impassioned plea for evidence and rigor in their evidence …
