Tag: evaluation

Simplicity, Accountability and Relationships: Three ways to ensure MEL supports Adaptive Management

Chris Roche, a mate and mentor in all things system-y, reflects on what sounds like a Filipino version of our recent Bologna workshop. The week before Duncan was slaving away in Bologna on adaptive management, I was attending an Asia Foundation ‘practitioners’ forum’ in Manila. The focus of the event was on Monitoring, Evaluation, and […]

The Project Cycle in Complex Systems – cartoon version

Jo Rowlands spotted this gem in a recent Intrac Newsletter. It’s drawn by Bill Crooks, based on an original concept by Nigel Simister.

What do we know about the long-term legacy of aid programmes? Very little, so why not go and find out?

We talk a lot in the aid biz about wanting to achieve long-term impact, but most of the time, aid organizations work in a time bubble set by the duration of a project. We seldom go back a decade later and see what happened after we left. Why not? Everyone has their favourite story of […]

Participatory Evaluation, or how to find out what really happened in your project

Trust IDS to hide its light under a bushel of off-putting jargon. It took me a while to get round to reading ‘Using Participatory Process Evaluation to Understand the Dynamics of Change in a Nutrition Education Programme’, by Andrea Cornwall, but I’m glad I did – it’s brilliant. Some highlights: [What’s special about participatory process evaluation?] […]

Ups and downs in the struggle for accountability – four new real-time studies

OK, we’ve had real-time evaluations, we’ve done transparency and accountability initiatives, so why not combine the two? The thoroughly brilliant International Budget Partnership is doing just that, teaming up with academic researchers to follow in real time the ups and downs of four TAIs in Mexico, Brazil, South Africa and Tanzania. Read the case study […]

How real-time evaluation can sharpen our work in fragile states

Pity the poor development worker. All the signs are that their future lies in working in ‘fragile and conflict-affected states’ (FRACAS) – the DRCs, Somalias, Afghanistans and Haitis. As more stable countries grow and lift their people out of poverty, that’s where an increasing percentage of the world’s poor people will live. And (not unconnected) […]

Can impact diaries help us analyse our impact when working in complex environments?

One of the problems with working in a complex system is that not only do you never know what is going to happen, but you aren’t sure which developments, information, feedback etc. will turn out (with hindsight) to be important. In these results-obsessed times, what does that mean for monitoring and evaluation? One answer is […]

So What do I take Away from The Great Evidence Debate? Final thoughts (for now)

The trouble with hosting a massive argument, as this blog recently did on the results agenda (the most-read debate ever on this blog), is that I then have to make sense of it all, if only for my own peace of mind. So I’ve spent a happy few hours digesting 10 pages of original posts and […]

Theory’s fine, but what about practice? Oxfam’s MEL chief on the evidence agenda

Two Oxfam responses to the evidence debate. First, Jennie Richmond (right), our results czarina (aka Head of Programme Performance and Accountability), wonders what it all means for the daily grind of NGO MEL (monitoring, evaluation and learning). Tomorrow I attempt to wrap up. The results wonkwar of last week was compelling intellectual ping-pong. The […]

The evidence debate continues: Chris Whitty and Stefan Dercon respond from DFID

Yesterday Chris Roche and Rosalind Eyben set out their concerns over the results agenda. Today Chris Whitty (left), DFID’s Director of Research and Evidence and Chief Scientific Adviser, and Stefan Dercon (right), its Chief Economist, respond. It is common ground that “No-one really believes that it is feasible for external development assistance to consist purely of […]

Lant Pritchett v the Randomistas on the nature of evidence – is a wonkwar brewing?

Last week I had a lot of conversations about evidence. First, one of the periodic retreats of Oxfam senior managers reviewed our work on livelihoods, humanitarian partnership and gender rights. The talk combined some quantitative work (for example, the findings of our new ‘effectiveness reviews’), case studies, and the accumulated wisdom of our big cheeses. But […]

Getting evaluation right: a five point plan

Final (for now) evaluationtastic installment on Oxfam’s attempts to do public warts-and-all evaluations of randomly selected projects. This commentary comes from Dr Jyotsna Puri, Deputy Executive Director and Head of Evaluation of the International Initiative for Impact Evaluation (3ie). Oxfam’s emphasis on quality evaluations is a step in the right direction. Implementing agencies rarely make […]
