What do DFID wonks think of Oxfam's attempt to measure its effectiveness?

October 24, 2012 By admin

More DFIDistas on the blog: this time Nick York, DFID’s top evaluator, and Caroline Hoy, who covers NGO evaluation, comment on Oxfam’s publication of a set of 26 warts-and-all programme effectiveness reviews.

Having seen Karl Hughes’s 3ie working paper on process tracing and talked to the team in Oxfam about evaluation approaches, Caroline Hoy (our lead on evaluation for NGOs) and I have been reading with considerable interest the set of papers that Jennie Richmond has shared with us on ‘Tackling the evaluation challenge – how do we know we are effective?’.

From DFID’s perspective, and now 2 years into the challenges of ‘embedding evaluation’ in a serious way into our own work, we know how difficult it often is to find reliable methods to identify what works and measure impact for complex development interventions.  Although it is relatively well understood how to apply standard techniques in some areas – such as health, social protection, water and sanitation and microfinance – there are whole swathes of development where we need to be quite innovative and creative in finding approaches to evaluation that can deal with the complexity of the issues and the nature of the programmes.  Many of these areas are where NGOs such as Oxfam do their best work.

So we would really like to welcome and applaud Oxfam’s new Effectiveness Reviews, which adopt a clear and practical framework for assessing what difference it is making, through its partners, in the development process. It is a big step forward for them – and it would be great if it also inspires other organisations to develop new and interesting approaches to measuring results and undertake rigorous analysis of what works.  Clearly this needs to be done in a way which each organisation can afford and resource – things need to be done in a proportionate way – but the Oxfam initiative shows some of what is possible.

They have chosen quite a practical strategy – picking out a random sample of programmes and then probing more deeply, using different techniques to measure impact or well-tried monitoring of performance indicators.

Of course there is one potential drawback – random sampling may mean there are gaps in what you can say, if key areas don’t happen to have been sampled this time.  Oxfam also notes that the reviews do not necessarily enable full understanding of why a programme is successful (e.g. in Pakistan) and that they now need to go back and undertake some more work.   One way round this is more purposive sampling – we don’t know if this was considered –  or identifying priority themes up front based on what the organisational objectives are, and focusing on them in some depth.  The key challenge is finding a strategy for using the limited resources for evaluation and data collection in a targeted way that gives a nice balance between extensive coverage and intensive analysis.

Another challenge is maintaining the independence and integrity of those carrying out the evaluations.   Finding impartial observers – given that many people and experts have worked for years in these areas and know each other well – can be difficult.

The very interesting study of policy influencing by Oxfam’s partner in Bolivia, Fundacion Jubileo, is worth looking at in some detail.   It made a good case that the grantee was really having an impact on some key aspects of social change in Bolivia.  The evaluators clearly applied the process tracing technique skilfully and identified the most significant changes – but it must have been difficult to stay objective when doing the interviews, working with the grantee and identifying who was really influencing whom.  Howard White and Daniel Phillips’s paper on ‘small n’ techniques talks a lot about the biases that one needs to avoid in using these techniques.  The appendix to the study provides an excellent and useful set of reflections on the use of the process tracing methodology and what the evaluators learned.

One key assumption is that by doing more work and collecting more data (e.g. from comparison sites in Zambia and the Philippines), they will be able to understand and demonstrate impact.   Actually, based on discussions we have had with Michael Woolcock recently in DFID, we have started to ask a different sort of question.    In some types of programmes, more data and more work may not be the solution – more innovative methods and approaches to understanding impact may be required, and if the programme itself develops as you implement it, then the goal posts are continually shifting too.

Looking ahead, and thinking about the next stages of this agenda: first, we would encourage others to share their approaches and experiences in the way that Oxfam has done.   Second, it would be great to see Oxfam and other NGOs sharing resources to develop better methods across the sector, given their common challenge of demonstrating results with limited resources.   The results agenda is particularly challenging for smaller organisations, whose inputs are increasingly recognised – so can we ask whether Oxfam sees itself in a position to demonstrate leadership in linking with such organisations to jointly share and explore results?

Nick York is DFID’s Chief Professional Officer – Evaluation and Caroline Hoy, its Results and Evaluation Specialist, Civil Society Department.