When we (rigorously) measure effectiveness, do we want accountability or learning? Update and dilemmas from an Oxfam experiment.

October 18, 2013

Claire Hutchings, Oxfam’s Global MEL Advisor, brings us updates on an interesting experiment in measuring impact – randomized ‘effectiveness reviews’.

For the last two years, Oxfam Great Britain has been trying to get better at understanding and communicating the effectiveness of its work. With a global portfolio of over 250 programmes and 1,200 associated projects in more than 55 countries, covering everything from farming to gender justice, the challenge has been grappling with the scale, breadth and complexity of this work. So how are we doing? Time for an update on where we’ve got to, with apologies in advance for a hefty dollop of evaluation geek-speak.

After much discussion and thought, we developed our Global Performance Framework (GPF), which comprises two main components. The first is Global Output Reporting, where output data are aggregated annually under six thematic headings to give us an overall sense of the scope and scale of our work.

Secondly, in addition to the headline numbers, we need to drill down on the effectiveness question, which we’ve been doing via rigorous evaluations of random samples of mature projects. These evaluations – known as ‘Effectiveness Reviews’ – were launched in 2011/12 and today we’re releasing the first batch of effectiveness reviews from 2012/13, covering everything from strengthening women’s leadership in Nigerian agriculture to building sustainable livelihoods for Vietnam’s ethnic minorities.
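The random-selection step can be sketched in a few lines of code. This is a minimal illustration, not Oxfam’s actual system: the project records, the three-year maturity rule and the sample size are all hypothetical.

```python
import random

# Hypothetical portfolio records: (project_id, theme, years_running)
portfolio = [
    ("NGA-014", "womens-leadership", 4),
    ("VNM-221", "livelihoods", 5),
    ("TZA-307", "governance", 2),
    ("PHL-102", "resilience", 6),
    ("HND-055", "community-banks", 3),
]

def sample_mature_projects(projects, min_years=3, n=2, seed=42):
    """Randomly select n projects that have run long enough to evaluate.

    Central random selection avoids cherry-picking: every mature
    project has the same chance of being chosen for review.
    """
    mature = [p for p in projects if p[2] >= min_years]
    # A fixed seed makes the draw reproducible, so the selection
    # can be audited later.
    rng = random.Random(seed)
    return rng.sample(mature, min(n, len(mature)))

selected = sample_mature_projects(portfolio)
```

The point of the sketch is simply that the sampling frame is filtered for maturity first, then drawn from at random – the property that underpins the accountability claim discussed below.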

The measurement approaches have developed quite a lot from the first year, so let’s start there. For the reviews of our ‘large n’ interventions (i.e. those targeting large numbers of people directly), we have been adapting the approach used by OPHI for the measurement of complex constructs, to measure both women’s empowerment and resilience. This has improved our overall measures.

For example, our initial women’s empowerment reviews primarily considered women’s influence in household and community decision-making. This has now been expanded to cover dimensions such as personal freedom, self-perception and support from social and institutional networks. Our resilience framework has expanded to include food security and dietary diversity.
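The OPHI-style counting approach behind these composite measures can be illustrated with a toy example. The dimensions, equal weights and 0.5 cutoff below are hypothetical, chosen purely for illustration – they are not Oxfam’s actual indicators.

```python
# Toy counting measure in the spirit of OPHI's Alkire-Foster method,
# applied to a composite construct such as women's empowerment.
# Each respondent scores 1/0 on each dimension; dimension weights sum
# to 1; a respondent is counted as "empowered" if her weighted
# achievement score meets the cutoff k.

# Hypothetical dimensions with equal weights -- illustrative only.
WEIGHTS = {
    "household_decisions": 0.25,
    "community_influence": 0.25,
    "personal_freedom": 0.25,
    "social_support": 0.25,
}

def achievement_score(respondent):
    """Weighted sum of binary achievements across dimensions."""
    return sum(w for dim, w in WEIGHTS.items() if respondent.get(dim, 0))

def empowered_headcount(respondents, k=0.5):
    """Share of respondents whose weighted score meets the cutoff k."""
    empowered = [r for r in respondents if achievement_score(r) >= k]
    return len(empowered) / len(respondents)

sample = [
    {"household_decisions": 1, "community_influence": 1,
     "personal_freedom": 0, "social_support": 0},   # score 0.50 -> counted
    {"household_decisions": 1, "community_influence": 0,
     "personal_freedom": 0, "social_support": 0},   # score 0.25 -> not counted
]
rate = empowered_headcount(sample)
```

Expanding the framework, as described above, amounts to adding new dimensions (and reweighting) in the `WEIGHTS` table, which is what makes this style of measure adaptable as the construct’s definition matures.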

We’ve  already written about the challenges and lessons learnt from ‘small n’ interventions (where there are too few units to allow for tests of statistical differences – such as advocacy and campaigns), but essentially we continue to learn about how best to ensure consistency in application of the process tracing protocol, and how to answer the question ‘do we have sufficient evidence to draw credible conclusions?’ We also experimented with bringing outcome harvesting and process tracing together in a review of the Chukua Hatua programme in Tanzania.

We’ve smartened up the benchmarks for the humanitarian indicator toolkit and changed some of the standards we’re considering (contingency planning is out, replaced by resilience and preparedness). We have also piloted an Effectiveness Review of our own accountability (report out soon), bringing in an external reviewer to look in depth at the leadership, systems and practices of OGB and partner staff, and to reach conclusions on the evidence available on the degree to which Oxfam’s work meets its own standards for accountability at project level.

We’re waiting to finalise the full set of reports – the final batch will be published in November/December – so it’s premature to start presenting summary findings, but I’m keen to dive into what I think remains one of the key outstanding challenges of this exercise:  learning. The two key drivers for the GPF – accountability and learning – are often competing for attention and arguably require different approaches to the design, implementation and use of evaluations.

At the end of the first year, I think it’s fair to say that the consensus was that the GPF, and the effectiveness reviews in particular, were too heavily weighted towards ‘upward’ and ‘outward’ accountability (to donors and northern publics). So the challenge has been how to reorient them to better serve a learning agenda.

There are some noteworthy examples of the reviews contributing to project-level learning. In Honduras, for example, the results of a review of community banks were disseminated to people in both intervention and comparison communities; in the “comparison” municipality, Oxfam’s partner organisation highlighted points that the local government and community banks there could learn from those in the project area. In Tanzania, the effectiveness review of Chukua Hatua is being used to develop phase 3 of the programme. In the Philippines, the Oxfam team has decided to use the effectiveness review as a baseline for the next phase of the project, and to conduct the exercise again themselves in two years’ time.

At an organisational level, we’re starting to pull out thematic learning, as well as lessons on the design and implementation of interventions. We’re also seeing an increasing appetite for including impact assessments in the initial programme design (rather than as an afterthought). For all that, though, I think it’s fair to say that learning from the effectiveness reviews remains a challenge (that means a problem, btw). Let’s talk through some of the main sticking points (in case any of you out there can help).

As with last year, there is a tension over the choice of projects: randomly selecting them at ‘the centre’ avoids cherry-picking and arguably gives us a more honest picture of effectiveness, but it doesn’t always mesh with what countries and regions most want to learn. Evaluative questions often can’t be fully explored until an intervention is mature, but by then the intervention may no longer be topical and the questions may have moved on. Without a continuing programme, it may feel too late for evaluation to feed into learning (it’s not, btw – there’s lots that we can draw into current and future programming).

It is also, crucially, about ownership – do the evaluation questions being asked by the effectiveness reviews sufficiently mirror the project’s own theories of change? Do project teams feel engaged by those questions, and therefore able to respond to the findings?

The evaluation designs for the ‘large n’ interventions in particular are complex and often unfamiliar to programme staff, and may fail to ‘tell the story’ in ways that are meaningful to the team’s broader understanding of their operating environment.  And while they help us to answer the question of whether or not our programme has had an impact, they often cannot explain why that is the case.
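For context, the ‘large n’ designs boil down to comparing outcomes between intervention and comparison groups. A stripped-down illustration, with made-up numbers for a notional 0–1 livelihoods index:

```python
from statistics import mean

# Made-up outcome scores for households in each group -- illustrative only.
intervention = [0.62, 0.70, 0.55, 0.68, 0.74, 0.59]
comparison   = [0.51, 0.48, 0.60, 0.53, 0.49, 0.57]

# The estimated impact is the difference in mean outcomes. Real
# effectiveness reviews add matching or weighting (e.g. propensity
# scores) to make the comparison group credible, plus significance
# tests -- all omitted here.
impact_estimate = mean(intervention) - mean(comparison)
```

A positive `impact_estimate` says *whether* the programme appears to have made a difference, but nothing in the calculation explains *why* – which is exactly the limitation the paragraph above describes.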

We’re working to address some of these challenges. We have revised our sampling criteria to ensure we are selecting larger, more mature projects to increase the relevance to country staff. We are undertaking more qualitative research (or linking up with already-planned qualitative evaluations) to support learning.

We are creating more chances for project teams to engage with and inform the evaluations, to more thoroughly unpack their theory of change and build understanding and ownership of the questions that the reviews are trying to answer – recognising that we need to get this up front, to ensure teams are able and willing to act on the findings.

We’re also doing more to support learning from the reviews – undertaking more follow up research, working with project teams to think through how they might act on recommendations, and at an organisational level working with the relevant thematic advisors to feed learning from the reviews into future programming. And from all the reviews, there is a lot that we can learn about about how to improve programme design and implementation more broadly.

But, at the core, I still worry that there is an inherent tension between organisational accountability and programme learning.  Are they compatible/achievable with the same tool (has anyone ever actually killed two birds with one stone?), and if so, how can an organisation get the balance right? And if not, where do we draw the line between these two agendas?


  1. I really wish you well in this effort. I hope that people living in poverty will somehow see real benefit from this complex agenda Oxfam has undertaken.

  2. Thank you Claire for this. My experience is that accountability and learning are two sides of the same coin – but a coin that is always changing its shape. Accountability can help us learn, and learning can support accountability. The problem comes when we do not find the correct place, the correct tension, between the two, given their different interests, implications and expectations.
    The challenge is designing/adapting the processes or tools to respond to the different kinds of accountability and the different kinds of learning requirements, expectations, interests and needs. That implies recognising, clarifying, agreeing on and accepting them, and it involves different kinds of people from different organisations. This is not easy – it is complicated and complex, and it changes on an ongoing basis – so the tool needs to be flexible, responsive, developmental. In my view, the line between the two is not fixed but always moving; we must learn to live with, and be comfortable with, this high degree of uncertainty.

  3. I would submit that one of the potential points of congruence between learning and accountability is around the scale of the intended audience. I often see learning that is focused just on a specific project, which has a great tension with efforts to track accountability for a given donor/implementer’s role in effecting targeted change.

    Conversely, there are sometimes efforts to learn more broadly about interventions that work at greater scale, often with lots of rigor behind them – with the type of accountability implicit in knowing whether intervention X works. At this level, the learning and accountability agendas align more, because they are in service of the same effort.

    Sadly, many of the efforts to learn at a broader scale are so dominated by particular methodological considerations toward rigor that they either lack deep explanatory power or, in my mind more significantly, they answer such narrow questions as to be uninteresting.

    The biggest challenge I see is therefore identifying what bigger pattern (or theory of change) a given project’s interventions are part of, so that what’s learned isn’t a simplistic “do citizen report cards work?” but a more useful “which aspects of the generation and application of citizen report cards enhance their effectiveness and why?”

    And as you’re doing, this mostly requires much more engagement with project teams to unpack their theories of change sufficiently to identify the aspects that are expected to be dependent not on “context” but either always true, or dependent on something (willingness of parents to challenge authority figures on behalf of children, say) that could be considered a variable across locations.

  4. I deeply sympathize with this tension. We are struggling with this as well in our M&E efforts. It is encouraging to hear from other (much larger) organizations that are still trying to navigate this balance. Thanks for taking the time to put this out there and contribute to the conversation.

  5. Great article. The tension between accountability and learning can be lessened with a rigorous sort of accountability for applying what we learn instead of being embarrassed by it – which is another article in itself. I could talk about all this stuff for hours but I guess that’s the main point I’d be making. Thanks for sharing your thinking and achievements.

  6. Do you know how the Global Learning Programme is being ‘rigorously measured’? Oxfam is receiving a large amount of DfID foreign aid money to implement this programme in UK schools to UK teachers. I know the programme will be very effective in raising donations for Oxfam but I am confused about how the effectiveness on poverty reduction will be measured.

  7. Perhaps we need to accept that we do need different – but complementary – tools within a programme cycle to satisfy accountability and learning. Perhaps a “mid term review” might focus on learning for the programme and its next iteration, as well as undertake an evaluability assessment to prepare for the “effectiveness review” that then takes place at maturity and responds more specifically to the accountability agenda.

  8. For learning’s sake I shared your question with the community (you should join if you’re interested in learning! [disclosure, I’m on the voluntary core group]).

    It would take too much space to summarise it all here – the discussion is in the email archive – but the general thrust is that they are both sides of the same coin. Three points:

    1. ‘Leave accountability to the accountants and focus on learning,’ said the American Evaluation Association President at the AEA conference in DC last week. There was lots of agreement in the conversation that a more systematic focus on learning could also encompass accountability.

    2. Linking to that, a description of what sounds like a tremendous, long-term learning programme from USAID with their partners and programmes. That level of investment and commitment can bridge accountability with learning, and implies an acceptance of joint accountability… which leads to:

    3. ‘Perhaps the core of this discussion is learning to be accountable, for both the evaluator and the evaluated? And to be clear about who and what each is accountable to and about what?’ (Valerie Brown)

  9. Thank you for the excellent article.
    You wrote: “There is an inherent tension between organisational accountability and programme learning.” At the risk of sounding like an ignoramus, I do not believe the tension is between accountability and learning but between organisational and programme.
    If accountability measures whether we have achieved our outcome, it is logically the first step towards learning (i.e. how can we do better?). If we do not know what we did, we can’t know how to do better. They are the same bird, independent of the number of stones.
    The challenges you described seem to be more of an aggregation challenge between organisation and programme: Let’s use a bad example: We are trying to increase GDP. The factory production manager may see energy saving as the best way to improve, the CEO might identify marketing, the Minister of Industry might believe improved infrastructure is the best way to do better. All three are right but all three use different tools as their level of aggregation is different.
    Your real achievement is that you have a common measurement (like GDP in the example) between your levels of organisation.
