Is the aid industry's audit culture becoming a threat to accountability?

October 12, 2010

I’m a big fan of Rosalind Eyben, of IDS, so got her permission to cut and paste her note of a meeting she organized recently while I was wandering around Ethiopia. It brought together some 70 development practitioners and researchers worried about the current trend for funding organisations to support only those programmes designed to deliver easily measurable results. Here are some highlights.

“Funding agencies are increasingly imposing extraordinary demands in terms of reporting against indicators of achievement that bear little relation to social transformation. As Andrew Natsios, former USAID Administrator, notes: ‘those development programs that are most precisely and easily measured are the least transformational, and those programs that are most transformational are the least measurable.’

There are different views on how change happens: linear cause-effect or emergent. With linear change it is easier to imagine oneself in control and therefore claim attribution, whereas with emergent change the most we can claim is a contribution to a complex, only partially controllable process in which local actors may have conflicting views on what is happening, why, and what can be done about it. The question of whose voice and whose knowledge count risks being ignored when organisations report on their achievements with indicators such as the number of farmers contacted or hectares irrigated.

Thus ‘value for money’ becomes equated with aggregated numbers rather than with effectiveness in supporting social transformation. Symptoms are treated as goals and turned into indicators of success. A participant mentioned an encounter with a high-level official who said, ‘I want a simple problem with a simple solution so that I can measure value for money.’

Why are many funders – philanthropic foundations as well as government ministries – placing ever-greater stress on demonstrating tangible results in terms of aggregate numbers?

Supporters/taxpayers have little appetite for complex messages but international NGOs have been complicit in pretending that development is simple – the ‘goats-for-Christmas syndrome’. Organisations are competing for financing so they comply with donor requirements to meet their income targets and thereby confirm and reinforce the current trend. Aid has been around for a long time and there is increasing pressure for quick ‘wins’ to demonstrate that it works. The shift to the right in European politics puts aid flows at risk. Numbers can be very misleading but they provide a comfort blanket when reporting achievements.

What to do?

· Build counter-narratives of development and change that stress the significance of history, challenge the primacy of numbers and emphasize accountability to those for whom international aid exists.

· Communicate to the general public in more innovative ways the complex nature of development by facilitating debates and expanding spaces for voices from the South, while building up our knowledge of how the public in the North understands development.

· Building on already available methods, develop different ways of reporting, so that the requirement for aggregated numbers at Northern policy level does not distort the character of programming in complex development contexts.

· Collaborate with people inside donor agencies who are equally dissatisfied with the prevailing ‘audit culture’.

· Re-claim ‘value for money’ by communicating to donors and the public that some aspects of development work are valuable even though irreducible to numbers.”

I think the conclusions try and straddle both sides of a difficult dilemma. If you are sceptical of ‘impact fundamentalism’ and fear it will drive out some good development practices, do you ‘push back’ against the demand for measurement, or try and change what is measured?

I used to argue for the former, endlessly quoting Einstein’s dictum that ‘not everything that counts can be counted; not everything that can be counted, counts’. But I am coming round to the view that unless it can be measured, it is not going to be taken seriously – so the task is twofold: to develop the best possible metrics for showing impact in terms of improved well-being, rights, empowerment, etc., and to work out a way of establishing a plausible link between our actions and changes in a real world of emergent change. After all, if we can’t do that, why should anyone believe (or fund) us?

But this is still all a very donor-centric argument. My colleague Martin Walsh, who attended the meeting, agrees with fellow participants that we also need to ask deeper questions – ‘accountable to whom? Who audits the auditors?’ A move to an audit culture could very easily end up with the only accountability that matters being that to the providers of the funds, rather than to the people the funds are trying to benefit. To be fair, DFID at least is adamant that it wants to ensure downwards as well as donor accountability, but once the audit genie is out of the bottle, it could prove very hard to contain. If everyone is running around collecting data to prove impact to donors, how much time will they have to worry about accountability to the people they are trying to help? So audit and accountability are not the same, and may even work against each other. On the other hand, not measuring impact invites lazy thinking and prevents us being open to challenge. Conclusion? We need to think more about this one.

Read Ros’ report on the meeting here and another account here. Last word to the wonderful Dilbert.

[Dilbert cartoon on auditing]


  1. After all these years of development there is still so much to do. Would it really hurt that much to focus on measurable impact just for a little while? We know enough about things that really work to know that there are some levers we can pull that provide measurable results and contribute to emergent change. Is this just another distraction from the to-do list at hand? Implementation is about discipline and focus, which surely means lots of room for measurable results?

  2. Really interesting and challenging stuff Duncan, thanks. I’d agree with much of what you (and Rosalind) are saying and think it is our duty to challenge the system where it’s wrong, or we become merely an exercise in “soft power” as a foreign policy tool.

    I just wonder whether, as with most things developmental, the risk is to polarise too far and see this too clearly as a pendulum of Quantity vs Quality.

    Isn’t the answer that you need both? Some work is effective and can be more easily “accounted for” and valued. It may not be societally transformational, but can certainly transform individual lives – e.g. the number of malaria treatments distributed and used, leading to a reduction in the prevalence of malaria and fewer infant deaths. Clear, easily understandable (at least in reporting, not in management!) and a “good thing to do”.

    But some work is, as you say, not that easy – and as usual it should be the role of specialists to challenge the (perhaps understandable in times of domestic austerity) slide towards doing “what’s easy” with the call for doing “what’s needed”?

    I’d just argue that it’s not always the case that doing something “simple” is just about being accountable to donors – the malaria example is a really clear case of being accountable to each individual whose life has been saved. Perhaps the real quality drive there is towards those who weren’t reached, and what quality improvements are needed in the health system and beyond to do more next time.

    I just think there is a risk here that we put a wedge between donors and “beneficiaries”, with ourselves in the middle telling one side “you want the wrong thing” and the other “we know what you need and it’s complex, bear with us while we work on designing it for you”.

    Duncan: wise words and useful nuance, thanks Dominic

  3. I am mostly with Jac on this one. Development is not a science, and there is a lot we don’t know. Other things that are important are difficult to measure.

    However, a few things can be measured and have proven positive effects. If we stop considering “development” as a holistic pursuit, we could accept that we have different specific objectives, each with a logic of its own. At that stage, it is important to devote a good chunk of our energy to evidence-based development initiatives, and have another arena where we go after goals that are more difficult to quantify. In the second category, apart from a lot of important activities, we might also find some useless things we only do because we always did them, and the minister likes it.

  4. Duncan, thanks for this post. I like it because it challenges those of us trying to communicate the value/worth of our aid work. Question: do you have an example of the type of “counter-narrative” you describe near the end? As for communicating the complexity of our work, that is never a problem, but as you say it needs to be innovative. Is this video an example?

    Duncan: Well these are Ros’ words not mine, Chris, so I’ll pass on your question to her. But in my book, one candidate would be historical experience, which is routinely dismissed as ‘anecdotal’ or simply ignored (e.g. Ha-Joon Chang’s Kicking Away the Ladder). And I guess you could describe the subtitle of ‘From Poverty to Power’ (‘How Active Citizens and Effective States Can Change the World’) as an attempt at that kind of counternarrative.

  5. It is certainly time to examine our belief that there are technocratic, precise ways of measuring progress in order to make consequential judgments based on these measures. I agree with Eyben that the increasing obsession with abstract metrics and experimental design, stemming from a reductive, managerial approach in development, is quite far from the intimate, difficult, and complex factors at play at the grassroots level.

    Having worked extensively with grassroots organizations in Africa, I can say that imposing expectations to evaluate every single intervention on people who are in the process of organizing at the local level is most certainly a drain on their time and scarce resources. The business sector seems to have a healthier relationship with risk in its for-profit endeavours, something we may need to explore in the development sector.

    My hope is that the dominance of quantitative statistical information as the sole, authoritative source of knowledge can be challenged so that we embrace much richer ways of thinking about development and of assessing the realities of what is happening closer to the ground. 


    Let’s always consider what is the appropriate cost and complexity needed for evaluation (especially given the size and scope of the program) and aim for proportional expectations so we ensure it remains a tool for learning, not risk-reduction. Yes, let’s pursue and obtain useful data from the ground, but at a scale at which information can be easily generated and acted upon by those we are trying to serve.

  6. Yes, From Poverty to Power is an excellent example of a counter-narrative. I liked the video, Chris. Does Oxfam America also have videos like this about how it supports advocacy, mobilisation and dialogue? It is possibly that kind of activity that is currently most threatened by the increasingly oppressive demands from donors to report in terms of numbers and linear cause-effect change, which ignore process and privilege attribution over contribution. In any case, we have two challenges in relation to counter-narratives: (1) researching and then collecting together case studies and examples that demonstrate what an emergent approach to change means for development practice; (2) communicating these effectively to a wider audience.

    With regard to the first challenge, Ben Ramalingam is currently writing a book that will hopefully bring some of this material together, and at our Big Push Back meeting the Director of UNRISD led the discussion about the need for long-term research into how change happens. The Pathways of Women’s Empowerment Research Consortium, of which I am a member, is trying to do some of this.

    In relation to the second, we (or at least, I certainly do!) need to get better at communicating complex development processes simply. One way I have tried to do this is through pieces of writing that reflect more than one perspective (see my chapter with Rosario Leon in The Aid Effect, ed. D. Mosse). Are there videos out there that use this technique?

  7. Hello Duncan: I enjoyed the summary and wish I could have been there. As to counter-narratives, I believe a balanced approach that allows enlightened bureaucrats inside the development industry to begin opening up cracks is necessary. I copy below the abstract of an article that attempts to do just that in the field of rural communication (Ramirez, R. 2007. Appreciating the contribution of broadband ICT with rural and remote communities: Stepping stones toward an alternative paradigm. The Information Society 23: 85-94).
    “This paper challenges conventional policy development and evaluation approaches that emphasize the instrumental side of technology. There is a growing gap between conventional planning and evaluation approaches for rural broadband ICTs that seek to demonstrate a direct link between investments and results, on the one hand, and on the other, evidence that the contribution of ICTs to rural economic, social and cultural wellbeing is increasingly difficult to demonstrate beyond short-term measurable indicators. The paper proposes an alternative paradigm based on socio-technical systems, stakeholder engagement, an acknowledgement of the multiple dimensions at play, and the growing evidence of unpredictability of ICTs. The paper emphasizes a perspective based on ‘contribution’, not attribution; policy-making that is both adaptive and inclusive of multiple perspectives; methodological testing of emerging evaluation methodologies; and projects as learning experiments. This alternative theoretical and policy-making paradigm is encapsulated in a metaphor based on the management of natural resources, where stakeholders track their own indicators of impact by reading how the system responds to a project intervention.”

    The bad habits of the development industry are not only evident in evaluation, but also in its sister field of communication. We address this in a recent book that is very much story-based and jargon-free:

    Quarry, W. and Ramirez, R. 2009. Communication for another development: Listening before telling. London: Zed Books.
    Book blog:

  8. Here we again have a dilemma – the financiers need a record of achievement to convince their contributors to keep the funds flowing. On the other hand we have communities that need structural change to progress. Thus, a major problem.

    I would imagine that one can achieve both by changing the emphasis on development.

    1. Start with a clearly defined development goal – the wished for end result. We can thus measure progress against this goal.
    2. Identify the two or three prior goals needed to ensure the final goal.
    3. And so on.
    4. Structure the demand for funding according to these sub-goals.

    Then even the so-called immeasurable goals could be measured to the satisfaction of whoever needs convincing, and the aid agencies can then also show “value for money” for development expenditure. I believe this is the only way in which one will be able to satisfy even the auditors of this world.
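    A rough sketch of what such a goal cascade might look like as a data structure follows – the Goal class, goal names and all numbers below are invented for illustration, not anything proposed in the comment or at the meeting:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A goal with a measurable indicator and the prior goals that feed it."""
    name: str
    target: float              # desired indicator value (hypothetical)
    achieved: float = 0.0      # indicator value measured so far (hypothetical)
    subgoals: list = field(default_factory=list)

    def progress(self):
        """Fraction achieved, averaging this goal's own indicator
        with the progress of its prior (sub-)goals."""
        own = min(self.achieved / self.target, 1.0)
        parts = [own] + [g.progress() for g in self.subgoals]
        return sum(parts) / len(parts)

# Entirely hypothetical cascade: an end goal resting on two prior goals.
end_goal = Goal("Reduction in under-five mortality (per 1000)", target=30, achieved=10, subgoals=[
    Goal("Bednets distributed", target=10_000, achieved=8_000),
    Goal("Clinics fully staffed", target=50, achieved=20),
])
print(f"Overall progress towards the end goal: {end_goal.progress():.0%}")
```

    Each layer of the cascade then yields an aggregate number that can be reported to funders, while the tree as a whole preserves the claimed chain from funded activities up to the harder-to-measure end goal.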

  9. I agree with Jac. Too much aid is packaged in hugs and smiles. Cultivating discriminating donors – people who aren’t afraid to ask where their money is going, and what evidence there is that the project their money is funding is actually helping – IS A GOOD THING. I don’t want donors who are placated with narratives “from the village” and photos. These things are necessary, but so is more quantitative evidence, and the more moving parts a project/program has, the more difficult that evidence will be to collect. Good. If it’s not difficult, something is probably off. Both sides (qualitative/quantitative) cost money. Sure, I’d love there to be a happy balance, because numbers don’t come close to capturing everything that happens in a development context. But if I HAVE to choose between the two, I’ll take numbers. They might be just as easy to manipulate as qualitative data, but I think fostering donors who ask more difficult questions is moving in the right direction, and I think prioritizing quantitative data helps further that goal.

  10. How much of this demand for the simple, the numerical, the quantifiable, the tabulated can we trace back to some donor agencies radically increasing their budgets towards the 0.7% target while reducing, freezing or at best minimally increasing the number of staff within the agency?

  11. That is a nice piece articulating the dilemma around the whole thinking of the development process. I should think good audit and accountability is the kind that considers both the beneficiaries and the donors. Whatever the case, we need both quality- and quantity-based outcomes and impacts from the projects that we fund. Donors might be interested in one thing, but surely it’s the implementing partners’ responsibility to convince them (donors tend to like both).

  12. We need to avoid the either/or direction this discussion seems to be taking. What can be measured should be measured if useful for strategic decisions and insights about what does or does not work. And what can be known but not measured needs rich description. And usually we need both.

    However, what I find worrying is that evaluation (and accountability as part of that) is tending to become method-driven rather than question-driven. The past two years or so have seen the idea of a hierarchy of methods emerging that places certain kinds of quantitative methods above others, irrespective of their relevance and ability to answer important questions. And the spin on it is that these methods are supposedly more rigorous. But rigour is only defined in terms of a technical, statistical rigour. At EES in Prague last week, several examples of such methodological applications from AFD, GTZ and Cordaid were given, and it was pretty depressing: expensive, methodologically flawed (because life turns out to be messy), and of very limited use for internal discussions. But… it meets the emerging notion that this is the only rigorous way to know impact. And that is what we need to challenge – which requires well-documented cases of rigorous practice that are currently ‘lower’ in the hierarchy, plus some of the other ideas discussed in Rosalind’s notes from the meeting.

  13. Some reflections:
    Donors clearly want to know what is being done with their money – I don’t think anyone is saying that they shouldn’t be able to know about this. But at the same time, if the desire for easily measurable results defines what work will get funded, and if we know that many important changes cannot easily be measured, then there is obviously a major risk that important change processes don’t get funded. As has been reiterated time and again, changes in power relations in society, in the political economy, in the relations between citizens and the state, and in the development of local capacity for change are often amongst the hardest to measure – despite being, arguably, very central to ‘development’.

    Many societal changes – especially those that really make a difference – take a long time. It is not merely the increased agricultural productivity today (relatively easy to demonstrate in a three-year, time-bound project) but rather how sustainable that agricultural improvement is, and to what extent it leads to further improvements in livelihoods and well-being further down the line – for both those involved and society more generally – that we have to look at.

    Clearly, accounting for complex change is not easy. A simple household survey based on pre-determined indicators may well be incapable of capturing changes in gender relations in a community, of capturing unexpected or unintended changes (positive or negative), of capturing changes in social cohesion and mobilisation, etc. Furthermore, there are major questions to be asked regarding how limited resources for monitoring and evaluation should be allocated. As someone who has worked as an employee in a grass-roots NGO, I have often found myself thinking that much of the M&E these organisations are expected to do – often in the name of accountability (or accountability pretending to be ‘learning’) – should really be the headache of the donor. Implementing organisations should be freed up to focus on M&E systems – or rather learning, empowerment and capacity development systems – that will enhance the quality of their work by increasing the ability of key local actors to contribute effectively to change.

    There are some serious questions that need to be asked here about what leads to change. What are the assumptions held by those on different sides of the debate? How do they match up to reality? What does the ‘evidence’ say? What even constitutes ‘evidence’ anyway? How do different forms of evidence end up being self-referential? How do we engage in dialogue across our different ideological positions? Are the new trends in accountability, value-for-money and results-based management holding open the space for the kind of genuine learning and inquiry that is so required? Or are we just witnessing a process of simplification that serves the priorities and expectations of people who can’t be bothered to engage more deeply?

    There is clearly a need to demonstrate that efforts that are being made are translating into positive change. There are multiple methodologies and pathways to achieving positive changes – and while a lot of good work has already been done on developing and promoting a diversity of approaches to capturing accounts of change, this is clearly an area that requires further innovation. Does the current emerging framework provide the space for this kind of innovation or not?

    Finally, when the desired changes are themselves contested (does anyone deny that beneath or behind the superficial consensus of, say, the MDGs there are deep rifts in the worldviews and assumptions of those committed to ‘Development’?), is it not clear that there is a need for opening up space for dialogue, experimentation and learning? Surely that space shouldn’t be getting closed down?

  14. Without wishing to downplay the importance of distinguishing between the measurable and the immeasurable, I note the formidable obstacles to measuring the measurable for purposes of development work.

    The economist Michael Clemens explains these in his Oct. 11 blog for the Center for Global Development, referring particularly to assessing impacts of the Millennium Village Project. Generalizing Clemens’ MVP-specific points, one can say that obstacles to reliable evaluation include uncertainty whether the measured change would have occurred even absent the intervention; uncertainty whether the control subject is a good match for the experimental subject; and—this would hold especially where economic pressures on the donor community force extraction of broad conclusions from narrow interventions—that hoary nemesis of all scientific investigation, small sample size.
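    To put a rough number on that last obstacle, here is a toy simulation (invented outcome figures; not Clemens’ data or method) that estimates a project’s impact as the difference in mean outcomes between treated and control villages, and shows how wildly the estimate swings when the sample is small:

```python
import random
import statistics

random.seed(1)

def simulate_estimate(n_villages, true_effect=5.0, noise_sd=15.0):
    """Estimated impact: difference in mean outcomes between treated
    and control villages, each observed with random noise."""
    treated = [100 + true_effect + random.gauss(0, noise_sd) for _ in range(n_villages)]
    control = [100 + random.gauss(0, noise_sd) for _ in range(n_villages)]
    return statistics.mean(treated) - statistics.mean(control)

# Repeat each evaluation 1000 times to see how the estimates spread
# around the true effect of 5 as the sample grows.
for n in (5, 50, 500):
    estimates = [simulate_estimate(n) for _ in range(1000)]
    print(f"villages per arm: {n:3d}  "
          f"mean estimate: {statistics.mean(estimates):5.2f}  "
          f"spread (sd): {statistics.stdev(estimates):5.2f}")
```

    With only five villages per arm, the spread of estimates (roughly nine points) dwarfs the true effect of five, so a single small evaluation could easily report zero or even negative impact for a genuinely beneficial project.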

    Finally, there is tension between the fashionable but wholly justified demand for sustainable change and the short time spans in which the donor community often expects interventions to succeed. Clemens writes that “past village-level package interventions in poor rural areas have had short-term effects that rapidly dissipated after ten years”—yet, these days, who wants to wait ten years to see whether a development project worked?

    Could advocates of the immeasurable leverage these problems with measurement on behalf of their cause?

  15. Interesting. McKinsey consulting, in its critique of our ability to forecast or predict a path to the future, is culling the idea that outcomes can be pre-determined.

    I agree with Irene that balance is important, on both sides of the argument. There is nothing intrinsic to value for money that limits its attention to numeric results framed in short time periods, or that exaggerates what outcomes aid can plausibly achieve in and of itself. Some of this is down to poor design, notably how projects often work around institutions and organisations to get to the results (something that is not an intrinsic weakness of logframes).

    The tyranny of method is becoming a distracting debate, peddled by think tanks that donors core-fund. The overly ambitious attempts by the World Bank and others to assess impact thirty years ago met with universal failure. Yet the cause was not lack of rigour; it was implausible objectives and questions based on poor and under-researched design – something that assessments kind of assume is OK.

