I’m a big fan of Rosalind Eyben, of IDS, so I got her permission to cut and paste her note of a meeting she organized recently while I was wandering around Ethiopia. It brought together some 70 development practitioners and researchers worried about the current trend for funding organisations to support only those programmes designed to deliver easily measurable results. Here are some highlights.
“Funding agencies are increasingly imposing extraordinary demands in terms of reporting against indicators of achievement that bear little relation to social transformation. As Andrew Natsios, former USAID Administrator, notes, ‘those development programs that are most precisely and easily measured are the least transformational, and those programs that are most transformational are the least measurable.’
There are different views on how change happens: linear cause-effect or emergent. With linear change it is easier to imagine oneself in control and therefore claim attribution, whereas with emergent change the most we can claim is a contribution to a complex, only partially controllable process in which local actors may have conflicting views on what is happening, why, and what can be done about it. The question of whose voice and whose knowledge count risks being ignored when organisations report on their achievements with indicators such as the number of farmers contacted or hectares irrigated.
Thus ‘value for money’ becomes equated with aggregated numbers rather than with effectiveness in supporting social transformation. Symptoms are treated as goals and turned into indicators of success. A participant mentioned an encounter with a high-level official who said, ‘I want a simple problem with a simple solution so that I can measure value for money.’
Why are many funders – philanthropic foundations as well as government ministries – placing ever-greater stress on demonstrating tangible results in terms of aggregate numbers?
Supporters/taxpayers have little appetite for complex messages but international NGOs have been complicit in pretending that development is simple – the ‘goats-for-Christmas syndrome’. Organisations are competing for financing so they comply with donor requirements to meet their income targets and thereby confirm and reinforce the current trend. Aid has been around for a long time and there is increasing pressure for quick ‘wins’ to demonstrate that it works. The shift to the right in European politics puts aid flows at risk. Numbers can be very misleading but they provide a comfort blanket when reporting achievements.
What to do?
· Build counter-narratives of development and change that stress the significance of history, challenge the primacy of numbers and emphasize accountability to those for whom international aid exists.
· Communicate to the general public in more innovative ways the complex nature of development by facilitating debates and expanding spaces for voices from the South, while building up our knowledge of how the public in the North understands development.
· Building on already available methods, develop different methods of reporting, so that the requirement for aggregated numbers at Northern policy level does not distort the character of programming in complex development contexts.
· Collaborate with people inside donor agencies who are equally dissatisfied with the prevailing ‘audit culture’.
· Re-claim ‘value for money’ by communicating to donors and the public that some aspects of development work are valuable even though irreducible to numbers.”
I think the conclusions try to straddle both sides of a difficult dilemma. If you are sceptical of ‘impact fundamentalism’ and fear it will drive out some good development practices, do you ‘push back’ against the demand for measurement, or try to change what is measured?
I used to argue for the former, endlessly quoting the dictum usually attributed to Einstein that ‘not everything that counts can be counted; not everything that can be counted, counts’. But I am coming round to the view that unless it can be measured, it is not going to be taken seriously – so the task is twofold: to develop the best possible metrics for showing impact in terms of improved well-being, rights, empowerment etc., and to work out a way of establishing a plausible link between our actions and changes in a real world of emergent change. After all, if we can’t do that, why should anyone believe (or fund) us?
But this is still all a very donor-centric argument. My colleague Martin Walsh, who attended the meeting, agrees with fellow participants that we also need to ask deeper questions – ‘accountable to whom? Who audits the auditors?’ A move to an audit culture could very easily end up with the only accountability that matters being that to the providers of the funds, rather than to the people the funds are trying to benefit. To be fair, DFID at least is adamant that it wants to ensure downwards as well as donor accountability, but once the audit genie is out of the bottle, it could prove very hard to contain. If everyone is running around collecting data to prove impact to donors, how much time will they have to worry about accountability to the people they are trying to help? So audit and accountability are not the same, and may even work against each other. On the other hand, not measuring impact invites lazy thinking and prevents us being open to challenge. Conclusion? We need to think more about this one.