The Politics of Results and Evidence in International Development: important new book

August 5, 2015

The results/value for money steamroller grinds on, with aid donors demanding more attention to measurement of impact. At first sight that’s a good thing – who could be against achieving results and knowing whether you’ve achieved them, right? Step forward Ros Eyben, Chris Roche, Irene Guijt and Cathy Shutt, who take a more sceptical look in a new book, The Politics of Results and Evidence in International Development, with a rather Delphic subtitle – ‘playing the game to change the rules?’

The book develops the themes of the ‘Big Push Forward’ conference in April 2014, and the topics covered in one of the best debates ever on this blog – Ros and Chris in the sceptics’ corner took on two gung-ho DFID bigwigs, Chris Whitty and Stefan Dercon.

The critics’ view is suggested by an opening poem, Counting Guts, written by P Lalitha Kumari after she attended a meeting about results in Bangalore; it includes the line ‘We need to break free of the python grip of mechanical measures.’

The book has chapters from assorted aid workers about the many negative practical and political consequences of implementing the results agenda, including one particularly harrowing account from a Palestinian Disabled People’s Organization that ‘became a stranger in our own project’ due to the demands of donors (the author’s Skype presentation was the highlight of the conference).

But what’s interesting is how the authors, and the book, have moved on from initial rejection to positive engagement. Maybe a snappier title would have been ‘Dancing with Pythons’. Irene Guijt’s concluding chapter sets out their thinking on ‘how those seeking to create or maintain space for transformational development can use the results and evidence agenda to better advantage, while minimising problematic consequences’. Here’s how she summarizes the state of the debate:

‘No one disputes the need to seek evidence and understand results. Everyone wants to see clear signs of less poverty, less inequity, less conflict and more sustainability, to understand what has made this possible. Development organizations increasingly seek to understand better what works for who and why – or why not. However, disputes arise around the power dynamics that determine who decides what gets measured, how and why. The cases in this book bear witness to the experiences of development practitioners who have felt frustrated by the results and evidence protocols and practices that have constrained their ability to pursue transformational development. Such development seeks to change power relations and structures that create and reproduce inequality, injustice and the non-fulfillment of human rights.

And yet some of these cases also recognize that the results agenda can, in theory, open up opportunities for people-centred accountability processes, or promote useful debates about value for money, or shed light on power dynamics using theory of change approaches. Some participants at the Big Push Forward event argued that greater emphasis on evidence has led to more intelligent consumption of data.’

Guijt identifies the success factors behind what you could call ‘really useful measurement’: the methods employed must be feasible, useful, rigorous and agile, be accompanied by autonomy and fairness, and generate time and space for reflection on evidence of results. She goes on to explain the meaning of those rather motherhood and apple pie terms. A few excerpts:

‘Particularly detrimental is the wasted effort invested in collecting incorrect or unused results data’. She argues that by contrast, ‘soft data’ can be particularly useful, such as programme managers personally listening to the views of children and young people.

What is Rigour?: ‘While much of the rigour debate focuses on whether data is rigorous, we should focus on seeking more rigorous thought processes and method selection and use … the term ‘rigour’ needs to be reclaimed beyond narrow method-bound definitions to encompass better inclusion of less powerful voices and improved analysis of power, politics, assumptions and resource allocation’.

‘Results and evidence approaches should strongly emphasize reflection about what is known and what needs further inquiry. Asking people to transcribe results data rather than making sense of that data is increasingly seen as an entrenched problem.’

She finishes by setting out a positive agenda of seven strategies:

‘Develop political astuteness and personal agency’: ‘people’s ability to use the results and evidence agendas positively makes them activists within their organizations, and with funding agencies and partner organizations’

‘Understand dynamic political context and organizational values’: learn to advocate within your own organization

‘Identify and work with what is positive about the results and evidence agenda’: this is the meaning of the book’s subtitle ‘playing the game to change the rules?’

‘Facilitate front-line staff to speak for themselves’: always powerful, as a results focus can technocratize issues and diminish the voices of those on the ground

‘Create space for learning and influence’

‘Advocate for collective action’: the good guys need to work together to perform the kind of jiu-jitsu on the results agenda set out in the previous strategies

‘Take advantage of emerging opportunities’: don’t just complain; embrace new converts in the mainstream, despite their irritating habit of claiming that they were ‘doing development differently’ first…

Some previous posts on the measurement debate here, here and here.

3 comments

  1. Hi Duncan,

    Thanks for a fascinating post, as always. I agree with the seven strategies in the post, and would add another: recognise the different requirements of funders and implementers, and try to incorporate them into a shared framework.

    For example, funders typically want to understand what their programmes are doing, and be reassured that there will be some way in which results can be measured. Implementers need a way to challenge their theory of change and revisit their assumptions, without it becoming a tick-box exercise. They need to gather data which confirms or challenges their theory, and explore unexpected changes from their intervention.

    The DCED Standard for Results Measurement evolved from attempts by field staff to reconcile these tensions. It starts with the use of results chains – which, if used as an accountability tool, can be inflexible and obstructive (witness the sad fate of the logframe). If used well by field staff, though, they can encourage reflection and thinking, and allow implementers to revise their programmes on a regular basis, informed by ongoing learning. I think a key success factor is to have the process driven by field staff who want to make their programmes as effective as possible.

    Capacity is a major challenge in implementing such a results measurement system, and capacity challenges arise at all levels. Funders typically have a limited understanding of the practical challenges of measuring results, or of what their favoured approaches really cost. Implementers often lack important technical skills, and often get too busy running projects to step back and think about their logic.

    Adam

  2. Hi Duncan,

    I strongly believe your suggestions nail it, and we can work to develop a model that fine-tunes your advice. Results measurement seems complex and may require different approaches depending on the context. Thanks anyway!

  3. Development practice seems caught between those who argue that development reality is so complex that measuring results is near impossible and perhaps a major distraction from the ‘real work’, and those who argue that, perhaps because development outcomes can look patchy at times, rigorous assessments are the only way to legitimize development investments in the eyes of the public.

    I sympathize with those who warn against oversimplifying development—because it simply isn’t simple. I also sympathize with those who argue that large investments of public funds must be justified—one way or the other. And I also empathize with practitioners who, frustrated by unhelpful protocols—often based on a very selective interpretation of academic rigor that does not deal well with societal complexity—feel that the tail has started to wag the dog.

    While I sympathize with all these viewpoints, I also fear that each may draw the wrong conclusions. We are at risk of throwing the baby out with the bathwater. Results measurement in development is not a question of ‘if’, but ‘how’. The blog ends with very valuable pointers for this, but I wonder whether the authors realize that we already have a Standard for results measurement that lists the minimal elements of good practice that programs can adopt to deliver against all these points.

    At first sight the DCED Standard for results measurement may offer the worst of both worlds. Most eye-catching are the results chains or theories of change it promotes: are these not too linear and simplistic to cover change in a complex environment? And the mixed research methods it advocates, partially applied by programs in-house: can they produce more than subjective ‘anecdotal evidence’?

    In reality the Standard offers solutions for the worries of all sides. Results chains are tools to manage complexity, not to ignore it. Making a results chain is not about ‘keeping it simple’; rather, everything relevant for the change process MUST be incorporated. It instigates a thought process that helps the practitioner avoid losing sight of the forest for the trees. As such it generates a research agenda, a ‘measurement plan’ that is manageable, practical and relevant for the practitioner. The tail no longer wags the dog.

    This measurement plan allows the practitioner to measure in a timely manner, applying the most suitable research tool for each question and social setting. In doing so it avoids the pitfalls of large quantitative surveys, which often need to measure too much (development is complex, after all) and too late in the development process, by which time the beneficiary cannot remember relevant information and the practitioner has no time left to learn and improve his efforts.

    Instead, if well implemented, it produces a robust body of evidence collected from many sources, measured at many points in time, and compiled by practitioners who spend sufficient time in the field to be deeply informed about the intended and the unintended aspects of the change process within as well as beyond the household. This body of evidence, collected in real time, allows practitioners to learn about what works and what does not and to mitigate undesirable outcomes. At the same time this body of evidence can be used as a valuable stepping-stone for an independent evaluation if required.

    Thus we have a system that allows us to handle complexity, is designed to be practical and to empower practitioners, and allows us to analyze results for internal program feedback loops as well as for external audiences. If well implemented it sets out to achieve what the authors would like to see: well-informed, dynamic practitioners, able to work with the local community in all its variety, able to determine what needs to be analyzed and how, able to learn and improve, and able to capture opportunities for a more effective development effort.

    Thus, much of the ‘how’ of results measurement has in fact been answered, by a system developed by practitioners who were the first to confront the need to reconcile complex change processes with manageable, useful measurement systems that produce credible results. The next frontier is not how to do results measurement, but how to ensure we design programs that are sufficiently responsive, resourced and ‘empowered’ to implement this system well.

    Dr. Harald Bekkers,
    Team Leader Market Development Facility

