Participatory Evaluation, or how to find out what really happened in your project

October 23, 2014

Trust IDS to hide its light under a bushel of off-putting jargon. It took me a while to get round to reading ‘Using Participatory Process Evaluation to Understand the Dynamics of Change in a Nutrition Education Programme’, by Andrea Cornwall, but I’m glad I did – it’s brilliant. Some highlights:

[What’s special about participatory process evaluation?] ‘Conventional impact assessment works on the basis of snapshots in time. Increasingly, it has come to involve reductionist metrics in which a small number of measurable indicators are taken to be adequate substitutes for the rich, diverse and complex range of ways in which change happens and is experienced. [In contrast] Participatory Process Evaluation uses methods that seek to get to grips with the life of an intervention as it is lived and perceived and experienced by different kinds of people.

[Here’s how it works on the ground, evaluating a large government nutrition programme in Kenya]

It was not long after arriving in the area that we were to show the program management team quite how different our approach to evaluation was going to be. After a briefing by the program leader, we were informed of the field sites that had been selected for us to visit. Half an hour under a tree in the yard with one of the extension workers was all it took to elicit a comprehensive matrix ranking of sites, using a series of criteria for success generated by the extension worker, on which three of the sites we had been offered as a “range” of examples appeared clustered at the very top of the list and one at the very bottom. That we were being offered this pick of locations was, of course, to be expected. Evaluators are often shown the showcase, and having a basket case thrown in there for good measure allows success stories to shine more brilliantly. Even though there was no doubt in anyone’s mind that this was an exceptionally successful program, our team had been appointed by the donor responsible for funding; it was quite understandable that those responsible for the program were taking no chances.
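[An aside for readers who haven’t come across matrix ranking: the arithmetic behind it is simple enough to sketch in a few lines of Python. Everything below – sites, criteria, scores – is invented for illustration; in the exercise described here the criteria and scores came from the extension worker, not from a spreadsheet.]

```python
# A toy matrix ranking. Site names, criteria and scores are invented for
# illustration only; the real exercise used criteria and scores supplied
# on the spot by the extension worker.
import random

criteria = ["attendance at sessions", "uptake of new foods", "growth monitoring coverage"]

# Each site is scored from 1 (worst) to 5 (best) against each criterion.
scores = {
    "Site A": [5, 4, 5],
    "Site B": [4, 5, 4],
    "Site C": [5, 5, 4],
    "Site D": [2, 1, 2],
    "Site E": [3, 3, 2],
    "Site F": [4, 3, 3],
}

# Rank sites by total score across all criteria, best first.
ranking = sorted(scores, key=lambda site: sum(scores[site]), reverse=True)
for position, site in enumerate(ranking, start=1):
    print(f"{position}. {site}  (total {sum(scores[site])})")

# Picking sites at random from across the ranked list, rather than accepting
# the ones on offer, is what guards against seeing only the showcase.
print("Sites to visit:", random.sample(ranking, k=3))
```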

It was therefore with a rather bewildered look of surprise that the program manager greeted our request to visit a named list of sites, chosen at random from various parts of the ranked list, and not the ones we were originally due to visit. What we did next was not to go to the program sites, but to spend some more time at the headquarters. We were interested in the perspectives of a variety of people involved with implementation, from the managers to those involved in everyday activities on the ground. And what we sought to understand was not only what the project had achieved, but also what people had thought it might do at the outset and what had surprised them… These ‘stories of change’ offered us a more robust, rigorous and reliable source of evidence than the single stories that conventional quantitative impact evaluation tends to generate.

Our methodology consisted of three basic parts. The first was to carry out a stakeholder analysis that allowed us to get a picture of who was involved in the program. We were interested in hearing the perspectives not just of program “beneficiaries”, but also of others – everyone who had a role in the design, management and implementation of activities, from officials in the capital to teachers in local schools.

The next step involved using a packet of coloured cards, a large piece of paper and pieces of string. It began with an open-ended question about what the person or group had expected to come out of the program. Each of the points that came out of this was written by one of the facilitation team on a card, one point per card, and extensive prompting was used to elicit as many expectations as possible. The next step was to look at what was on the cards and cluster them into categories.

Each of these categories then formed the basis for the next step of the analysis, which was to look at fluctuations over time. This was done by using the pieces of string to form a graphical representation, with the two-year time span on the x axis and points between two horizontal lines representing the highest and lowest points for every criterion on the y axis. What we were interested in was the trajectories of those lines – the highs and lows, the steady improvement or decline and where things had stayed the same. We encouraged people to use this diagram as a way of telling the story of the program, probing for more detail where a positive or negative shift was reported.
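[Another aside: a rough way to picture the cards-and-string exercise in code. The categories and values below are invented, not taken from the paper; and the point, as the authors stress, is not the numbers themselves but the storytelling that each rise or fall prompts.]

```python
# Invented example of the string diagram. Each category of expectations gets a
# trajectory over the two-year programme, scaled between a fixed lowest and
# highest line (the two horizontal strings). None of these values come from
# the paper.
LOWEST, HIGHEST = 1, 5   # the two horizontal strings
BARS = "▁▂▃▄▅"           # one glyph per level between the strings

trajectories = {
    "child growth":         [1, 1, 2, 3, 4, 4, 5, 5],  # steady improvement
    "community attendance": [3, 4, 2, 1, 2, 3, 4, 4],  # a dip, then recovery
    "extension visits":     [4, 4, 4, 4, 4, 4, 4, 4],  # stayed the same
}

for category, values in trajectories.items():
    # Every point must sit between the two strings.
    assert all(LOWEST <= v <= HIGHEST for v in values)
    # Render each period's level as a bar so the highs, lows and turning
    # points that prompt further probing are easy to spot.
    line = "".join(BARS[v - LOWEST] for v in values)
    print(f"{category:>22}  {line}")
```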

The third step was to use this data as a springboard to analyse and reflect on what had come out of their experience of the project. We did this by probing for positive and negative outcomes. These were written onto cards. We asked people to sort them into two piles: those that had been expected, and those that were unexpected. We then spent some time reflecting on what emerged from this, focusing in particular on what could have been done to avoid or make more of unexpected outcomes and on the gaps, where they emerged, between people’s expectations and what had actually happened. We kept people focused on their own experience, rather than engaging in a more generalized assessment of the program.
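[One more aside: the sorting in this third step is deliberately simple, and sketches easily; the outcome cards below are made up. The value lies in the conversation about the unexpected pile, not in the tally itself.]

```python
# Invented outcome cards, purely for illustration. Each card records an
# outcome, whether it was positive, and whether it had been expected at
# the outset of the program.
cards = [
    {"outcome": "more varied family meals",             "positive": True,  "expected": True},
    {"outcome": "school gardens started spontaneously", "positive": True,  "expected": False},
    {"outcome": "baseline survey met with hostility",   "positive": False, "expected": False},
    {"outcome": "higher clinic attendance",             "positive": True,  "expected": True},
]

# Sort the cards into the two piles used in the exercise.
expected = [card for card in cards if card["expected"]]
unexpected = [card for card in cards if not card["expected"]]

def show(title, pile):
    # Print a pile, marking positive outcomes with + and negative ones with -.
    print(title)
    for card in pile:
        sign = "+" if card["positive"] else "-"
        print(f"  {sign} {card['outcome']}")

show("Expected outcomes:", expected)
show("Unexpected outcomes (could they have been avoided, or built on?):", unexpected)
```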

[What kind of reaction did they get?] “We’ve never had visitors coming here who knew so much,” one said to us. Another confided that it’s easy enough to direct the usual kind of visitor towards the story that the program team wanted them to hear. Development tourists, after all, stay such a short time: “they’re in such a rush, they go to a village and say they must leave for Nairobi by 3 and they [the program staff] take them to all the best villages.”

[What kind of things did they uncover?] One of the most powerful lessons that the program learnt came from a very unexpected reaction to something that was utterly conventional: a baseline survey. A team of enumerators had set out to gather data from a random sample of households, such as height-for-weight and upper arm circumference measurements. At the same time, a rumour was sweeping the area about a cult of devil-worshippers seeking children to sacrifice. Families greeted the enumerators with hostility. People in the communities likened the measurement kits, developed for ease of use, to measuring up their children for coffins. The survey proved difficult to administer. In one place, the team were chased with stones.

To get things off the ground again, the program needed the intercession and the authority of the area’s chiefs to call their people and explain what the program was all about and what it was going to do for the area. What was so striking about the stories of this initial process of stumbling and having to rethink was that it simply had not occurred to the researchers that entering communities to measure small children might be perceived as problematic.’

Brilliant, and there’s lots more in the paper. Whenever I read anything by Andrea, I wish we had more ‘political anthropologists’ like her writing about development. But maybe we should lend them some subeditors to jazz up their titles a bit.

8 comments

  1. The paper is excellent, but I’d like to pick up on the opening statement about the ‘near hegemony of experimental forms of impact evaluation’. A commonly raised concern – but is this actually true? It’s not really my experience of DFID, nor of the evaluation community at large.

    Is there any evidence, beyond the anecdotal, that experimental impact evaluation really has this status?

    A well-trodden discussion path, I know, but it still startled me to see this paper taking such a strong stance.

    Aidleap

  2. Great account of an important process. Would be good to see more such participation in the design of what constitutes ‘success’ – especially when that might differ from what funders would define as success…

  3. Excellent. Enjoyed reading the paper – lots of great insights. What I particularly liked was that at the end of the work the communities themselves felt better informed and more powerful.

  4. Thanks Duncan, much appreciated and I am so glad you liked the paper. You’re right, it’s such a D-U-L-L title, but I was trying to choose something that described in development-speak what the paper was about, thinking it might be picked up that way rather than – as you point out – being buried in jargon. How awful! But easily changed, I’d hope, and I’d love to hear suggestions for sparky new titles!

  5. Very interesting to read the detailed description of the participatory process evaluation approach. The worrying preference for quantitative results over all others is something I hope is changing. I think we may be in the early stages of a return swing of the pendulum towards more mixed methods, and therefore more interest in participatory methods such as the one Andrea describes in the paper, the Big Push Forward initiative, and Beneficiary Assessment, a kind of participatory impact assessment garnering increasing interest within the Swiss Development and Cooperation Agency (SDC), among others. The latter involves identifying and working with primary stakeholders in a given project context as the main impact assessment researchers. One of the challenges we have had in using this approach is ensuring sufficient TIME is available to develop a good understanding of the context and to get real engagement, though so far it seems that household and community members discussing impact with peers (i.e. members of similar communities) is a promising way to cut through some of the barriers created by having external evaluators as the main drivers of impact assessments. Andrea, thanks for sharing your reflections, which provide a lot of food for thought. :-)

  6. Very, very interesting: I see many similarities with the approach we are taking in the ART project (Assessing Rural Transformations) using the QUIP (Qualitative Impact Protocol), and not just a lack of good titles! Please do read the latest paper based on our experience of piloting the approach: http://www.bath.ac.uk/cds/projects-activities/assessing-rural-transformations/documents/art-piloting-the-quip-d4.pdf
    The full QUIP guidelines are here: http://www.bath.ac.uk/cds/projects-activities/assessing-rural-transformations/documents/complete-quip-guidelines.pdf

    It’s also worth mentioning the PaDev project (Participatory Assessment of Development) in the Netherlands, some very interesting work: http://www.padev.nl/

  7. It is a good read for sure, and I am glad that an account published by an academic organisation is getting attention. But, as some of the comments highlight, it is not unique in terms of existing practice – practice which has not been picked up and shared by FP2P. This kind of work took place quite a bit in the 1990s – see PLA Notes. Where are more recent accounts lurking? Having just written a briefing paper for UNICEF on participatory impact evaluation (out soon), I found it very hard to find recent material to reference (I was looking for youth AND impact, but even general accounts such as Andrea’s are rare). Why are existing participatory evaluation processes not more publicly shared? Are they not documented (if not, why not?), or have they really not been undertaken much for the past decade or more?

