Monitoring, Evaluation and Learning (MEL) used to send me into a coma, but I have to admit, I’m starting to get sucked in. After all, who doesn’t want to know more about the impact of what we do all day?
So I picked up the latest issue of Oxfam’s Gender and Development Journal (GAD), on MEL in gender rights work, with a shameful degree of interest.
Two pieces stood out. The first, a reflection on Oxfam’s attempts to measure women’s empowerment, had some headline findings that ‘women participants in the project were more likely to have the opportunity and feel able to influence affairs in their community. In contrast, none of the reviews found clear evidence of women’s increased involvement in key aspects of household decision-making.’ So changing what goes on within the household is the toughest nut to crack? Sounds about right.
But (with apologies to Oxfam colleagues), I was even more interested in an article by Jane Carter and 9 (yes, nine) co-authors, looking at 3 Swiss-funded women’s empowerment projects (Nepal, Bangladesh and Kosovo). They explored the tensions between the kinds of MEL preferred by donors (broadly, generating lots of numbers) and alternative ways to measure what has been going on.
They start by breaking down the fuzzword ‘empowerment’ into the ‘four powers’ model (power within, power with, power to and power over), best known from my Oxfam colleague Jo Rowlands’ 1997 book ‘Questioning Empowerment’ (although she claims not to have invented it) and used by everyone ever since.
When you disaggregate power in this way, you come up with an interesting finding:
‘[quantitative] M&E can capture some evidence of increased ‘power-to’ in numbers of people trained in a skill or knowledge, or able to market their products in a new way, or mobile phones distributed to enable women traders to share knowledge. However, ‘power-within’ is a realm of empowerment which does not directly lend itself to being captured by quantitative M&E methods.’
What’s more, while women obviously worry a good deal about income and putting food on the table, soft data (eg on feelings and perceptions), best collected by qualitative methods such as in-depth interviews:
‘Appear to be what the women value most in [these] projects. While this is probably a very obvious point for feminists working with women, it is noteworthy for practitioners who tend to focus on supporting ‘power-to’ through provision of material resources and other tangible changes.’
That certainly chimes with what I found when talking to women in Community Protection Committees in the DRC last month – the biggest personal impact of their participation was the palpable sense of pride and self-esteem that came from learning about their rights and how to exercise them, and then passing that knowledge on to their neighbours. Hard (though not impossible) to put a number on that. Listen to this interviewee from Bangladesh:
‘Before taking part in the project, I was not allowed to visit places outside my house. This all changed after I joined the producers’ group. My income and communication skills increased and improved. Due to the income and awareness, my husband allows me to attend different meetings of the producers’ group, village and district levels. Due to my involvement in the producers’ group, other producers encouraged me to run for a local government election as member in Union Parisad [UP – lowest tier of local government in Bangladesh]. I was motivated to try and finally was successful in winning the election. From a simple housewife, I am now an elected member of the UP.’
That last quote highlights another plus of qualitative methods – they really help communicate project impact (as do numbers, of course – maybe for different audiences). But Carter & co. want to move on from a crude ‘quant v qual’ dichotomy. They argue that quant and qual methods complement each other – and that mixed methods can actually be the best way to tackle both the ‘did change happen?’ questions and the why. For example, ‘Research methods associated with collecting qualitative data often actually reveal unexpected quantitative data, including changes in children’s school attendance, better nutrition, and so on.’
And how you do qualitative research matters. Yes, lots of qualitative research is pretty slapdash, but beware the temptation to ‘professionalize’ it and send for Rigorous Qual Consultants Inc: ‘If qualitative information is collected by an external agency with a clear, pre-determined mandate, there is the risk that much potentially interesting information is ignored as irrelevant to the task in hand, and not transmitted to the project staff.’ So NGOs may be better advised to build staff skills in qualitative research, rather than outsourcing.
In the end, they argue, ‘the best way to measure empowerment is to ask those directly concerned’. But it’s not quite as simple as that. Sure, it would be perverse in the extreme if we tried to measure empowerment by ignoring the nuance of voice and lived experience of those involved in order to generate another dry statistic. But equally, you can’t just rock up in a village, ask ‘do you feel empowered?’ and expect to get a useful result.
There are clearly difficulties with putting all this into practice, namely that ‘‘value for money’ seems to require quantifiable facts’. But the authors think that’s no excuse. ‘Nevertheless we wonder if better communication on the part of development professionals about the worth of qualitative evidence in demonstrating value could mitigate this demand.’