Community Driven Development: Howard White and Radhika Menon respond to Scott Guggenheim

Howard White and Radhika Menon respond to Scott Guggenheim’s recent post on Community Driven Development

Evaluations have two functions: lesson learning and accountability. We believe that our report on community-driven development offers useful lessons for programme managers, practitioners and researchers. Although we posted a blog response to earlier comments, a critical backlash continues. This is disappointing – especially as our ‘critics’ use arguments from our own report against us. We agree with virtually every point in Scott Guggenheim’s blog, though he writes as if we are at loggerheads. So we would like to highlight the important lessons, which are mostly areas of agreement, and set the record straight on perceived shortcomings in our report.

Our review is not a critique of CDD. And we do not say “CDD does not work”. We find that CDD has been enormously successful in delivering small-scale infrastructure. We report figures on the thousands of facilities built, which serve thousands of people in many countries. We do say that there is insufficient evidence on cost-effectiveness, so this is an area for further research. We do not recommend a switch to local government procurement.

The positive effect on infrastructure has, however, not always resulted in improvements in higher-order social outcomes. Meta-analysis of impact evaluation results shows that CDD programmes have a weak effect on health outcomes and mostly insignificant effects on education and other welfare outcomes. While these are the overall findings, the report identifies and unpacks instances where there are positive effects. For example, girls’ enrolment increased in Afghanistan despite education being a very small share of overall investments. We attribute the increase to changing gender norms supported by the National Solidarity Programme (NSP). Our explanations chime with what Scott has to say, yet somehow his blog makes it seem as if we disagree with him.

Further on gender norms, the impact evaluation of the second phase of NSP in Afghanistan did find ‘mixed effects’ of the programme on gender norms. Men’s acceptance of women in leadership at local and national levels had increased, as had women’s participation in local governance. However, NSP did not lead to any change in men’s attitudes towards women’s economic and social participation or girls’ education. Scott’s blog reports only the first finding, not the second, in arguing that we are wrong to say the evidence is mixed.

In any case, our statements in the report are based on reviewing the evidence not from just one case but from over twenty CDD programmes, across which success in promoting inclusiveness is certainly mixed. We argue that programme designers can learn from where there has been more success, as in Indonesia.

Our examination of variations in impact on education outcomes is an example of how we analyse heterogeneity. Heterogeneity is the friend of meta-analysis, not its enemy. Meta-analysis allows us to explore the variations in design and context which explain programme performance. But we have been criticised for including a range of programmes, especially social funds. Such criticism fails to recognise how social funds evolved – those in Malawi and Zambia came to be run using a very CDD-like approach. An external agency may be involved in vetting community decisions or choosing between competing proposals; that is one of the CDD design variations. But all programmes – not just social funds – imposed some limitations on the use of the funds.

Our review finds that CDD has no impact on social cohesion (see right). There is no heterogeneity here: the lack of effect is consistent across contexts. It is in building social cohesion that CDD has not worked, and meta-analysis is useful precisely because it clearly illustrates the consistency of this finding. Scott’s blog concedes this point. So we are on the same page as far as these findings are concerned – which is not the impression you get when you read his blog.

As we say in the report, the lack of impact on social cohesion is not a new finding. Indeed, one of us was a co-author of the 2002 OED review of social funds – including the CDD-like Malawi and Zambia funds – which reported no impact on social capital, as did the OED CDD report three years later. Our review confirms this finding now that we have additional evidence from high-quality impact evaluations.

An issue for further research that we did not flag, but should have, is the long-run effect on governance of the longer-running programmes. We do distinguish between programmes with a multi-year, multi-project investment in communities and ‘single-shot’ designs. It is plausible that the former, like the Kecamatan Development Programme in Indonesia and Kalahi-CIDSS in the Philippines, would have a larger impact. But there is no evidence of this. We can say there may be a longer-run impact, which further research could assess.

Reviews aim to be comprehensive, but they have inclusion criteria, so some studies that people think should be included get excluded. What matters is that we use relevant evidence to analyse whether programmes have worked, and delve deeper into what they have worked for and why they have worked or not. People who have worked on specific CDD programmes have a richer viewpoint but also a more restricted one. Scott, too, presents research findings from contexts he is associated with. This does not mean we discount what he has to say, but it has to be put into the bigger picture.

Reviews do have their methodological limitations and evaluations can have different findings across contexts. The answers are not straightforward; they are often nuanced. Our plea to the development community would be to resist getting into ‘My study is right’ and ‘Your study is wrong’ debates and spend more time in constructive conversations about using evidence to inform programmes that can improve lives. Yes, we say CDD doesn’t build social cohesion. But we don’t say CDD doesn’t work – the answer depends on the outcome you are looking at. And there have been substantial variations in CDD’s effectiveness, so let’s learn from the variations to design better programmes.

For those interested, we also recommend reading the full report (rather than the brief which provides just a summary that some have reacted to).





2 Responses to “Community Driven Development: Howard White and Radhika Menon respond to Scott Guggenheim”
  1. David

    This has been a useful discussion – and an instructive one. The material in the 3ie report is of good quality and offers some useful synthesis, albeit with a few weaknesses that Scott points out in his response. I think the two main issues are the tone of the language and the imagined audience.

    The tone of the report, in the brief but also in the longer report, is about judgment. Sentences are declarative and definitive, saying things like “CDD programmes have little or no impact on social cohesion or governance” in response to an implicit question of “do CDD programmes accomplish X?” The report does not read as though it is attempting to determine what explains differences in CDD programme impact on social cohesion or on governance – a question of “what do CDD programmes do well/less well and why?” That may be in the authors’ minds, but it’s not how the resulting information is communicated. This also seems to drive a tendency to synthesize an array of programs into a single set of “CDD assumes” assumptions and then test that set, rather than viewing the assumptions around CDD as a range whose variability might be an important aspect to explore. The assumptions are simplified down to a core model for testing, so the exercise becomes assumption testing rather than curiosity about the validity of the different assumptions grouped under the same rubric.

    This seems to relate to an implicit audience of policy-makers who are making decisions about whether to authorize CDD investments, when I think a more meaningful audience would be practitioners who are seeking to improve programming portfolios that may include CDD. Rather than work to improve the approaches of practitioners, the policy brief is framed as aiming to tell practitioners’ bosses whether to approve of their work or not. Doing so both sets up a natural conflict with practitioners, by selecting an optic of judging them rather than helping them, and means that certain aspects of the practice (such as the appropriate comparators that Scott cited in his first response) are not well nuanced in the write-up – it does read as though written by relative newcomers to the design choices that inform CDD.

    Basically, I feel like Howard and Radhika did most of the hard work in gathering and synthesizing this information, and it could be a much stronger paper if it was written toward the right audience and purpose – stimulating improvement in programming by designers and practitioners, as part of their existing discourse, rather than sitting outside of that discourse in judgment for an audience of authorizing policy-makers. Contributing to the science of delivery, rather than being the definitive expert take on “what works,” is a more useful way to think about this research, and I’d encourage 3ie to shift mental models in that direction for future work.

  2. Duncan Campbell

    Maybe we should be doing CDD because it’s the only approach that respects people? Anything else is colonial, and I think the “meta-analysis” has pretty much trashed the efficacy of colonialism … unless, of course, you’re the coloniser. How about asking the victims of colonisation how much they value being respected, and then re-do the “cost-benefit” analysis of CDD. That might change the conclusions …
