Randomized Controlled Trials (RCTs) are all the rage among development wonks at the moment. Imported from medical research, they offer social scientists the tantalizing prospect of finally overcoming the Achilles' heel of real-world research – the counterfactual (aka 'how do we know what would have happened if we hadn't lobbied the government/employed the teachers/built the road, etc.?'). Here, one of the RCT gurus (and recent winner of the almost-as-good-as-a-Nobel John Bates Clark Medal for young economists), Esther Duflo, sets out the case for the technique [h/t Chris Blattman]. She is the Abdul Latif Jameel Professor of Poverty Alleviation and Development Economics in the Department of Economics at MIT and a founder and director of the Abdul Latif Jameel Poverty Action Lab (J-PAL) (must take a look at her business card sometime).
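For readers who like to see the logic spelled out: the core trick of an RCT is that random assignment makes the control group a stand-in for the missing counterfactual, so a simple difference in average outcomes estimates the programme's effect. Here is a minimal, entirely hypothetical simulation of that idea (the villages, baseline values, and the "true effect" of 5 are all invented for illustration):

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: 10,000 villages with varying baseline outcomes,
# and a programme whose true average effect we set to 5.0 so we can
# check whether the RCT estimate recovers it.
TRUE_EFFECT = 5.0
baseline = [random.gauss(50, 10) for _ in range(10_000)]

# Random assignment: each village has a 50/50 chance of getting the programme.
# Because assignment is random, it is unrelated to baseline conditions.
assignment = [random.random() < 0.5 for _ in baseline]

# Observed outcome: baseline plus the effect, but only for treated villages.
outcomes = [b + TRUE_EFFECT if treated else b
            for b, treated in zip(baseline, assignment)]

treated_mean = statistics.mean(y for y, t in zip(outcomes, assignment) if t)
control_mean = statistics.mean(y for y, t in zip(outcomes, assignment) if not t)

# The control group's average approximates what the treated villages would
# have looked like without the programme, so the difference in means is an
# (unbiased, noisy) estimate of the average treatment effect.
estimated_effect = treated_mean - control_mean
print(round(estimated_effect, 2))
```

The estimate will hover near 5.0 but never hit it exactly – which is also why sample size, and Copestake's cost-effectiveness worry below, matter in practice.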
Not everyone is convinced – in a recent debate in the Enterprise Development and Microfinance journal, James Copestake (anti) slugged it out with Dean Karlan and Nathanael Goldberg (pro). Copestake criticizes RCTs on four grounds:
“Problem selection bias. I am worried by reports of bright young “randomistas” narrowing the research agenda by selecting issues for research to fit their preferred tool, rather than finding the best tool to fit the most important issues. For example, use of RCTs to test product design changes should not divert attention from other influences on impact that may be harder to randomise: geographical targeting methods and organisational culture, for example.
External validity. RCTs require a fixed investment and generate evidence at the end of a discrete period of time, rather than continuously. This accentuates the difficulty of choosing which few among many possible ‘treatments’ should be studied, where and when. The value of findings then depends upon their transferability.
Cost effectiveness. I’m very much in favour of experimentation and testing, but remain to be convinced that RCTs are necessarily the most cost-effective way for managers and policy makers operating in complex, diverse and uncertain contexts to evaluate them, compared to triangulating routine monitoring data against focus group discussions and individual satisfaction surveys, for example.
Fourth, there may well be other more technical problems with RC studies. For example, it will not always be possible to ensure that treatment and control groups are not contaminated through spillover effects between them: response to not having a treatment being affected by my knowledge that others are having it, for example.”
If you want to read Karlan and Goldberg's replies to these criticisms and make up your own mind about who's right, I'm afraid you'll have to pay (or go to the library). Anyone know of an ungated version? Alternatively, read the Wikipedia entry linked at the top of this post, which I thought gave a rather good summary of the pros and cons.