The evidence debate continues: Chris Whitty and Stefan Dercon respond from DFID

January 23, 2013

Yesterday Chris Roche and Rosalind Eyben set out their concerns over the results agenda. Today Chris Whitty (left), DFID’s Director of Research and Evidence and Chief Scientific Adviser, and Stefan Dercon (right), its Chief Economist, respond.

It is common ground that “No-one really believes that it is feasible for external development assistance to consist purely of ‘technical’ interventions.” Neither would anyone argue that power, politics and ideology are not central to policy and indeed day-to-day decisions. Much of the rest of yesterday’s passionate blog by Rosalind Eyben and Chris Roche sets up a series of straw men, presenting a supposed case for evidence-based approaches that is far removed from reality and in places borders on the sinister, with its implication that this is some coming together of scientists in laboratories experimenting on Africans, 1930s colonialism, and money-pinching government truth-junkies. Whilst this may work as polemic, the logical and factual base of the blog is less strong.

Rosalind and Chris start with evidence-based medicine, so let’s start in the same place. One of us (CW) started training as the last senior doctors to oppose evidence-based medicine were nearing retirement. ‘My boy’, they would say, generally with a slightly patronising pat on the arm, ‘this evidence-based medicine fad won’t last. Every patient is different, every family situation is unique; how can you generalise from a mass of data to the complexity of the human situation?’ Fortunately they lost that argument. As evidence-informed approaches supplanted expert opinion the likelihood of dying from a heart attack dropped by 40% over 10 years, and the research tools which achieved this (of which randomised trials are only one) are now being used to address the problems of health and poverty in Africa and Asia.

The consequences of moving from expert (ie opinion-based, seniority-based and anecdote-based) to evidence-based healthcare policy, far from being some sinister neocolonial experiment, have been spectacular. To quote a recent Economist headline, ‘Africa is currently experiencing some of the fastest falls in childhood mortality ever seen, anywhere’. It is a great example of the positive side of modern Africa that the current excellent Oxfam publicity campaign (right) is all about. This success is based on many small bits of evidence, from many disciplines, leading to multiple incrementally better interventions. Critically, it also involves stopping doing things which the expert consensus agreed should work, but which when tested do not. It is no accident that one of the most evidence-based parts of development is also one where development efforts have had some of their greatest successes.

Proper evidence empowers the decision-maker to make better choices. This is a good thing. In every discipline, in every country, where rigorous testing of the solutions of experts has started, many ways of doing things promoted by serious and intelligent people with years of experience have been shown not to work. International development is no different, except that the communities we seek to assist are more vulnerable, including to our bad choices.

Much of what we all do in international development has very limited evidence that it does any good (in this it is no different from many other policy areas) – which is not the same as saying it is pointless. Rather, we don’t know what is pointless. Some of our actions will work better than we think, much will work less well than we hope, and some will be damaging the poorest without our realising it. In the evidence-light areas we just don’t know which are which.

We must have the humility to accept that we are all often wrong, however reflexive the practitioner, however deep their reading and experience and passion to do good. Evidence-based approaches are not about imposing a particular theory or view of the world. They are simply about taking any opportunity to test our own solutions in the best way available, using evidence honestly when it is available to inform (note the word) decisions, and when the facts change, changing our minds.

This honesty includes saying to decision-makers when evidence is methodologically weak, mixed or missing so they know they are on their own, unable to rely on (or make a claim on) the evidence. The worst possible solution, which we know Chris and Ros would also deplore, is using the social power of the ‘expert’ to imply we know the answer when we actually have no solid evidential basis for our opinion or prejudice.

A few false assumptions about evidence-based decision making

Some of those who express unease about evidence-based policy and practice seem to assume that it is always based on randomised trials and quantitative methodologies: not so. Methods from all disciplines, qualitative and quantitative, are needed, with the mix depending on the context. Randomised trials are one tool amongst very many, although a good one in the right setting. The argument that evidence-based approaches can “only apply in cases of individual treatment and not the wider community level” ignores over 30 years of methodology which has done exactly that, with very convincing results.

A sterile argument between people who on the one side believe that a randomised trial can answer any question (they can’t), and people who do not appear to be aware of any methodological advances since the 1970s except in their own narrow field, is a depressingly familiar experience. We know this does not apply to Rosalind and Chris, but listening to people passionately critiquing methodologies they have not taken the trouble to understand does no good to anyone. This applies both to a randomista who seems to believe that all there is to qualitative social research is a few focus groups and in-depth interviews, and to people from a more qualitative social science background who would have trouble explaining the difference between cluster randomised and stepped-wedge designs but assume both are irrelevant to social research anyway (both can be used to measure societal rather than individual effects).
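By way of illustration only, here is a minimal sketch in Python (the village names and number of periods are hypothetical, not drawn from any study) of how the two designs allocate an intervention at the cluster rather than the individual level:

import random

random.seed(1)
clusters = [f"village_{i}" for i in range(8)]   # hypothetical cluster names
periods = 5                                     # hypothetical number of measurement periods

# Cluster-randomised design: whole clusters (not individuals) are allocated once,
# at the start, to treatment or control and stay there for the whole study.
order = random.sample(clusters, len(clusters))
cluster_randomised = {c: "treatment" if i < len(order) // 2 else "control"
                      for i, c in enumerate(order)}

# Stepped-wedge design: every cluster starts in control and crosses over to
# treatment at a staggered, randomly assigned step, so that by the final
# period all clusters are receiving the intervention.
crossover = {c: 1 + i % (periods - 1)
             for i, c in enumerate(random.sample(clusters, len(clusters)))}
stepped_wedge = {c: ["treatment" if t >= crossover[c] else "control"
                     for t in range(periods)]
                 for c in clusters}

print("Cluster-randomised allocation:", cluster_randomised)
print("Stepped-wedge rollout by period:", stepped_wedge)

In both sketches the unit of allocation is the cluster, which is what allows effects to be estimated at the community rather than the individual level.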

It is tempting to take issue with every point the authors make where we have concerns about the factual basis and logical framework, but we will take just three.

“Evidence-based approaches are pre-occupied with avoiding bias and increasing the precision of estimates of effect”. On less bias – generally true. Please complete the sentence ‘More biased research is better because…’. On precision – no: the range of situations where a more precise answer is a better answer is small.

One statement we would like to address head-on starts “Evidence-based approaches became linked to value for money concerns to deliver ‘results’…”. We agree, and this is a good thing. Doing a pointless thing, professionally delivered and passionately believed in, is always going to be poor value for money. Testing what works and what does not is therefore essential to value for money. More importantly, doing pointless things diverts very limited human and financial resources, in an ocean of need, away from those who could best use them, which is not what any of us are in international development to do.

Is it “technical approaches” on the one hand, and “power, political economy” analysis on the other?

Rosalind and Chris’ key criticism is that evidence-based approaches “deflect attention from the centrality of power [and] politics […] in shaping society”, and they offer “power analyses” as an apparent alternative to assessing rigorously what works. This creates a false dichotomy, as if a choice has to be made between a “technical, rational and scientific approach to development” and an approach that recognises politics and the role of power. It is easy rhetoric, but troubling and, if taken much further, even dangerous. Understanding power and politics and how to assist in social change also requires careful and rigorous evidence, and again, results are not simply what experts would have expected a priori. Recent studies on the positive impacts of female leadership quotas in rural India are for many of us rather surprisingly good news, even if one can fairly worry about their applicability in other settings, while the struggle to find systematically a positive impact of decentralisation and community-driven development programmes is important to internalise in our actions for change, and highlights the importance of understanding contexts and politics. In these cases, it is not a matter of just RCTs, but of rigour, and of combining appropriate methods, including more qualitative and political economy analysis.

Strong analysis of politics and power without offering much in terms of what can be acted upon is similarly unhelpful. They criticise an evidence-focused agenda by stating that “to act ‘technically’ in a politically complex context can make external actors pawns of more powerful vested interests and therefore by default makes them, albeit unintentionally, political actors.” But all actions by external actors will interact with political forces and vested interests. In many of the settings where development actors want to make a difference, power and political institutions are biased against the poor. Being able to act on strong evidence of what works in constrained political settings is crucial.

A reductionist and misinformed view of evidence as purely ‘technical’ or as being only about “what works” is unhelpful – it is also about generating evidence, understanding and learning about why interventions and approaches may work, including understanding the social, political, and economic factors that may enable or constrain the success of different approaches. Far from the search for evidence pushing us in a ‘technical’, apolitical direction, it has reinforced the importance of understanding and trying to tackle the underlying causes of poverty and conflict. There is agreement on the importance of politics and institutions in shaping growth, security and human development. However, the ability of external actors to influence institutions is much less clear and this is where DFID research is now focussed. Ros and Chris have misread the context – the commitment to evidence has opened up the space fundamentally to challenge conventional, technical approaches to aid.

Why it matters for international development

There are large areas of international development where decision-makers are largely flying blind – forced to make decisions purely on gut feeling and ideology not because they wish to but because they have no option. Try making difficult decisions in education policy compared to health policy and the difference in usable evidence is dramatic – yet both are complex, social and context-dependent parts of human life. It is always puzzling when people say airily ‘health is easy’: it is not, and is an intensely political and social subject requiring interventions at societal level.

Today we can eradicate rinderpest in cattle and build bridges over the Zambezi based on rock-solid evidence from many disciplines, but do not have anywhere near as clear an idea how to reduce violence against women or tackle police corruption. All are great challenges with social dimensions but in two of them people have set about finding and testing solutions in a systematic way over many decades.

Having robustly tested evidence-based solutions certainly does not eliminate politics: the decision whether to build a bridge, what sort and where, is an intensely political choice – but at least those making the choice now have a fair assumption it will stand up, based on hundreds of years of incremental evidence. The evidence-barren areas in development are a collective, and in our view shameful, failure by us all in the academic and practitioner community. We should never excuse them with the feeble assertion that it is too difficult or complicated. Development is difficult and complicated – but the bases for making decisions will gradually improve if we are serious about improving them.

In conclusion, we collectively have the capacity to give to our successors in every continent a far better basis on which to make their decisions for their lives than our generation has had. To imply it is not worth trying to provide the best and most rigorous evidence to those who need to make difficult decisions because they will have other influences as well is like saying to someone going for a walk in dangerous mountains that they do not need a map because there will be many other factors that will determine where they go. That is true – but they are still less likely to fall off the cliff if they have one.

Where evidence is clear-cut we should be making that plain to decision makers – and where it is not we should say that as well, be honest about what is there and try to get better evidence for the future. That, in essence, is what evidence-based decision making is about – and all it is about. If the academic community is serious about trying to assist those working in the field (including in Oxfam), and above all empowering the most vulnerable communities to make the most informed possible decisions available for their own development, we should be putting our greatest efforts into supporting decision-makers to use the best evidence, and finding better methodologies in areas where we currently have very weak evidence. There are many, and this should be tackled as a matter of urgency.

Tomorrow, Chris Roche and Rosalind Eyben respond


  1. Why is it that when someone mentions the evil, inhumane, indefensible practices of 1930s colonialism and their current relevance, they are accused of being sinister? That really takes the biscuit!

    Would you accuse someone who cited the German concentration camps when discussing modern European antisemitism of being sinister?

    I think that one sentence alone has exploded any semblance of credibility that your argument might have possessed.

  2. This is an excellent response to what I thought was a weak starting point from Rosalind Eyben and Chris Roche yesterday (I hope they respond more strongly tomorrow). It seems the key point of argument is whether the focus on measurable evidence deflects attention from power and politics or, as you suggest, “has opened up the space fundamentally to challenge conventional, technical approaches to aid”. Can you give any examples of this? Is the failure to find positive impacts of the CDD and decentralisation projects you mention one instance? If so, what has subsequently opened up in terms of political debate?

  3. Excellent post, with an appropriate underlying tone of quiet anger at how evidence-based policy-making is misinterpreted.

    The biggest contribution that the randomistas and their comrades have given us is to highlight just how difficult it is to achieve any kind of meaningful progress in development. Even assuming that a given project is implemented well (and they are often not, for good reasons and bad) the process of supporting human development in poor countries is downright difficult, and much of our work won’t leave much of a positive impression after the project finishes. Why should we be surprised by this? It doesn’t mean we shouldn’t try.

    Attempting to systematically find out which kinds of project have a better track record of success than others is an obligation for development workers that we should all feel proud to be a part of. I would argue it is a moral imperative. Epistemological nit-picking is a luxury activity.

    What Chris and Stefan do not mention are the perverse effects that a “results agenda” can have in a civil service bureaucracy, where the message maybe gets a bit mangled on the way down. Oxfam receives a lot of money from DFID. DFID needs to support Oxfam’s initiatives to systematically examine its own effectiveness and learn from it (which Karl Hughes has posted on here in the past), rather than asking NGOs to count and report outputs (“bums on seats”) for the sole purpose of said numbers being reeled off during a minister’s grilling in parliament.

  4. One of the key issues less addressed in the response by Chris and Stefan is the problem of “objectivity in social sciences”. Once we acknowledge that all scientifically observed reality, established facts, and causal connections are based on a specific set of value premises (“valuations” in Myrdal’s terminology), the discussion should be focusing on whether different theories or approaches, say Chris and Ros vs. Chris and Stefan, are coherent and consistent in terms of their value premises, selection of problems, “evidence” etc. within their own frameworks. For instance, both parties seem to be in agreement that interventions should be based on solid research findings, be they “scientific evidence” or the results of “power and political analysis”. Saying that a pursuit of “scientific evidence” does not exclude evidence on power and politics, as Chris and Stefan did in their response, does not address the questioning of the value premises on which their “evidence-based approach” is based.
    Development is a complicated and complex research field not only because it is multidimensional, but also because it involves inherent ideological and political contestation. Chris and Ros, I think, seemed to point out the epistemological and political nature of the “evidence-based approach” and its implications for politics, policies and developmental outcomes. I look forward to seeing a response from Chris and Ros in this extremely interesting debate.

  5. No comments yet today – perhaps because everyone’s intimidated by the heavy hitters on both sides of this debate – and so have kept to the safer ground of retweeting the link. As the Director of a DFID funded research programme I was tempted to do the same but felt I should stick my head above the parapet. We’ve certainly been challenged by DFID since starting the SLRC programme to think hard about the quality of existing evidence on how services are delivered and livelihoods supported in fragile and conflict affected places. And we’ve been encouraged to focus on the rigour of our methodology in addressing areas where the evidence is weak. We haven’t been pushed in the direction of randomised trials and aren’t using them, partly because we couldn’t afford to and partly because it isn’t in the skill sets of the researchers in our consortium. We have given a lot of thought to the rigour of the methods we are using in the SLRC research.

    So am I one of the people ‘critiquing methodologies they have not taken the trouble to understand’? I’m not – I’m happy to admit my lack of understanding of randomised trials, but I do recognise that it means I can’t criticise them. So if all DFID are calling for is greater attention on the part of researchers they fund to the rigour of their methods, and they recognise the need for a multiplicity of methods, is there a problem? I’m not sure if there is and perhaps only time will tell whether or not the unease of many working in development research is justified. I think where there is legitimate room for further debate is over how different types of evidence, disciplines and methodologies are being valued by DFID as they focus on evidence based policy. There is, after all, a long history of the dominance of economics within development studies. The DFID website notes that the Research and Evidence Division of DFID is ‘synthesising evidence from all sources, assessing it for its quality and disseminating it to policymakers and practitioners’. Greater clarity and transparency about the basis on which assessments of quality are taking place would help to shed some light on this debate.

  6. While the arguments in the debate so far mainly either discredit or justify the results agenda, I think they are located at completely different levels. The points of critique focus on the partly absurd effects of the current way the results agenda is implemented, while the proponents run a basic argument about whether we want to see if our interventions are effective or not. I really think the discussion should be much less around whether we want to see results (of course we do) and much more around how we can obtain these results without the adverse effects that are portrayed on the one hand in yesterday’s post but also by many practitioners in our recent series of discussions in the so-called ‘Systemic M&E Initiative’ by the SEEP Network.

    Due to my engagement in this initiative I have been discussing quite a bit with practitioners on monitoring and results measurement and how to make monitoring systems more systemic. For me this bottom-up perspective is extremely revealing in how conscious these practitioners are about the complexities of the systems they work in and how they intuitively come up with solutions that are in line with what we could propose based on complexity theory and systems thinking. Nevertheless, practitioners are still strongly entangled in the overly formalistic and data-driven mindset of the results agenda (and there I agree with some of the points in yesterday’s post) that is based on a mechanistic view with clear cause-and-effect relationships and a bias for objectively obtained data that is stripped from its context and thereby rendered largely meaningless for improving implementation.

    In order, however, to make the results agenda meaningful we have to embrace a new paradigm that is opened to us by research on complex systems. This would help us to make a Big Push Forward to more effective and meaningful development work.

  7. I’m enjoying the discussion. I found the Niger child mortality case an interesting example to ground what seems to be an epistemological difference: a letter in response by Rasanathan et al in a recent Lancet suggests that building on these improvements will only be possible if they address social determinants:


    Making the reforms sustainable requires social change that takes and keeps children out of poverty – or, as the Big Push Forward would have it, transformational development – and management processes that can adapt to and support such change. It requires a reframing of the problem, and a different type of evidence – which is the main point I took from Eyben and Roche’s post, and which this post seems to agree on.

    It also brought up an interesting question for me about how evidence-based management or decision-making manifests itself in the complex management of a multi-stakeholder programme, and in particular whether anyone has further information on the role of evidence-based management in the Niger case. I couldn’t actually see from the Lancet article cited that the great improvements were linked to evidence-linked decisions (beyond existing knowledge on the kind of health services required to deliver such improvements). On their last page the authors seem to suggest that there were no evaluation systems giving “prospective recording [of] the policy environment or the strength of implementation”, and the improvements could be simply a function of the ramped-up donor funding the article cites. The article only touches on the issue – it would be interesting to see if anyone knows more about the role of evidence-based management in this case.

  8. Also enjoying the discussion – this is a strong, clearly argued post & I find it hard to disagree with the basic line of argument. I am a bit puzzled at the apparent misrepresentation of Mansuri and Rao’s work on CDD/decentralisation though. As I recall, it basically says that when well designed and linked up with the right institutional framework higher up, outcomes are generally good. And it is important to bear in mind that they only looked at World Bank projects…. All of which goes to show that you have to be a little careful when representing the evidence…..

  9. Excellent discussion. Let’s hope it does not stop with this ‘online debate’.

    I think the problem lies deep in the confusion that exists between ‘development’ the industry and ‘development’ the process.

    When I started working as a researcher in a think tank in Peru (on economic, trade, children and disability policies for example; as well as on civil society and the third sector) I did so assuming that I was working on economic, trade, child, etc. policy.

    Only when I arrived in the UK to study for a postgraduate degree did I discover that all that was called ‘development policy’.

    There is a big difference between the two. Development is a mainstream political process (that all countries rich and poor go through). But when the industry talks of development it is really talking about its own world, of marginal politics in the ‘north’ and technocratic interventions in the ‘south’.

    In the UK, health policy is decided by a great many factors or appeals (evidence, sure, but also values, tradition, biases, political calculations, etc.). We may complain about it but we accept that it is a system that works. But health policy for Malawi (or other heavily Aid dependent countries) is decided mainly by evidence (or what often passes as evidence at the time) and usually by foreign experts, albeit with the acknowledgement of the importance of politics (which is not the same as participating in politics).

    A reform in the health sector in the UK is a heavily contested issue. It is the subject of studies from academics, think tanks (from all sides of the political spectrum), corporations, specialised journalists (and the tabloids), NGOs, and the general public. This is why the NHS reform cannot ever happen within a year or within the confines of a LogFramed project.

    And this is good. It means that politics are serving their purpose of letting us make the most appropriate decisions for our society, today.

    But in the Aid sector, in the development industry, this is not the case. User fees were imposed and removed based on evidence (at the time). Aid agencies introduce ‘evidence based’ sexual reproductive health solutions to countries where the public debates on these issues are not even happening yet. The industry tests (because most RCTs are tests of theories) the introduction of new institutions (cash transfers or other forms of market incentives, for example) into societies with little regard for the long term effects these may have on them beyond the immediate objectives of the interventions (have a look at Michael Sandel’s excellent book on markets).

    This is what is, I think, at the heart of the problem. We get confused between ‘based’ and ‘informed’. The scientific method can tell us what seems to work (with a degree of probability) or has worked. It can tell us what is and is not. It can present options to solve a problem and give us some sense of their likelihood of success. But it cannot tell us what to do.

    What to do (what criteria to value more over others, for instance) is a matter of values.

    So the key question is who has the right (and the responsibility) to use the evidence?

    One way to answer this question is to ask ourselves (in the UK) whether we (as a whole) would be happy with USAID funding a large evidence-based campaign to reform the NHS or our education policy. What would the French say if we sent our NGOs or contracted transnational corporations like KPMG or PWC to run electoral reform programmes there?

    Chris Whitty and Stefan Dercon are right that the choice is not between evidence and politics. But what they do not say is that this is not their choice to make.

    Chris and Stefan, after all, are not accountable to the billions of people around the world who are affected by the policies they (DFID) and their subcontractors (NGOs, think tanks and consultancies) design and implement. Their heads won’t roll; their jobs will still be there; and mistakes will be turned into lessons learned.

    What DFID ought to be doing, if it truly believes that evidence and politics can coexist (and of course they can) is to focus on building the capacity (no workshops please! Focus on tertiary education) of the countries it seeks to help.

    It will take time and it will involve giving up on that ridiculous career of ‘development studies’. But the economists, social scientists, political scientists, doctors, engineers, mathematicians, philosophers, historians, etc. that a proper investment in education produces will be, eventually, capable of balancing evidence and politics and making the most appropriate choices for themselves.

    DFID (and other agencies) should be more concerned about the quality of the public debate rather than the immediate adoption of new policies or the fantasy that it is possible to make poverty (or hunger) history.
    DFID Zambia, for example, should be praised for its bet that strengthening economic policy debate is a more adequate objective than achieving policy change (even if it is evidence based). If successful, they will help build the long term capacity to make informed decisions in the future.

  10. What particularly troubles me about this debate is that it buries a crucial point.

    Chris Whitty and Stefan Dercon criticise the false dichotomy created through juxtaposing “evidence-based approaches” (here epitomised as based on randomised trials) with “political approaches”. Effectively, the two sides are in disagreement over whether evidence or politics drive policy. The false dichotomy in the evidence debate is indeed kept alive in both research and practice.

    The crucial overlooked point in the framing of the debate is that both approaches ought to be about empirical data. Understanding politics and power is impossible without solid information. The information required, however, is of a very different kind than what is needed for the infrastructure projects listed by Chris Whitty and Stefan Dercon as an example of good evidence-based programmes.

    At this point comes a full disclosure: like Paul Harvey below, I don’t know much about randomised trials. Like him, I am also directing a DfID-funded research consortium, which looks specifically at power and politics in situations of violent conflict (The Justice and Security Research Programme, JSRP).

    What we have found in doing extensive reviews of the literature on, for example, security or conflict resolution is that rich empirical data is rather sparse on either end of the false dichotomy. By rich empirical data I mean the kind of data that draws on multiple methods and perspectives gathered over a longer period of time and that has been critically scrutinized by various sets of eyes.

    Instead, qualitative empirical data on processes of power and politics often falls short of allowing profound and long-term insights that shift programmatic thinking. Take the field of conflict resolution, for example. Much of the published research on the topic starts from either quite a narrow definition of a particular conflict or with the disclaimer that a conflict is very complex, implying that the complexity cannot be adequately captured.

    Conflict resolution is often conceptualised as a journey from A to B, in which reaching the end point in particular provides the measure of success or failure. The multitude of experiences that happen on the way tend to be covered in quite a reductive way, including how a conflict resolution process becomes a further stage of a conflict and is thus self-defeating. Of course, this is where true complexity lies and where rich empirical data might at least help to shed light. I would argue that how those caught up in a conflict experience the path to resolution decides whether a conflict ends.

    This is truly a complex journey. Not much research of any method seriously engages with this. Broadly speaking, many development programmes identify a procedural starting point for their plan (which might or might not be based on research), but then fall short of continuously interrogating how the programme’s presence continues to shape the very politics and power in which our research programme is so interested.

    In short, such processes can certainly not be understood through randomised trials, but they are currently also not sufficiently researched with other methods. There are exceptions, of course.

    Crucially, this ongoing debate about evidence, power and politics to me has brought into focus the task we as researchers face. At the JSRP, we are working on finding ways to record and present such complexity, with a particular focus on how the true experts—those who are caught up in power, politics, conflict and development programmes—experience these processes. The experience of the supposed end-users of policies and programming is often recorded in a too casual way, yet in understanding both the quality and quantity of the experience lies, we think, a way to policies that address the challenges of living in difficult situations in a better way. From a research perspective, that would be a result.

  11. Duncan promised a wonk-off, and he has delivered :)

    The debate is very interesting, but I feel that what it is missing is the reality of modality, or ‘getting the money out’. To take two related points raised in this and yesterday’s posts as an example: evidence-informed decision-making, and VFM.

    The reality of many interventions – this applies to DFID, but equally to (m)any other donor too – is that programmes are based on the 3-5 year ‘projectible change’ model: here’s a problem, here’s an amount of money, here’s what we want to see, go to work. Certainly there are moves to try to ameliorate this, via theory of change etc, but how far these are being effectively used, or rather are becoming ‘logframes on speed’ (to reference an earlier post by Duncan) is up for debate.

    What this then leads to in VFM terms is the need to very quickly find data that supports both the initial assertion and the implementing agency’s need to show that it is succeeding. The result is poor use of evidence, leading to poorly informed decision making (with notable exceptions of course!)

    So what is a possible alternative? Well, at present I’m somewhat seduced by the simplicity within complexity, or ‘failing forward’ as some have it. So: here’s an issue, what is happening, how can we support this, how’s that going, how can we do that differently or better?

    Perhaps some can/will protest that this is exactly what they ARE doing, and I’d be very interested to learn more. But the well-known challenges to this approach, especially at donor level, are that it requires flexible time and funding, both of which are hard to sell, even if it may represent a more positive argument in VFM terms!

    The political nature of our work looms large not just in countries where we work but to these arguments too. To echo today’s post, this is not to say we’re condemned to the pointless, but more that this is the challenge we need to embrace, to be better at showing what we can and do achieve, and keep trying to do it better.

  12. Thanks Jake,

    You are raising the important point about if/how evidence can actually be applied within the political contexts of organisational decision making, and how difficult this is in reality. I have enormous sympathy for DFID staff and their partners charged with the laborious task of coming up with appraisal tools and convincing business cases that supposedly use evidence to inform funding decisions. It often seems like we are in a policy-based evidence paradigm rather than an evidence-based policy one! Suspect this issue will get more coverage today!

  13. Enrique, thanks so much for this comment. I agree whole-heartedly that DFID (and other agencies) should be more concerned about the quality of the public debate rather than the immediate adoption of new policies. Some years ago I tried to make a similar argument to a House of Commons enquiry on aid effectiveness. They concurred with my description of the political messiness of policy making in the UK and were rather stunned when I asked why they thought it might be any different in an aid recipient country. But when I read the report from the enquiry, I found this point was not included. Aidland is another country (see Raymond Apthorpe’s chapter in Adventures in Aidland). DFID staff are under enormous pressure to put on a performance of doing things differently there than in our own world. See, for example, last year’s Institute for Fiscal Studies report on the Government’s budget: “Because the spending occurs elsewhere in the world, there is a relative lack of public [meaning the UK public] scrutiny of the budget’s effectiveness – voters can’t experience the effectiveness of aid spending in the way they can experience their local school, hospital or police force. This argues for an even greater degree of transparency and clarity about spending decisions and effectiveness than is seen in the rest of public spending”. This kind of pressure forces DFID into trying (or at least pretending) to be in control of the policy environment – hence the utility of evidence-based discourse about ‘testing what works and what does not [as] essential to value for money’.

  14. There seem to be two main lines of critique against the norms that have accompanied the “evidence-based revolution” in aid:
    1) The approach tends, by presenting development as objectively knowable if broken down into discrete and small bits, to drive attention toward small, more easily measurable interventions to test, particularly those that are suited to situations that are simple or complicated rather than complex.

    2) The approach tends, by presenting development as an objective, scientifically-verifiable measure, to privilege Western scientists over local experiences and to lead to expectations that development can be achieved through scientific progress without messy political interactions.

    This response somewhat adequately deals with the second objection, in that it acknowledges the importance of power and politics, though as noted in other comments, it still treats the aid provider as an objective outsider to the (static) system being studied.

    But my main problem with this response is that it fails to deal with the first objection: that the current processes around evidence-based results fail to grapple with complex systems, interaction effects, and emergent properties that dominate most aid project landscapes.

    A fundamental critique of the evidence-based revolution is that it actually diminishes efforts to get rigorous evidence about addressing complex challenges. We all want evidence, it’s a question of whether the current framing of “evidence-based” is distorting what types of evidence we gather and value. For those who think that the current emphases on methods to test what works are distorting how we value the evidence coming in (RCT=gold, qualitative methods=junk), this offers little other than platitudes about lots of other methods existing.

    Personally, I would be a bigger proponent of the evidence-based revolution if it seemed that it was coming to folks interested in power, politics, and development, and asking them what their questions are and what evidence might contribute to their work. Absent a learning agenda set to fit complex space and concern itself with power, it will continue to seem to me to be an instance of methods leading research – or searching for keys under the light rather than inventing a flashlight.

  15. All would agree that good data and knowledge are important for good policy making. In addition we could all agree that it is important to understand implications and outcome of policy interventions. The question, it seems, is about how best to achieve these ends.

    There are at least two central issues: a) to recognize the strengths and weaknesses of various methods and consider in what ways they help policy analysts and policy makers to engage with the realities of everyday challenges in developing countries; b) not to ignore the fact that methods will not produce results independent of the way they are deployed to answer particular questions. If we are interested in ‘proper evidence’ one needs to answer the following questions: who are ‘we’? What is the definition of ‘properness’? And whose interests will a given definition of ‘properness’ satisfy?

    a) As Whitty and Dercon argue, it is useful to know something is good enough to fund – good value for money. How do we assess this? What kind of research is needed to do this assessment in the social contexts in which policies are implemented? Medicine is commonly used in these debates to show how a precise method would yield good applicable results. However, the medical analogy hides important differences. International development policies themselves are not medical treatments and the efficacy of these policies cannot be assessed as if they are treatments, by methods that assess how people react to ‘our’ interventions. These assessments don’t say much about the context of people’s own lives and how they choose to live and behave when we are not testing ‘our’ policies on them as if policies are pills. Social contexts are complex, they react to interventions (including research) and these reactions and social contexts change over time. This becomes more complicated with questions of construct validity, internal validity and external validity. When the attempt is to test ‘if something or what works’, it is not a simple case of observing that X leads to Y. These tests will have to explain: a) why what is being tested by ‘us’ is relevant for the context (otherwise we are using people to test our ideas for our purposes); b) ‘why’ something works or not: what are the conditions under which an effect is emerging. Things emerge as a result of many causal interactions independent of what we can see and know; is there a way to know and consider these (apart from using established statistical tools to deal with unknown confounders)? c) what is in our methods which will allow us to think that what worked here might also work somewhere else. Finally, what is in this research and method which tells us why what we observed as a relationship between X and Y would hold over time? How do we assess the change that will be due to a policy intervention based on limited evidence?

    b) Whitty and Dercon in their argument talk about the use of evidence in various ways. It is sometimes about extracting the right information (but is that, on its own, enough for an evidential claim?) and at other times it is about the assessment of the impact of particular interventions (what are the implications of giving free bed-nets over selling them, would micro-credit to women reduce domestic violence), or to evaluate the efficacy of policies (that we have implemented) so that they can be scaled up or stopped (one hopes!). Furthermore, they suggest ‘Being able to act on strong evidence of what works in constrained political settings is crucial’ – here we seem to get another purpose underlying this debate: evidence as an attempt to present objective facts to leverage others to act on what we want them to do. It is only this last point which reveals directly what is implicit in the others and obscured by the particular method discussions: that evidence is used by an actor for a particular purpose. It is therefore important not to lose sight of the socio-political interests and ideological positions that frame the evidence practice, first by setting the kind of question that matters in a given field. This orientation then informs the kinds of methods that are supposed to produce the right evidence! Evidence is always for some purpose (evidence for something) and by some actor.

    Certain methods are prioritised over others. One cannot ignore the fact that in the social sciences in recent years the evidence-based policy debate has very much emerged at the juncture of RCTs as a method and their use in economics, in particular in development economics. This is an important debate but it is a disciplinary debate in which economists are trying to move towards a more empiricist model of enquiry. Whitty and Dercon are pointing out that there are many methods other than RCTs that can produce evidence. This is absolutely right. However, it is hard to have this debate on methods and evidence-based policy in the abstract. If it is indeed the case that we could use multiple methods for evidence production to be used for policy purposes, we need to have a discussion on how an organization like DFID uses evidence based on different methods. Ethnographically-based research on DFID’s knowledge and evidence-based policy practices would be very helpful to understand the broader context of decision-making processes and the position of evidence/knowledge in those processes. How do policy makers choose between competing evidence? What determines their choices and the kind of evidence made available to the policy makers? What else do they use to decide, given that evidence is not the only basis for decisions? Before I am told that DFID has its own research division working on these issues, I would like to say that I think research by external researchers would also help.

  16. This is all very interesting and a great debate. I think an important distinction is worth making between evidence BASED policy and evidence INFORMED policy.
    A policy development process that has been informed by evidence (for example by due and competent consideration of a range of appropriate sources of evidence and effective public debate of that evidence) is what I assume we would all like to see.
    But that does not mean that the end policy enacted will be evidence based. The process can reasonably be considered successful and functional even if the end result is the choice of policy makers to reject some or all of the evidence (for whatever reason but let’s just suggest political considerations for now) and that, in my view at least, is acceptable. That would be evidence informed policy.
    As is mentioned above, this means that priority should be given to ensuring effective public debate and, by extension, effective capacity of policy making stakeholders to be able to demand, access, evaluate and use evidence (which includes rejecting it). That is often a significant undertaking and workshops are not enough. It is a significant and often structural challenge.
    Personally, I think that a critical issue in all this is not just looking at policy makers and their support structures but that this capacity also needs to exist in people and organisations that are in opposition to incumbents and current centres of power. Building oppositional capacity will help drive more effective use of evidence by those in power. Of course not many development funders are keen on funding such potentially politicised capacity development but it is needed.

  17. Re “Methods from all disciplines, qualitative and quantitative, are needed, with the mix depending on the context” can I put in a recommendation for this recent publication by Goertz and Mahoney (September 2012), “A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences”. The authors manage to present a very useful account of how qualitative methods can be used rigorously, without in the process demeaning/diminishing quantitative methods. It is one of those uncommon books where more than one reading is well worthwhile. Practically useful as well as epistemologically interesting!

  18. With respect to Hakan’s comment “Ethnographically-based research on DFID’s knowledge and evidence-based policy practices would be very helpful to understand the broader context of decision-making processes and the position of evidence/knowledge in those processes”. It would be good to know Chris Whitty’s response to this excellent suggestion.

  19. I really enjoyed both posts and the discussions above.

    What we’re essentially debating is a paradigm war – and paradigm wars aren’t resolved overnight.

    None of us can ever remove ‘bias’ from research/interventions altogether – positivist models assert the objectivity of ‘hard’ data, but even that ‘hard data’ is interpreted by a subjective mind, usually a researcher who will have some baggage due to background/upbringing, and if no personal baggage, some institutional incentives to interpret the data one way or another. Hard data rarely exists in a vacuum. Institutions evaluate evidence too, and analysis methods will differ. If you can’t eradicate bias, just accept that we have it.

    Using different methods could produce contradictory evidence, as Hakan points out, and this might get us closer to ‘objectivity’, not completely there, but closer. But in bringing forward contradictory evidence, we might also disrupt power relations, introduce some chaos and uncertainty in our conclusions and so evidence used in this way can be tactical. Keeping an element of chaos and surprise in a fast-moving world has to be welcome.

    The over-reliance on evidence also begs the question of to what extent the views of marginalized groups can be brought into the evidence mix – often they’re far removed from the research lab. SIDA have funded some nice work looking at people’s perception of education (The Reality Check Approach), which complements the results-based M&E framework of its interventions. Multiple methods of this kind could build a richer evidence base and can be scaled up.

    Complexity – if aid is complex, how will evidence based policy help capture this complexity?

    My main question is around change in real world situations.

    Evidence-based policy favours incremental change, looking at backward trends and patterns, and transformational change often requires a different approach. Otherwise we’re saying we can achieve the same change with regression analysis, predictions and forecasting as we can from the impact of someone like Barack Obama on American elections.

    Are we interested in incremental change or game-changing and transformational effect?

  20. “What Chris and Stefan do not mention are the perverse effects that a “results agenda” can have in a civil service bureaucracy, where the message maybe gets a bit mangled on the way down. Oxfam receives a lot of money from DFID. DFID needs to support Oxfam’s initiatives to systematically examine its own effectiveness and learn from it (which Karl Hughes has posted on here in the past), rather than asking NGOs to count and report outputs (“bums on seats”) for the sole purpose of said numbers being reeled off during a minister’s grilling in parliament.” -James

    This comment further up in the thread aligns very well with my own lived experience working in M&E for a bilateral funded agricultural project in East Africa. I agree with Chris and Stefan in their argument that evidence-based decision making matters when dealing with complex issues, especially when there is a lack of upward accountability in the sector from ostensible beneficiaries. However, the well intentioned desire to hold implementing partners accountable for results often leads to a misplaced focus on numbers and figures such as ‘number of trainings held’ or ‘number of BDS provided’.

    I very highly doubt that this is the type of ‘evidence’ Chris and Stefan are referring to but it’s important to understand how this line of thinking manifests itself through bureaucratic structures such as USAID or DFID.

  21. Chris Whitty and Stefan use bridges and bovine diseases as examples of evidence-based solutions that ‘worked’. Not too many bacteria/viruses or bridges make independent decisions on a daily basis. We know what works, it’s the European US model – Bank Bailouts!!!!
