The political implications of evidence-based approaches (aka start of this week’s wonkwar on the results agenda)

January 22, 2013

[Photos: Ros Eyben; Chris Roche in Xian, dressed for battle]

The debate on evidence and results continues to rage. Rosalind Eyben and Chris Roche, two of the organisers of April’s Big Push Forward conference on the Politics of Evidence, kick off a discussion. Tomorrow Chris Whitty, DFID’s Director of Research and Evidence and Chief Scientific Adviser, and Stefan Dercon, its Chief Economist, respond.

Distinct from its more general usage of what is observed or experienced, ‘evidence’ has acquired a particular meaning relating to proof about ‘what works’, particularly through robust evidence from rigorous experimental trials. But no-one really believes that it is feasible for external development assistance to consist purely of ‘technical’ interventions. Most development workers do not see themselves as scientists in a laboratory, but more as reflective practitioners seeking to learn how to support locally generated transformative processes for greater equity and social justice. Where have these experimental approaches come from and what is at stake?

The origins and critiques of evidence-based approaches

Evidence-based approaches are preoccupied with avoiding bias and increasing the precision of estimates of effect. In the UK they spread beyond clinical practice when the government elected in 1997 was keen to demonstrate that its decisions would be driven not by political ideology but by objective evidence. Evidence-based approaches became linked to value-for-money concerns to deliver ‘results’ as efficiently and effectively as possible, by a government recently described as ‘truth junkies’.
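
As a minimal sketch of what ‘avoiding bias and increasing the precision of estimates of effect’ means in statistical terms, the following simulation (the programme, data and effect size are invented purely for illustration) compares a randomised estimate of a programme effect with a confounded one:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_trial(n, randomised, true_effect=2.0):
    """Simulate one study of a hypothetical programme with a known true effect.

    If `randomised`, treatment is assigned by coin flip; otherwise households
    with a higher baseline are more likely to enrol, confounding the comparison.
    """
    baseline = rng.normal(10.0, 2.0, size=n)               # e.g. baseline income
    if randomised:
        treated = rng.random(n) < 0.5
    else:
        treated = rng.random(n) < 1.0 / (1.0 + np.exp(-(baseline - 10.0)))
    outcome = baseline + true_effect * treated + rng.normal(0.0, 2.0, size=n)
    return outcome[treated].mean() - outcome[~treated].mean()

for n in (100, 10_000):
    rct = [one_trial(n, randomised=True) for _ in range(500)]
    obs = [one_trial(n, randomised=False) for _ in range(500)]
    print(f"n={n:6d}  randomised mean={np.mean(rct):.2f} sd={np.std(rct):.2f}  "
          f"confounded mean={np.mean(obs):.2f} sd={np.std(obs):.2f}")
```

Increasing the sample size shrinks the spread (precision) of both estimators, but only the randomised design is centred on the true effect; the confounded comparison stays biased however large the sample, which is the distinction the ‘gold standard’ language is trading on.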

Yet, even within medicine, the leap from evidence-based clinical practice into evidence-based policy was challenged. A British Medical Journal article by Nick Black in 2001 drew on an extensive body of contemporary literature on policy processes to argue that policy was shaped by institutional arrangements, values and beliefs, and a variety of sources of information.

Opponents of evidence-based education critiqued its positivist assumptions; its linear cause-effect thinking; and the poor understanding of the tensions between scientific and democratic control of educational practice. An OECD report in 2007 on the reasons for the uptake of evidence-based education in Sweden, the UK, Canada, the USA and Australia, noted the increasing pressure for greater accountability of expenditure and effectiveness, an explosion in the search for measurable outcomes, and demands that impacts and effectiveness be given a monetary value.

The same report noted that evidence-based approaches were largely absent in OECD countries ‘less used to empirical and quantitative methodologies in the social sciences.’ The depoliticisation of policy-making is one of the reasons given by development researchers for their neglect in France.

In UK social policy, evidence-based approaches, with their ‘gold standard’ of experimental or quasi-experimental design, have been criticised as inapplicable to complex issues. In What Works, Tony Harrison argued that evidence-based approaches can only apply in cases of individual treatment and not at the wider community level, where multiple perspectives come into play and no agreement exists about the nature of the problem. This of course is the case with most development programmes, and in particular those that seek transformational change.

Evidence-based approaches in development: an anti-politics firewall?

[Cartoon: Gandhi v logframe]

Arguably, evidence-based approaches build an anti-politics firewall. Development assistance becomes a ‘technical’ best-practice intervention based on rigorous objective evidence, delivering best value for money to domestic taxpayers and recipient-country citizens, mostly without interfering in that country’s politics. They are the latest manifestation of a certain long-standing approach to development that, as Timothy Mitchell wrote in Power of Development (J. Crush et al, 1997), speaks to the sector’s ‘need to overlook its internal involvement in the places and problems it analyses and present itself instead as an external intelligence that stands outside the objects it describes’.

In the 1930s Africa was seen as ‘a living laboratory’ in which to achieve improvements in the welfare of its populations. Evidence-based approaches are reviving the development-as-laboratory idea. In 2012 the World Bank established a Gender Innovation Lab to design ‘innovative interventions to address gender inequality and to develop rigorous research projects in order to produce evidence on what works and what does not’. Jeffrey Sachs’ Millennium Villages have been framed as ‘laboratories to lift people out of poverty’. The best known is the J-PAL Poverty Action Lab, whose mission is to reduce poverty ‘by ensuring that policy is based on scientific evidence’.

In the absence of political debate, this approach can exacerbate the tendency to see people as subjects requiring treatment, rather than as citizens with political voice. Power silences any challenges to the technical framing of ‘the problem’, foreclosing discussion of the structural causes and consequences of inequity and how these should be tackled. To act ‘technically’ in a politically complex context can make external actors pawns of more powerful vested interests and therefore by default makes them, albeit unintentionally, political actors.

High stakes

Evidence-based technical approaches can therefore deflect attention from the centrality of power, politics and ideology in shaping society. We agree with the view of the Developmental Leadership Program that recent research suggests that  the development sector should be ‘at the frontier of a narrative shift between a technical, rational, and scientific approach to development, and a recognition that politics matters; that poverty reduction is not a technical problem but requires significant social change, and that this social change is, and must be, both political and locally led.’

However, this has significant implications for external actors. We need to be self-aware to avoid disempowering others. This requires undertaking power analyses with ourselves factored in – as organisations and individuals who can make a positive or negative contribution, often inadvertently. It means engaging with a wider and more diverse group of policy actors in the state, civil society and the private sector and, whenever possible, supporting debate, locally driven problem-solving and independent research. It means avoiding overly linear, project-based aid modalities that demand omniscience before they have even begun.

As Michael Sandel has recently argued in his book about the moral limits of markets, how we put values (and prices) on things can change their meaning, as well as change the relationship between economic actors. More information (dare we say ‘evidence’) is needed to draw firmer conclusions about the consequences of evidence-based approaches to designing projects and assessing results. This is why the Big Push Forward is currently seeking to crowd-source more information from development practitioners about how they actually experience the ‘results’ agenda, and why we believe this issue needs more debate.

And make sure you come back tomorrow for DFID’s counterblast.

26 comments

  1. Thanks Duncan, as an immersed and independent evaluation practitioner I often read and follow your links with interest. I also look forward to the subsequent articles. I tried to access Nick Black’s article you referenced, but, alas, the price tag is a bit steep, and I could not find it elsewhere. Do you have any suggestions?

  2. I work for Oxfam as a Country Director, and I was at a meeting a few months ago where we were asked to rank ourselves on a scale of one to ten on whether we utilised power in our role (everyone in the room was a director or senior manager). Interestingly, most people put themselves in the centre, and a couple of people ranked themselves as low as a two.

    This experience leapt into my mind on reading that ‘This requires undertaking power analyses with ourselves factored in’. For an organisation that talks constantly about power and power relations, many if not most of our own people are apparently not comfortable with recognising our own. I’d be interested to know if this holds true across other INGOs, and compare that against World Bankers, UN staff etc.

    Ah well, at least it shows humility compared to the Investment Bank ‘Masters of the Universe’.

  3. But the absence of evidence can be just as bad. In 1986 I travelled from Kenya to Burkina Faso to look at the cooking stoves there. In East Africa we had tested every design, first in the lab, then in 400 households, so we knew that the Kenya Ceramic Jiko used a third less charcoal.

    But in Burkina Faso, 20 wood stoves had been designed by European male development workers, and only 2 of them were more efficient than a 3 stone fire managed by an African woman. The hippies hadn’t noticed how crap their work was.

  4. The notion that evidence is proof shows how poorly trained in scientific discourse and method the people having this discussion in development happen to be. And for anyone to take seriously the idea that evidence-based education is a form of colonialism grounded on some kind of authoritarianism of western canons is hardly worth discussing. Literacy is better than illiteracy and no polio is better than polio, or is this something of which academics are not entirely sure?

  5. To wit, evidence operates, as in the empirical sciences, with a hypothesis as its ground and a heuristic principle as its aim. Predictions are based on statistical regularity and NOT on proof. Proof is a term for mathematicians and logicians. The function of replication in the empirical sciences is to assess the possibility of deviation, which makes for better predictions. The fact that evidence can lead to bad inferences, as you see in these conversations, does not mean that evidence can be set aside to play relativist games in science. To suggest that some cultures should be respected in their assertion that the moon is made of cheese seems both disingenuous and condescending.

  6. Thanks so much, Duncan, for this informative theme and for bringing all the elements together.

    There used to be a time, at least for us Oxfam workers, when process and therefore the results, both successes and failures, in other words breakthroughs and breakdowns, were important. The framework of action/reflection was good enough for learning and being effective.

    No doubt there was also much space for experimentation in what and how we intervened. We used to be subjected to periodic questioning on accountability etc., but we mostly got away with a strong moral and somewhat self-righteous defence when pushed too hard, and it used to work.
    I also want to share two other key factors that pushed this debate. The first is that we moved into the arena of campaigning and advocacy, where we questioned official investment, including aid, and its political agenda. The second is the bigger role that official aid played, in that it became more important to the functioning of charities in developing countries. Both of these meant that the subject matter became far more politicised and externally driven.
    My unschooled mother used to say (bless her) that not everything I tell you or teach you has an economic value, but it helps with life and family. Indeed, one of my early experiences in South India as a campaigner came when we were talking to a villager about the impact of a new polyester factory. He said: “The air you breathe here is foul, the water in the river has become black and causes so many health problems to children and adults, so what more evidence do you want?”

  7. Great discussion! Calls to mind the “Dr. Strangelove Syndrome,” which refers to development projects that serve as vehicles for social engineering to inculcate and reinforce behavioral responses and attitudes believed to be productive or desirable. This is one of five syndromes described in a chapter on learning from failure in “They Know How,” written in 1977 by staff of the Inter-American Foundation. The others are:
    “Lawrence of Arabia Syndrome,” when projects are built around a strong central figure and characterized by autocratic decision-making and paternalism.
    “Pollyanna Syndrome,” when management expertise, marketing ability or administrative skills are naively underestimated in the romantic belief that right will emerge triumphant.
    “Messiah Syndrome,” when an administrative elite advocates and implements a project idea that has not been internalized by the affected community.
    “Artificial Insemination Syndrome,” when external funding artificially induces or sustains the vitality and momentum of an otherwise static project idea that lacks community commitment to action.
    The names may be a bit dated… but how many times over the last 35 years have we seen these five syndromes for failure repeated?

  8. It is a bit of a catch though I suppose. At some level some sort of “evidence” is necessary – it would be impossible to expect any donor to provide money without some sort of results as feedback.

    Perhaps the debate could move towards what we mean by “evidence”, and that would shape not only the way we design our programs, but also how we implement them and how we measure our own participation and the programs throughout their various stages.

    In some cases pure evidence is useful, but I agree with you Duncan that in others it does not provide an analysis of a complex issue.

    I think I definitely need to read more about this, though, to fully understand it and the questions it poses.

  9. I fully agree on the need for more debate on this. I just read recently about experimental research on corruption (see http://www.corruptionresearchnetwork.org), which claims that “laboratory experiments make it possible to investigate determinants of corruption, such as intrinsic motivations, social norms, gender effects and cultural determinants of corrupt behaviour”.

    I am skeptical because I remember from my philosophy courses that truth is ultimately a matter of socially and historically conditioned agreement. Someone wrote that what we come to consider as fact or evidence depends on the way we have been conditioned to see the world. On one hand are believers in the admirably ‘objective’ results of the sciences; on the other are those who recognise the existence and complexity of consciousness and human will.

    As the cartoon on Gandhi shows, the central disagreement is not only on how social phenomena are to be understood, but also how they may be acted upon.

  10. From some of the tweets going around about yesterday’s post, Chris and I appear to be against knowledge! Just to be very clear: we are talking about the power relations involved in the generation and use of knowledge in policy and practice, particularly with respect to how we understand development processes. We are critiquing ‘evidence’ as a particular discourse that influences what is thinkable and do-able in the politics of development. Inequities in power and voice mean that only some approaches to knowledge are judged acceptable, and when these are called ‘objective’ it makes them unquestionable – see Eric’s example (above) of the corruption study. We take a critical realist position on knowledge; this means we do not necessarily assume that we can know the world out there independently of the ways in which we understand and act in it in relation with others.

  11. “evidence-based approaches can only apply in cases of individual treatment and not at the wider community level where multiple perspectives come into play and no agreement exists about the nature of the problem”

    That “it’s complicated” doesn’t seem a very convincing argument for ignoring evidence-based approaches.

    A good evidence-based approach should define the question (including the “perspective”), control as many confounding variables as possible, and answer the question. The fact that there are lots of confounding variables, some of which are political, economic or cultural, isn’t a good argument for taking a non-evidence-based approach; rather, it’s an argument for getting better evidence.

    It is possible that ‘getting the evidence’ is very expensive. Spending money without getting the evidence that the money is being effectively used is a bad idea – even if it seems to be working, there might be better ‘untested’ approaches.

  12. Good stuff. Of course, just to nudge things a little further down the rabbit hole, another issue with power analysis and ‘gold standard’ assessments is that we tend to do them ourselves, and so end up with the danger of Gareth’s ‘I scored myself a 9’ (or indeed ‘I scored myself a 1’). We’re trying hard to make power analysis a core part of what we do, but doing it properly does, I think, involve allowing our partners/beneficiaries/principals (what else are we calling them now?) to ‘do power’ to us. What little insight we have into this so far has been refreshing.

    But then, to play devil’s advocate on the gold-standard stuff (speaking specifically of the example of the BOND effectiveness criteria), I don’t think the idea was to set up a system whereby we can take something and say ‘behold! this is the gold standard, now follow ye this!’, but rather to be better, especially in hard-to-measure arenas, at showing that our assertions and results are ‘good enough’, and that they at least meet A standard that THEN enables us to look more deeply at what’s going on. It’s not perfect for sure, but the alternative is that we just get lost in the argument before the evidence is even on the table. The case continues…

  13. This seems like a call to forget trying to understand developing societies and just jolly well get on and design programmes and “do” development.

    Makes me think of this brilliant quote from Forrester trying to counter this exact argument:

    Our social systems are far more complex and harder to understand than our technological systems. Why, then, do we not use the same approach of making models of social systems and conducting laboratory experiments on those models before we try new laws and government programs in real life? The answer is often stated that our knowledge of social systems is insufficient for constructing useful models. But what justification can there be for the apparent assumption that we do not know enough to construct models but believe we do know enough to directly design new social systems by passing laws and starting new social programs?

    Ha ha! Amazing stuff…

  14. OK, I’ll find some other way to manipulate the evidence then! (Or to interpret the evidence, gather the evidence, use the evidence, value the evidence, construct the evidence, trust the evidence, etc. etc. etc.)

  15. Oh, and thanks to Rob Levy for pointing us to that fabulous New Statesman article by Alan Moore. Do read it if you have a moment away from collecting all that evidence. My favorite line? “A lead pipe, in and of itself, is after all just a lead pipe and needs considerable human interpretation to connect it to Professor Plum and the conservatory. It is in this dependence on the unreliable perceptions and concealed agendas of an individual that we finally identify the weak spot of this domineering thug, evidence.”

  16. To Rob Levy (@aid_complexity)

    On Alan Moore, respectfully:

    “Beautiful prose
    But,
    What does he propose?”

  17. Well said, Duncan. Where do I read Chris Roche etc.? The point you make, that ‘poverty reduction is not a technical problem but requires significant social change, and that this social change is, and must be, both political and locally led’, is a good one. Unfortunately Oxfam, like most INGOs, does not practise this very well. Country offices ‘top heavy’ with expats are the norm, especially in the Pacific – not too much ‘locally led’.

  18. “poverty reduction is not a technical problem but . . .” So, we must not merely do good things, i.e. things that actually “end extreme poverty” (which is the Millennium Villages objective), but we must do the very best things and we must do them in the very best manner. As someone from the hoi polloi, I want only to say “I’m not persuaded.” Show me that you can solve the simpler, easier problem before asking us for leave to solve the more complex, harder problem.
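
Comments 5 and 11 above turn on two statistical points: replication assesses the spread of estimates rather than delivering proof, and confounding variables need to be controlled. A minimal sketch of both points together, assuming simulated data and an invented true effect of 2.0:

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_EFFECT = 2.0

def one_study(n=500):
    """One simulated observational study with a single observed confounder."""
    wealth = rng.normal(0.0, 1.0, size=n)                    # confounder
    treated = rng.random(n) < 1.0 / (1.0 + np.exp(-wealth))  # richer enrol more
    outcome = 3.0 * wealth + TRUE_EFFECT * treated + rng.normal(0.0, 1.0, size=n)

    naive = outcome[treated].mean() - outcome[~treated].mean()

    # OLS of outcome on [1, treated, wealth]: adjusts for the observed confounder
    X = np.column_stack([np.ones(n), treated.astype(float), wealth])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return naive, beta[1]

estimates = np.array([one_study() for _ in range(1000)])     # replication
for name, col in (("naive", 0), ("adjusted", 1)):
    print(f"{name:8s} mean={estimates[:, col].mean():.2f} "
          f"sd={estimates[:, col].std():.2f}  (true effect {TRUE_EFFECT})")
```

Across replications the naive comparison is systematically off while the adjusted estimate is centred near the true effect, with a spread that the repeated runs let us quantify; adjustment of this kind only works, of course, for confounders that have been observed and measured.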
