How is evidence actually used in policy-making? A new framework from a global DFID programme

November 1, 2017

Guest post from David Rinnert (@DRinnert) and Liz Brower (@liz_brower1), both of DFID

Over the last decade there has been significant investment in high-quality, policy-relevant research and evidence focused on poverty reduction. For example, the American Economic Association’s registry for randomised controlled trials currently lists 1,294 studies in 106 countries, many of which have yielded insights directly relevant to the SDGs; there is an even larger body of qualitative evaluations and research on development interventions. However, as shown again and again (including on this blog), generating relevant evidence alone does not lead to improved development outcomes. In many countries and sectors, weak demand for and use of evidence by decision-makers remains a problem: policymakers, especially at lower levels, are not rewarded for innovation, and they often lack the capacity and/or incentives to use evidence in their decisions.

The UK Department for International Development (DFID) designed the Building Capacity to Use Research Evidence (BCURE) programme in 2013 to address this issue across 12 countries. Over the past four years, BCURE has promoted the use of evidence by decision-makers, which has contributed to improved development outcomes and generated many lessons on what works and what doesn’t in promoting evidence uptake. A key question for BCURE partners, however, has been how to think more precisely about increased use of evidence as a measure of development improvement, and how policymakers, researchers and practitioners should value the impact generated and communicate it effectively. There is a substantial body of literature on the value of research and evidence products (for example see here, here and here, or here on FP2P); there has been much less focus on unpacking and valuing evidence use. Building on the existing literature and experience from the programme, BCURE has developed a typology of evidence use by policymakers – Transparent Use, Embedded Use and Instrumental Use – as defined in the table below. The framework can be used to capture the range of uses of evidence, and offers a starting point for recording and comparing value through three avenues: scope, depth and sustainability.

Figure 1: BCURE Value of Evidence Use Framework

|  | Transparent Use | Embedded Use | Instrumental Use |
| --- | --- | --- | --- |
| Description | Increased understanding and transparent use of (bodies of) evidence by policymakers; no direct action is taken as a result of the evidence. | Use of evidence becomes embedded in processes, systems and working culture. | Knowledge from robust evidence is used directly to inform a policy or programme. |
| Examples | BCURE VakaYiko: several roundtables were held to bridge the gap between research and policymaking on climate change in Kenya and to help decision-makers acknowledge the full body of evidence on climate change in the country. | BCURE Harvard: researchers worked directly with government technicians to create a Report Dashboard designed to serve as a one-stop shop for over 50 indicators deemed crucial for evaluating MGNREGA. | BCURE University of Johannesburg: in South Africa the evidence map, published by DPME, fed directly into decision-making on the White Paper on Human Settlements. |
| Scope: the range of policymakers affected by the reform – how far does its impact reach across actors? | +++ Inter-government: policy teams and country offices | + One local government ministry | +++ National-level policy |
| Depth: how large is the change (for instrumental use this could, e.g., be the population reached)? Is there a substantial change from previous practice? | + No in-depth change in practice directly attributable to BCURE, but a contribution to a set of follow-up actions | ++ Evidence tool created and saw immediate use: 150,000 hits in the first year | ++ (tbc) The Human Settlements policy potentially reaches a large proportion of the population; the overall effect is yet to be determined from M&E results |
| Sustainability: how sustainable is the change in the use of evidence? | + One-off meetings, but with potential to influence further changes in the use of evidence | ++ Evidence suggests this will be a prolonged change | ++ Evidence used for several policy decisions, with potential to influence further policy choices |

One example of value created from evidence use comes from India. A BCURE implementing organisation – Evidence for Policy Design at Harvard Kennedy School – saw that the Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA) was producing vast amounts of data that could be used to monitor programme performance and spur improvements, but that, in reality, ended up sitting in an unused website database. With BCURE funding, the researchers worked directly with government technicians to create a Report Dashboard designed to serve as a one-stop shop for over 50 indicators deemed crucial for evaluating MGNREGA, an example of value generated through Embedded Use as illustrated above (full case study here). This tool saw wide instrumental use as well: over 150,000 hits by over 100,000 users in the year following the launch.

In South Africa, the University of Johannesburg worked through BCURE with the Department for Planning, Monitoring and Evaluation (DPME) to develop a policy-relevant evidence mapping tool. The evidence map gave the department easy, visual access to evidence. Other policy stakeholders benefited from the lessons learnt from the pilot evidence map – an example of the Embedded Use discussed in the table. A range of public sector actors were invited to engage with the methodology (Transparent Use). And the evidence map, hosted by DPME, fed directly into decision-making on a White Paper on Human Settlements – an example of Instrumental Use of evidence.

While many BCURE projects achieved positive development outcomes, even the less successful pilots have helped unpack some of the main challenges and barriers to evidence use. One key lesson is that we need to better understand politics and incentive structures in government organisations when promoting evidence uptake. Even where individual or organisational capacity exists, evidence may remain unused if groups with sufficient bargaining power lack incentives to improve policy outcomes. Promoting the use of evidence thus requires an in-depth understanding of the political economy and a politically savvy implementation approach. Furthermore, BCURE was more successful where it helped coordinate between the often large number of existing research organisations and projects, rather than creating new structures or mechanisms.

The examples and lessons from BCURE illustrate the variety of ways in which we can support the use of evidence by policymakers, and how this in turn can contribute to improved development outcomes. Our framework broadens the focus of evidence use above and beyond ‘instrumental use’, and lessons from this work highlight the importance of thinking and working politically in this area.

We would love to hear from other organisations on how they are tackling the evidence use puzzle. What else could and should be done to get more research and evidence into action in development?

David Rinnert is an Evaluation Adviser and Evidence Broker with the UK Department for International Development (DFID). Liz Brower is an Assistant Economist with DFID. The views expressed in this blog do not necessarily reflect the UK government’s official policies.

All reports, learning documents and other resources from BCURE can be found here.

7 comments

  1. The international literature on program evaluation suggests:

    Early studies on the impact of evaluations were based on directly observable effects, such as a policy change or the introduction of a new program initiative. This form of utilisation is defined as ‘instrumental’ use and refers to situations where an evaluation directly affects decision-making and influences changes in the program. Evidence for this type of utilisation involves decisions and actions that directly arise from the evaluation, including the implementation of recommendations.

    The second type is ‘conceptual’ use which is more indirect and relates to enlightenment or generating knowledge and understanding of a given area. Conceptual use refers to “the use of evaluations to influence thinking about issues in a general way.” Conceptual use occurs when an evaluation influences the way in which stakeholders think about a program, without any immediate new decisions being made about the program. Over time and given changes to the contextual and political circumstances surrounding the program, conceptual impacts can lead to instrumental impacts and hence significant program changes.

    ‘Political’ use involves the legitimising of decisions already made about a program. For example, an evaluation is commissioned with no intention of utilising the evaluation findings, but rather as a strategy to defer a decision. Alternatively, the evaluation follows after decision-making and provides a mechanism for retrospectively justifying decisions made on other grounds.

    ‘Symbolic’ use occurs when an organisation establishes an evaluation unit or undertakes an evaluation study to signal that they are good managers. The actual functions of the evaluation unit or the evaluation’s findings are of limited importance aside from their symbolic public relations value.

    Finally, ‘process’ use concerns how individuals and organizations are affected by participating in an evaluation. Being involved in an evaluation may lead to changes in the thoughts and behaviours of individuals, which then result in cultural and organizational change. An example of process use is when those involved in the evaluation later say, “The impact on our program came not so much from the findings as from going through the thinking process that the evaluation required.”

    1. Hi Scott, thank you for your comment. We did look at other frameworks conceptualising the use of a piece of evidence or research, including, as in the case you describe, the use of an evaluation. The issue we found with this literature, and with the general conceptual/symbolic/instrumental use framework, is that they focus on the production of evidence and the use of one specific piece of evidence. For example: an evaluation of programme x generated value because it was used in a symbolic way to influence policy y. This is really important to think about, for example for evaluation commissioners and evaluators, and David did some work on refining this framework in his paper on the value of evaluation – see here: https://assets.publishing.service.gov.uk/media/57b44793e5274a096b000000/Value_of_Evaluation_Discussion_Paper_-__FinalVersion_for_Publication_03082016_clean.pdf DFID also published a paper on the returns to research products, looking at some of the literature you’re referring to – see here: https://assets.publishing.service.gov.uk/media/57a089aced915d622c000343/impact-of-research-on-international-development.pdf
      However, what we are trying to think about in this blog is how you capture the value of evidence use more generally, not in relation to one specific piece of research or evaluation, and how you can compare this across stakeholders. This is closely related to questions of public sector reform as well as behaviour change. For the BCURE programme, the aim was not to create a piece of evidence and have it used in a conceptual/symbolic/instrumental way; rather, the aim was to work with our partner government departments and organisations to increase their use of evidence in any policy work they undertake, and hence to increase evidence-informed policymaking more generally. We could not find a framework that was well fitted to this perspective. Liz & David

  2. Very interesting and relevant work, thanks!
    However, I was curious to see the MGNREGA Dashboard in action, to better understand what the instrumental and embedded use of evidence would look like, and… what a disappointment!
    Both the MGNREGA Dashboard and the MGNREGA Public Data Portal currently behave as unresponsive, “dead” platforms.
    It is impossible to generate any report or display any graph or information, and the Public Data Portal does not even allow users to select regions! I guess both systems have lost the connection to their backend databases. This could be just a temporary issue – let’s hope so!
    Well-designed systems, however, give users informative messages in such cases, confirming that the issue has been recognised, indicating the likely source of the problem (e.g. “Failed to connect to the database”) and suggesting a course of action (e.g. “Our technicians are working to solve it. You can contact us at XXX”).
    Can’t we do better?
    We want to “build capacity to use evidence” in others, but we cannot even build a system that is responsive, user-friendly and robust?
    I really hope this is just a temporary problem, and that the system normally provides value to the 100,000+ users it claimed in its first year of use. Let’s hope this is not one more example of the thousands of unsustainable “development pilots” that start to deteriorate even before their final report is published.

    1. I’m writing on behalf of Evidence for Policy Design (EPoD) at Harvard Kennedy School. Thanks very much for your comment. You are correct that the dashboards are down at the moment, and it is indeed a temporary situation. In fact, the story behind the gap in service is interesting: it speaks to the hurdles these kinds of programs face, and to the importance for organizations such as ours of carrying collaborations forward rather than just delivering tools and moving on:

      Earlier this year, the Government of India made a push to link all data systems to Aadhaar, the government’s digital biometric system. This system has great promise, for instance in helping identify the poor for government assistance, but it has come under criticism and litigation for possible privacy infringements. A recent court ruling led the government to require upgrades to all management information systems (MISs), and the dashboards have been down since then. Our team in Delhi is now working with government technicians to get the dashboards back up.

      Regarding your comment on the message users get when they try to use the dashboards, you are certainly right that better messaging would be ideal. While the platform undergoes maintenance it is supported by the Ministry’s architecture, and thus uses the existing messaging. We did not come in to overhaul the entire online system; however, we do have frequent consultations with decision-makers and advise them on technical issues. I’ve shared your comments with our team, and they will keep them in mind when giving recommendations to government technicians.

      A final side note that may be of interest: a line of research followed on from the BCURE pilot project that David and Liz describe in their post. We have developed a smartphone app that MGNREGA managers can use to track delays in workfare wages (a very common and highly publicized problem in the program) and identify the lower-level officials responsible. EPoD has worked with the Ministry of Rural Development to pilot this in two of India’s states, and preliminary results show that use of the app actually leads to fewer payment delays. So it appears that when a tool is used – and, importantly, when managers are trained in its use (trainings that we carried out) – it can go the ‘last mile’ and deliver better services to the poor.

      Vestal McIntyre, Evidence for Policy Design (EPoD) at Harvard Kennedy School

  3. Hi Duncan, David and Liz – very interesting post indeed: the BCURE programme has made a significant contribution to the debates around evidence. But Scott makes some good points – I do think it’s hard to improve on Carol Weiss’ typologies (https://tinyurl.com/yaodqejt). For example, I can’t quite see the difference between embedded and instrumental use: they seem to me to be different points on a spectrum rather than different types of use. I was also wondering where the word ‘inclusive’ might fit into the three categories: if we don’t consider whose voices are represented by the ‘evidence’, we increase the risk of defining evidence rather technocratically. Not that any of the projects mentioned in this blog should be accused of that in any way – I know two of them well and know that they went to great pains to be inclusive – but I think there’s a danger of using this typology without considering the issue of inclusivity. In the South African part of the BCURE VakaYiko project, our government counterparts were very concerned with both inclusivity and participation, so we developed five guidelines for departments wanting to use evidence better which highlight both those issues (https://tinyurl.com/y8vl7fky).

    A second point is about the nature of ‘embedding’. There’s a real need to consider how departments’ ongoing business processes (eg planning, budgeting, reporting) influence evidence use across all policy issues, not just the ones we’re interested in. Part of being politically smart (and I’d add being able to go with the grain) is really understanding and working with these deeply-embedded processes. We developed a framework for analysing evidence use in government departments (https://tinyurl.com/y9bttm2y). We then used it to inform our diagnosis of evidence use in the Department of Environmental Affairs (https://tinyurl.com/y78nuklq), which informed how we (a steering group from the department and the BCURE VakaYiko team) then devised a change process.

    So while robust evidence can be embedded within the policy process for a specific issue, what we really want is for the use of robust evidence to become part of business as usual within government departments. For that, we need to understand what business as usual looks like, how it’s currently expressed through these core processes of planning, budgeting and reporting and what that implies for where and how we can develop truly sustainable innovations.

    1. Hi Louise, thank you for your comment. Please see our response to Scott’s points above on the ‘product-focused’ vs ‘use-focused’ framing. On the difference between embedded and instrumental use: embedded use relates to a change in an organisation’s structure and/or processes that allows greater (and more regular) use of evidence in day-to-day decision-making – for example, the launch of an evidence committee that sits weekly to discuss the latest research on policy issue x. Instrumental use would be when, say, at one of those committee meetings a new systematic review concludes that policy x is no longer effective and actually harms population y, and because of this the government decides to change policy x. Much of the focus of the evidence use literature has been on individual cases of instrumental use; we argue, however, that embedded and transparent use can be of equal (if not greater) value. The three sub-categories (scope, depth, sustainability) help determine the scale of the overall development impact. As per our response to Scott, our framework is thus exactly about what you describe in the last paragraph of your comment: how evidence can become part of business as usual in government organisations.
      Your point on inclusivity is well taken, but the framework should be applicable to all sorts of organisations involved in public policy (including NGOs etc). Increased inclusion should also (partly) be reflected in the transparent use category, but we will have another discussion on how to include this issue. Liz & David.

  4. Thanks Duncan, David and Liz for this interesting reflection linking evidence and policy. This is a critical area for further analysis – both for institutions and individuals (capacities and frameworks, as well as the political will to go the extra mile with evidence). Three complementary thoughts:

    The use of evidence at the policy level should not be limited to merely changing policy prescriptions. It must challenge the questions that policymakers ask of reality (poverty and programmes). Embedded in policy prescriptions are the biases of institutions and individuals. Credible evidence must be seen as a tool to transform the institutions and individuals behind policy prescriptions, and skilfully used to craft a lens through which policymakers can re-read reality (poverty and programmes).

    Next, the use of evidence for policy changes must first and foremost be biased in favour of policies at the grassroots level. In the process of collation and aggregation of evidence we tend to lose its ‘credibility’. Perhaps the credibility of evidence would be most demonstrable if it were used at the grassroots level (not just for macro-level policies).

    Finally, as grassroots organisations we should challenge the nature of the evidence that policymakers and leaders of institutions prefer to see. Evidence must be redefined from the point of view of those on the margins, not of those who frequent the corridors of power. For those on the margins, 80% (if not significantly more) of their lives – behaviour, choices, relationships – are moulded by their world view, faith, spirituality, religion and so on. A quick scan of the kind of evidence the humanitarian industry tends to parade suggests that it ignores this reality and consequently arrives at (policy) conclusions that overlook the fact that most of the world’s poor live in a reality moulded by their faith.

    For our evidence to be radically transforming, we need to ask some serious questions about the ‘evidence’ itself – before the questions relating to frameworks, capacities and policy implications.

    Thanks for raising this crucial question/ opportunity.

    jayakumar christian (formerly with World Vision)

