Want to ensure your research influences policy? Top advice from a Foreign Office insider.

October 10, 2017

The conference on ‘Protracted Conflict, Aid and Development’ that I wrote about on Friday was funded by the Global Challenges Research Fund, a massive (£1.5bn) UK research programme that is funding, among other things, the LSE’s new Centre for Public Authority and International Development, where I’ll be putting in a day a week over the next few years.

Not surprising, therefore, that the topic of ‘research for impact’ kept coming up. A group of us at Oxfam are planning to put a paper together on this (we think we are quite good at it, sometimes), but in the meantime, here are some thoughts based on the conference.

Babu Rahman, a top research analyst at the British Foreign and Commonwealth Office (FCO), gave some commendably clear advice on what government officials need:

‘What we want from research is not ‘it’s complicated’ or ‘here’s the answer’, but comparative work highlighting a range of possible solutions, showing how particular tools and approaches have worked out. Most useful is understanding where something has/hasn’t worked and why. Then we can apply that to a new situation.

Statistical surveys alone are not that useful – they can generate false confidence or aversion. Multi-disciplinary approaches can be very helpful – even in helping government break down internal silos. Case studies are really helpful, but limited in generating transferable lessons – there is a perennial risk of trying to recreate experiences from one place in another, as if they were templates.

Three quick points on how to make research more useful to officials. All within the broad paradox that civil servants are assessed on their ability to simplify complex issues down to the key components necessary to make a decision, whereas academics’ value lies in illuminating complexity:

  1. Make written work short but not dumb. That requires significant intellectual athleticism.
  2. Avoid jargon and assumed knowledge. We don’t need lots of text on methodology.
  3. Structure is really important – go straight to your point in headlines and bullets.

We’re seeing more academics producing abstracts and executive summaries, but they are too often abstracts rather than elevator pitches. Senior officials may have only 30 seconds to get hooked (or not) on what you are trying to say.

Attitude: be accessible – what’s often most helpful is when you can sit down with someone and talk to them about your problem. That requires trust and discretion. You have to be able to find the time and navigate the risk of compromise, and your role may not be acknowledged.

The single greatest risk to getting research into policy is that we won’t read it!’

Great stuff. I had a two-minute pitch at the end of the conference, by which time most people were comatose, so for them (and anyone else who’s interested), here’s what I added to Babu’s advice in my messages for academic researchers:

  1. Above all, assume that no-one is going to read your paper. What else can you give them instead? A good exec sum? A blog? A killer fact?
  2. It’s really hard to retain anything from reading a piece of research that has no overall narrative, but it is often equally hard for the researcher to identify a narrative that does not do violence to the research. Nevertheless, we have to try – road testing possible narratives ought to be something researchers spend a lot of time doing, especially towards the end of their research project.
  3. Timing: How can researchers adapt their messages to the rhythm of decision-making set out in the policy funnel? Think about crises: officials are most open to new ideas immediately after a crisis/scandal or other ‘critical juncture’, so how can researchers spot these windows of opportunity, drop what they’re doing if necessary, and feed ideas to the suddenly interested bureaucrats and pols? It’s not that easy, because crises are also when officials/politicians are busiest and most harassed, so there’s no point in just sticking your old report in the post. The key is to cultivate relationships and trust in peacetime, so that when it all kicks off, you can pick up the phone or drop someone an email offering to help.
  4. A distressing amount of academic research on aid treats practitioners as fools, or knaves, or both. Try assuming that practitioners are actually smarter than you are, but don’t have the time to do all that thinking and reading, so that you are providing a valuable service. That should help avoid some of the plagues of straw men (‘aid does X’, ‘aid people think Y’, any sentence involving ‘neoliberalism’) which are so crude and silly that you immediately give up on the paper (as in this really annoying example).

Which leaves me with two big questions to ponder:

  1. Are funders set up to support this kind of research?
  2. To what extent do academic career incentives encourage/prevent these ways of working?

I fear the answer to both is often pretty negative. Thoughts?

6 comments

  1. Hi Duncan,
    Great blog, thanks! It’s particularly good to be reminded (as we get into the final year of ASSAR and dissemination of our findings) to keep it short and simple, and of the importance of showcasing alternative potential solutions or approaches, since it’s not like we have all the answers in any case! The appreciation of multi-disciplinarity is also welcome (phew…), and it’s interesting to see how that can (and should) be used to help break down sectoral silos… I wonder if the experience of multi-disciplinary research teams could shed some light on how this could be done in government contexts – quite an interesting question, I think.
    On to your question: I think that funders are increasingly thinking about and funding research for impact, but the models can still be improved, in terms of, for instance, timing/length: no matter how well you plan, researchers will take up 90% of the time producing the findings, leaving very little time for influencing and impact. I know we are supposed to do influencing all along, but realistically the most impactful findings only come out towards the end, when you start synthesising across all the different pieces of work and get glimpses of how things fit together and can (hopefully) be changed. So is the secret adding more time? Not sure, as that will mean the researchers take even longer! Or a shift in team composition (more knowledge brokers, fewer researchers) as the project matures, leaving time for influencing?
    The other aspect, as you recognise in your second question, is for researchers to be incentivised and educated about the importance of contributing to the solution of problems in the real world, and not only academically. I think that requires a shift in how universities reward and teach students/future academics.
    Anyway… a longer response than I intended. Hope you are well! And keen to keep discussing these issues!

  2. Good to keep this set of issues front and center. Nice summary, but really nothing new here. Fifty-plus years of research on knowledge utilization has repeatedly pointed out these issues (e.g., timing, means of presentation, technical simplicity, etc.) that are correlated with use. But we still continue to see lots of laments over the lack of use of scientific research (and evaluation as well). I believe we need a far better understanding (and hence some empirical research) of how research/scientific evidence/data are used in making policy arguments. We need to better understand the making of policy, that is, research on the decision process itself and how/when/why scientific evidence is embedded in policy argumentation. We argued this in Using Science as Evidence in Public Policy, published by the US National Academies Press and available as a PDF download at https://www.nap.edu/catalog/13460/using-science-as-evidence-in-public-policy

    Thanks for this always stimulating blog!

  3. Great blog, and good to be reminded of these things again though—as Thomas says above—we really ought to know all of this by now. But there’s another side of this issue, which is whether policymakers are articulating their questions well enough for researchers to work out how to help them. Poorly specified questions make it hard for evidence providers to know what is likely to be relevant and to plan for when it’s likely to be needed. I did some work for a govt department a while back: my clients asked me to look at a particular evidence base to which they had access and tell them how policy-relevant it was. This was difficult because they didn’t have clear policy questions, just rather vague goal statements. So all the evidence was broadly ‘policy-relevant’, but there was nothing against which a researcher could concretely plan what evidence they’d need to provide, or how and when to communicate it.

In the UK, Defra and the Food Standards Agency have done a lot of work over the past ten or so years to develop a strategic approach to acquiring the evidence they need (see Defra’s here: https://www.gov.uk/government/publications/evidence-strategy-for-defra-and-its-network and the FSA’s here: https://www.food.gov.uk/science/sci-gov/scistrat). All UK government departments have recently been asked to publish ‘Areas of Research Interest’ (https://www.gov.uk/government/collections/areas-of-research-interest) to help communicate their evidence needs better. Defra’s Chief Scientific Adviser recently said that “From once being a significant sponsor of research, the Defra group has had to change to become a better user of research. It is becoming a customer, rather than a supplier or sponsor of research. As a customer, the Defra group needs to lead the intellectual agenda with respect to what questions should be tackled by research.” (https://ianlboyd.wordpress.com/). But they don’t do this by sitting in a dark room on their own: there’s broad engagement between policy and academic communities to work out what the key policy questions are and how best they can be addressed (see a summary of Defra’s approach here http://www.ksi-indonesia.org/files/1421384737$1$QBTM0U$.pdf).

    Under DFID’s BCURE programme we worked with South Africa’s Department of Environmental Affairs to develop something similar for the Biodiversity & Conservation theme: an evidence strategy (https://www.environment.gov.za/sites/default/files/docs/biodiversity_research_strategy.pdf) and associated implementation plan (https://www.environment.gov.za/sites/default/files/docs/biodiversity_strategy_implementationplan.pdf). The latter gets updated annually in workshops that involve a wide range of government and research stakeholders. And we developed a set of guidelines to help government departments think through what it takes to implement an evidence-informed approach internally (https://www.odi.org/publications/10604-guidelines-and-good-practices-evidence-informed-policy-making-government-department) – which includes more explicitly linking evidence needs to policy priorities.

    Your blog reminds us that there’s still a lot that researchers and other evidence providers can do, but I don’t think it’s their fault alone that the communication doesn’t always work well. I think the BCURE programme was absolutely right in trying to understand how to strengthen the demand for evidence by govt departments and help them make it more explicit. There’s a good deal more work that needs to be done there.

  4. The trick is not avoiding jargon and assumed knowledge, but rather learning the jargon and assumed knowledge of one’s policy world interlocutor. Many academics have only a vague idea of what policy is, let alone when and how evidence might be useful in the process. For example, most scholars think that policy is something that happens in parliaments or ministries (congress/departments), but in my experience researchers can often have a bigger impact on how interventions are designed than on what national-level policies are adopted, especially in the increasingly evidence-free policy world of my home country.

    So the question is one of how to help researchers learn the culture of the world they want to influence. Scholars who research topic X and policy makers/implementers who deal with topic X often speak in quite different vocabularies, so how do we make it possible for scholars to speak policy well enough to be able to make the connection between their research and actions the policy maker might want to take? Scholars could study internally produced documents of the government entity on policy X in order to learn the language. Another way is for governments and implementers to offer more short-term fellowships to scholars, who can study their interlocutors in a more anthropological way, providing useful advice along the way. Another is for governments and implementers to build internal research and evaluation units that know both how to reach out to academics and how to translate research findings into useful, program-relevant lessons.

    But also, to Lucia’s point and to the cartoon at the top of the blog post: yes, academic research teams also need a knowledge broker, especially if they are taking public funds for their research. Someone who understands the nuts and bolts of research, cares about the topic, knows how to craft a narrative, and can tease an exciting finding in 140 characters is to be treasured! Universities and scholarly associations should provide plenty of professional development opportunities for grad students and junior scholars to learn these skills.

  5. This post reminds me of a favorite JadedAid.com card: “A research grant to find evidence that evidence-based research for policy makers is used by policy makers to make evidence-based policy.”

    It goes to the deeper point that even if a policy maker reads evidence (which in and of itself is a massive hurdle), it doesn’t mean they’ll make decisions based on that evidence versus the many other factors they use to make policy.

  6. Hi Duncan,
    Thanks for the post. While I fully agree with your advice and the policy maker recommendations, I believe the article still discusses the “technical” aspects of communicating research or influencing policy with research. However, a more political approach is needed if we want research to actually be used in policy decisions. A few policy makers might read a research piece if it is submitted in the right format, but what happens if public agencies do not have a culture of consuming, analyzing and discussing knowledge in their decisions? What happens with political constraints, etc.? I believe that we need to go beyond these comms tips, which are necessary but not enough (as Louise comments: “we really ought to know all of this by now”), and work with both researchers and policy makers to understand what the entry points are in public organisations for incorporating evidence into decisions.
    Leandro
