Want to ensure your research influences policy? Top advice from a Foreign Office insider.

January 4, 2018

The most read posts from 2017, in reverse order. Here’s number 4. Check out the original if you want to read the comments.

The conference on ‘Protracted Conflict, Aid and Development’ that I wrote about on Friday was funded by the Global Challenges Research Fund, a massive (£1.5bn) UK research programme that is funding, among other things, the LSE’s new Centre for Public Authority and International Development, where I’ll be putting in a day a week over the next few years.

Not surprising, therefore, that the topic of ‘research for impact’ kept coming up. A group of us at Oxfam are planning to put a paper together on this (we think we are quite good at it, sometimes), but in the meantime, here are some thoughts based on the conference.

Babu Rahman, a top research analyst at the British Foreign and Commonwealth Office (FCO), gave some commendably clear advice on what government officials need:

‘What we want from research is not “it’s complicated” or “here’s the answer”, but comparative work highlighting a range of possible solutions, showing how particular tools and approaches have worked out. Most useful is understanding where something has/hasn’t worked and why. Then we can apply that to a new situation.

Statistical surveys alone are not that useful – they can generate false confidence or aversion. Multi-disciplinary approaches can be very helpful – even in helping government break down internal siloes. Case studies are really helpful, but limited in generating transferable lessons – there is a perennial risk of recreating experiences from one place in another, as if they’re templates.

Three quick points on how to make research more useful to officials. All within the broad paradox that civil servants are assessed on their ability to simplify complex issues down to the key components necessary to make a decision, whereas academics’ value lies in illuminating complexity:

  1. Make written work short but not dumb. That requires significant intellectual athleticism.
  2. Avoid jargon and assumed knowledge. We don’t need lots of text on methodology.
  3. Structure is really important – go straight to your point in headlines and bullets.

We’re seeing more academics producing abstracts and executive summaries, but they are too often abstracts rather than elevator pitches. Senior officials may have only 30 seconds to get hooked (or not) on what you are trying to say.

Attitude: be accessible – what’s often most helpful is when you can sit down with someone and talk to them about your problem. That requires trust and discretion. You have to be able to find the time and navigate the risk of compromise, and your role may not be acknowledged.

The single greatest risk of getting research into policy is that we won’t read it!’

Great stuff. I had a two-minute pitch at the end of the conference, by which time most people were comatose, so for them (and anyone else who’s interested), here’s what I added to Babu’s advice, as messages for academic researchers:

  1. Above all, assume that no-one is going to read your paper. What else can you give them instead? A good exec sum? A blog? A killer fact?
  2. It’s really hard to retain anything from reading a piece of research that has no overall narrative, but it is often equally hard for the researcher to identify a narrative that does not do violence to the research. Nevertheless, we have to try – road testing possible narratives ought to be something researchers spend a lot of time doing, especially towards the end of their research project.
  3. Timing: how can researchers adapt their messages to the rhythm of decision-making set out in the policy funnel? Think about crises: officials are most open to new ideas immediately after a crisis/scandal or other ‘critical juncture’, so how can researchers spot these windows of opportunity, drop what they’re doing if necessary, and feed ideas to the suddenly interested bureaucrats and pols? It’s not that easy, because crises are also when officials/politicians are busiest and most harassed, so there’s no point in just sticking your old report in the post. The key is to cultivate relationships and trust in peacetime, so that when it all kicks off, you can pick up the phone or drop someone an email offering to help.
  4. A distressing amount of academic research on aid treats practitioners as fools, or knaves, or both. Try assuming that practitioners are actually smarter than you are, but don’t have the time to do all that thinking and reading, so you are providing a valuable service. That should help avoid some of the plagues of straw men (‘aid does X’, ‘aid people think Y’, any sentence involving ‘neoliberalism’) which are so crude and silly that you immediately give up on the paper (as in this really annoying example).

Which leaves me with two big questions to ponder:

  1. Are funders set up to support this kind of research?
  2. To what extent do academic career incentives encourage/prevent these ways of working?

I fear the answer to both is often pretty negative. Thoughts?

5 comments

  1. Research influencing policy? Conflict, aid & development are the usual basics — with the natural backdrop of inequalities, oligarchy and patronage politics/fragile or weak governance (with democratic structures & institutions for a facade) — and corruption, abuses & excesses of power for toppings. Policy decisions are short-term, personality-centered decrees for self-perpetuation (that assuage rumblings with appeasement through exploitation of the governed for spoils). Policy research is intellectual gymnastics reserved for intellectuals/academicians/technocrats. Not negative, just realistic.

  2. Academics have competing incentives that make this sort of engagement challenging, but I think – like in so many sectors/organisations – the main barrier is often time. There’s often just literally too much to do, and teaching, supervision and primary research will (and should) always take the top slots. For ‘incentives’, I’d substitute ‘investment’: are universities willing to invest in the sort of support that academics need to better engage, including having enough of us to create the necessary space? Will the massive GCRF investment be used to support this? I hope so, but perhaps naively.

  3. Should we not also consider who is doing the research, and why they are doing it? If the researcher starts with a purely academic research question, then crafting a narrative that has relevance to policy making will be very difficult (and possibly unnecessary!). If researchers are addressing gaps in policy making, then the research itself will be focused and the policy-relevant narrative so much easier to construct. We may also need to acknowledge the elephant in the room – the hierarchy of research that seems to privilege northern researchers over their southern counterparts, ignoring the now popular activist tenet “nothing about us without us”. Is this not one of the reasons for the “perennial risk of recreating experiences from one place in another, as if they’re templates”?

    I am pleased with the comment that “Statistical surveys alone are not that useful – they can generate false confidence or aversion. Multi-disciplinary approaches can be very helpful – even in helping government break down internal siloes”. Hopefully we are finally seeing the end of privileging quantitative data over qualitative and analytical studies.

    Your Oxfam paper must attack the sacred cows of the (largely northern) research community if it is going to be any different to all those other tomes on research to policy!

  4. A very good and potentially useful post, Duncan. Still, I very much liked Wayan Vota’s comment when the post originally appeared:

    “This post reminds me of a favorite JadedAid.com card: ‘A research grant to find evidence that evidence-based research for policy makers is used by policy makers to make evidence-based policy.’”

    “It goes to the deeper point that even if a policy maker reads evidence (which in and of itself is a massive hurdle), it doesn’t mean they’ll make decisions based on that evidence vs the many other factors they use to make policy.”

    Though there certainly are exceptions, all too many policy makers seek and support research to validate their favored policies and programmes, rather than having policies and programmes flow from the research.

    There also is the problem that research sometimes indicates that not all development problems have development aid solutions. In our “Don’t just stand there, do something” world of development aid, in which jobs, careers, policies and programmes often hinge on insisting that something can be done, this kind of research is all too often rejected or ignored.

