If academics are serious about research impact, they need to learn from advocates

July 4, 2017

As someone who works for both Oxfam and the LSE, I often get roped in to discuss how research can have more impact on ‘practitioners’ and policy. This is a big deal in academia – the UK government runs a periodic Research Excellence Framework (REF) exercise, which allocates funds for university research on the basis of both its academic quality and its impact. Impact accounted for 20% in the last REF, which reported in 2014, and I’m told could get an even greater weighting in the next one, in 2021. That means hundreds of millions of quid are at stake, so universities are trying to get better at achieving and measuring impact (or at least looking like they are!) as they start to prepare for the next round of submissions.

And they have a way to go, based on my recent experience of listening to pitches from a range of researchers. Some interesting patterns emerged, both in terms of weaknesses and ways to address them.

First up, many academics fall into the classic mistake of equating frenetic activity with impact. Long lists of meetings attended, papers published, speeches given, etc, do not impact make. I think they should start from a different place – how would you set about convincing an intelligent, fair-minded sceptic of the impact of your research?

That might be a challenge because the way a lot of researchers think about their impact seems pretty rudimentary. Finish the research, publish the papers, run a few seminars, produce a policy brief and (at the less stuffy end) bash out some social media and voila! Unfortunately, ‘blog it and they will come’ is often not a very convincing influencing strategy.

A popular alternative in REF submissions is the adviser/consultancy model – ‘the UN/Government asked me to present/be on a panel/draft their guidelines – now that’s impact’. A bit more convincing, but both approaches would benefit from a more explicit theory of change, running through some of the following issues:

Stakeholder and Power mapping

  • For the change you are advocating, who are the likely champions, drivers and undecideds whom you can seek to win over?
  • Who has the power over the decisions you are trying to influence? They are probably going to be your main targets (see the sketch after this list).
  • Who/what in turn influences those targets? Is it evidence (if so, what kind?), or something else entirely, like the identity of the messenger?
  • How do you get targets to be aware of and interested in your research in the first place (since both decision makers and practitioners live in the land of TL;DR)? Have you tried to involve them in it, for example by asking them to be on a reference group, comment on drafts or be interviewed for it? Much better than just adding your paper to their reading pile.
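
To make the mapping concrete, here is a minimal sketch in Python of the kind of power/support grid advocacy teams use for this exercise. The stakeholders, scores, thresholds and category labels are purely illustrative assumptions, not drawn from any real campaign.

```python
# A toy stakeholder power/support grid. All names, scores and
# thresholds below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    power: int    # influence over the decision, 0 (none) to 10 (decisive)
    support: int  # alignment with your proposal, -5 (blocker) to +5 (champion)

def strategy(s: Stakeholder) -> str:
    # High-power stakeholders are the main targets; their current
    # alignment decides whether you equip, persuade or counter them.
    if s.power >= 6:
        if s.support > 1:
            return "champion: equip with evidence and ammunition"
        if s.support < -1:
            return "blocker: understand and counter their objections"
        return "undecided target: top priority for engagement"
    return "low power: potential ally, reach via coalitions"

stakeholders = [
    Stakeholder("Finance ministry", power=9, support=0),
    Stakeholder("Sector NGO coalition", power=4, support=4),
    Stakeholder("Sceptical columnist", power=7, support=-3),
]
for s in sorted(stakeholders, key=lambda s: -s.power):
    print(f"{s.name}: {strategy(s)}")
```

Even a toy grid like this forces you to name your targets and say explicitly why each one matters, which is most of the value of the exercise.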

Dynamics:

Is the change you seek likely to be smooth or spiky? Some changes are largely crisis driven, in which case your influencing strategy will have to get ready in advance, then identify and make the most of crises or other ‘windows of opportunity’. As Pasteur said, ‘chance favours the prepared mind’. This is antithetical to the normal rhythm of research – a steady, high pressure grind of papers and presentations, largely oblivious to events out there in the world.

Insider v Outsider strategy

Insiders prize being ‘in the room’, and see themselves as deeply engaged in policy formulation. Outsiders are more interested in constructing alliances, working with (and providing ammunition for) activists and civil society organizations, and talking to the media. But both need to think through their influencing strategies. On the insider route, are the people in this particular room really the people making the decisions? Which invitations do you say no to (if only to avoid death by consultation)? For outsiders, what are the alliances you need in order to reach and influence the decision makers on this particular issue? In both cases, what kind of research (both in content and presentation) is most likely to get people’s attention?

Attribution:

Both approaches struggle with attribution. According to one insider, ‘the ministry is a black box – you’re invited to speak. They say a polite thank you. That’s it.’ How do you know if you changed anything? The challenges with outsider strategies are different – in an alliance of a dozen CSOs, thinktanks and universities, even if something changes, how do you apportion credit?

The efforts at attribution are currently pretty crude: count the citations, and if you think your research has helped someone, beg them for an email saying just how brilliant you are and how much impact you have had. Better than nothing, but pretty dubious as a sole source of evidence (right up there with job references in terms of rigour…).

There’s a lot that academics could learn from the MEL (monitoring, evaluation and learning) teams in the aid agencies here – they have some incredibly smart people working on this. I’ve seen Oxfam’s Effectiveness Reviews really invest in trying to measure impact, whether by constructing a plausible control group for comparison through propensity score matching, or by comparing and ranking different causal factors through process tracing. The academics I’ve talked to seemed entirely unaware of these kinds of methods. Given the multi-million pound benefits of getting it right, there may be a case for universities to outsource (and pay for) this, either to ‘impact partners’ such as NGOs with more experience in the area, or to third party specialist organizations that could accompany researchers in designing for impact, and then measure the results more rigorously.
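
To make the first of those methods concrete, here is a minimal sketch of propensity score matching in Python. The column names (`treated`, `outcome`), covariates and the simple 1-to-1 nearest-neighbour matching are illustrative assumptions, not Oxfam’s actual procedure.

```python
# A minimal propensity score matching sketch: estimate the effect of a
# programme by comparing each participant with the non-participant whose
# estimated probability of participating is closest. Column names are
# illustrative assumptions, not from any real Oxfam dataset.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_effect(df: pd.DataFrame, covariates: list[str]) -> float:
    # 1. Model the probability of treatment from observed covariates.
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df["treated"])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df["treated"] == 1]
    control = df[df["treated"] == 0]

    # 2. Match each treated unit to the control unit with the nearest
    #    propensity score (1-to-1 matching, with replacement).
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched = control["outcome"].to_numpy()[idx.ravel()]

    # 3. Average treatment effect on the treated: mean difference between
    #    treated outcomes and their matched control outcomes.
    return float((treated["outcome"].to_numpy() - matched).mean())
```

Process tracing works differently: rather than constructing a counterfactual comparison group, it weighs the evidence for and against each candidate causal explanation of an observed change.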

Then there’s comms. The default tone is neutral and cautious – goals were to ‘shift’ this or ‘support’ that, without ever specifying why or towards what. That feels more like avoiding mistakes than communicating a message. One alternative would be to come up with a clear mission statement – ‘we use evidence to change public opinion so that X happens’. If that’s too close to a credibility-damaging move into overt campaigning, academics should at least be able to identify their USP (unique selling point) – e.g. ‘I’m the only person in the UK who knows what is going on in these particular far-off places’ – and state clearly what their research is about, in such a way that normal people and REF panels can understand it.

That sounds like an extended moan, but perhaps the most striking thing in the conversations I’ve had is how much impact appears to be taking place anyway – I came away with the impression that a bit more analysis and explicit discussion of assumptions and influencing strategies could be an incredibly productive exercise for universities seeking to turn research into real impact.

Update: Check out the comments for a couple of fairly fierce pushbacks from people in academia who think I am way out of date on this – would be interested in other people’s thoughts.

14 comments

  1. Nice one Duncan. Just wondering: if academics were indeed serious about research having an impact, they would perhaps be asking a lot of questions *before* they even undertook the research. The way I see it is that you have your research question, null hypothesis, research methodology etc all lined up, and then you start asking yourself questions (and of course answering them). Questions like:

    Once (& if) I have evidence to accept my hypothesis, what am I going to do with the findings?
    Who would be a consumer and user of these findings?
    What policy / practice / idea / belief would my findings change?
    What would be the scale at which change would happen?
    Is research the right / only / cost-effective way of achieving that change?

    Perhaps then a lot of research, especially the research-for-research sake, would not happen at all.

  2. Duncan – As someone who has worked with you on these issues and as a fan of your blog, I admit to being somewhat frustrated by this post. It is a great example of just the sort of critique of academia I wrote about recently for Times Higher Education: https://www.timeshighereducation.com/blog/academics-ivory-towers-its-time-change-record . Perhaps as a critical friend of development scholars it is time to change the record? I made the transition from INGO advocacy strategist to research institute communications and impact strategist a few years ago now. Like you I have experience from both sides of the fence, but my own analysis seems entirely different to yours. I think the REF is the wrong thing to focus on. Engaged development academics all agree it is a crude tool.

    As for academics needing to learn from charity advocates – the Rethinking Research Partnerships network recently concluded a couple of years of work that brought together INGOs and researchers to look at how we can have more impact. Their conclusions did not chime with your picture of smart NGO types versus bumbling academics. Instead they suggested that we have more in common than we realise and need to persuade donors to create more spaces where we can collaborate – a message you have in the past supported: https://rethinkingresearchpartnerships.com/

    I was also surprised by your suggestion that academics do not have adequate impact evaluation tools but NGOs are great at this. Is this why it is independent academics and their institutions that are brought in to evaluate development programmes that are claiming big results? And as a former campaigner, do not get me started on the dicey attribution claims made by NGOs. It was not until I arrived at IDS that I fully understood just how simplistic and misleading some of the advocacy evaluation frameworks I had been part of were. Evaluative researchers in places like the Centre for Development Impact really do deserve some recognition. They are at the cutting edge of these debates on theories of change and the tensions that exist between results-based approaches and adaptive programming.

    As for communications – I think you rather enjoy perpetuating this out-of-date image of academics who publish academic papers and then hope for the best. Of course we can all come up with examples of hopeless researchers, just as we have all witnessed poor advocacy campaigns. Nonetheless, the broader trend is that research institutions, centres and consortia have developed highly sophisticated knowledge exchange, communications and impact strategies. They may not be public facing like so many charity campaigns, but that is largely down to a thorough understanding of who their audiences are, and to impact planning (yes, researchers do this, and development academics have pioneered new methods in this area).

    It would be more productive if you could use your considerable influence to identify this work as advocacy by another name. This term is still toxic amongst academics and research donors, which is very frustrating. I believe that the idea that there is a hard choice between advocacy and research quality is a false dichotomy. So rather than sticking the boot into the poor old researchers (and those of us working with them on impact), perhaps you could consider a message around the need for mutual learning between the NGO and academic sectors, and help us make the case for research partnerships focused on progressive economic and social change.

    James Georgalakis, Dir of Communications and Impact at IDS and co-editor of The Social Realities of Knowledge for Development http://www.theimpactinitiative.net/socialrealities

    1. Interesting challenges James. Couple of responses:
      – I’ll change the record when the academics I hear from change theirs! It’s the usual problem, which applies also to INGO influencing and impact measurement – do you look at the average, or the positive outliers? On this occasion, I was pretty shocked at the rudimentary level of understanding of impact, even after all these years of REF conversations.
      – maybe the relative strengths are different between academics and INGOs for coming up with an influencing strategy and measuring impact (and of course you can’t generalise – I’m sure none of this applies to IDS!). My impression though, based admittedly on a handful of conversations, was that the academics in question could learn from NGOs in both areas.
      – of course the REF is the wrong thing to focus on, in an ideal world. But in this world, for many UK institutions (though not IDS), it is an absolutely crucial driver of change. So we most definitely should focus on making sure it does more good than harm.
      – I totally agree that ‘Of course we can all come up with examples of hopeless researchers just as we have all witnessed poor advocacy campaigns.’ and that we should be working together to combine the strengths of both academics and practitioners. My frustration is that it is proving such a long haul on both sides!

  3. Hear, hear James. I similarly felt frustrated in reading the post. The world has moved on, or at least the world I and many of those I know live in. It is no longer academics versus NGOs/practitioners but very much working together, identifying commonalities and comparative advantages, and collectively wanting to ensure rigorous research that will achieve change; and in doing so, recognising the nuances of what impact is and how it can be achieved – including the need to start before the research even begins.

    I’m sure academics can learn from advocates, and vice versa. It would be great if universities had more resources to invest in communications and advocacy specialists who can work together with academics to provide support throughout the research process. Having benefited from such support, I know what a difference it makes – of course academics might not always be best placed to take on this role themselves, for a variety of reasons.

    If you want to understand the challenges that academics face in reporting impact, try and fill in ResearchFish (but don’t ask me why it is called that) or any of the other reporting that is expected of us. And more generally, understand where we are coming from. So let’s be more careful in identifying the source of the problem, and hence the solutions, rather than making crude and unhelpful generalisations.

    I might be biased of course, as I work with James on the Impact Initiative: http://www.theimpactinitiative.net/ – but perhaps that gives us more insight into the challenges that researchers face.

    1. Thanks Pauline, this all reminds me of a colleague at Oxfam who says one of the problems she deals with is that in the aid business, 90% of the attention and conversation surrounds 10% of the work, often the best, most innovative work. The other 90% is much more same old, same old, but those of us looking at the 10% think everything has moved on. Maybe you and James are in the 10%, in which case all power to you, but I fear there is a big chunk of the unreconstructed, which is what I saw in my meetings.

      Not quite sure how you read my post as ‘academics versus NGOs/practitioners’ – apart from me being critical, of course. I totally agree that there is a lot we can learn from each other, and we need to get on with it!

      Anything I can read about ResearchFish or other ways academia measures impact? Is the REF using these, or is it likely to continue to throw the ball into researchers’ courts, telling them to come up with their own ways to measure impact?

      1. Hi Duncan – Thanks for the response. ResearchFish is on-line reporting that researchers have to use for reporting to research councils (such as ESRC), with the heading: ‘Leading the world in Research Impact Assessment’ https://www.researchfish.net/why-report
        (there is a link on this page to a review by King’s College but unfortunately I can’t open it on my computer).

        It takes some time to complete (it is not very user-friendly, and was set up for use in the sciences, not all of which translates well to other purposes – though it has improved in response to feedback). It is not entirely clear how some of the information that is requested is used, including for assessing impact. ResearchFish isn’t directly linked with REF, but I guess there may well be an association in some way. The reporting is associated with the Research Councils’ definition of impact (conceptual, instrumental, capacity building), but as you indicate, there is no clear guidance on how to measure this.

        Within universities there are now increasing efforts to capture impact through repositories, and encouragement for academics to keep records of this. As you note, this is more appealing to some than others – but I think it is extending beyond the 10%, in part because of expectations of doing so.

        I agree with you that the challenge is knowing how to report on impact effectively. Researchers are encouraged to show how their evidence has led to change in policy or practice, and so are pushed to claim attribution even though we know this is problematic.

        It would be great to discuss with you further – and perhaps come up with some proposals that would be helpful for academics and for the REF – with the aim ultimately to promote real evidence-informed change.

        Pauline

          1. The REAL Centre does a fantastic job of sharing their research with practitioners and entering into discussions with them – the @REAL_Centre conferences are a good example of this. As a practitioner, I find them extremely useful and informative – thank you!

          That said, the post is (in my opinion) relevant, as the REAL Centre is an exception to the rule. There are an awful lot of (ed) research projects out there – e.g. iPads in [add remote, impoverished area where the buildings are not electrically grounded here] – which almost religiously equate activity with impact, very much to the long-term detriment of the wider communities of their research subjects.

          In the case of the project mentioned above – which has been touring the conference circuit for a straight year and a half to much applause – any attempt at politely suggesting that TCO (total cost of ownership) be taken into account is generally sniffed at and accompanied by a generic remark along the lines of “iPads are a revolutionary technology and – according to our preliminary research – we have reason to believe that they are going to transform education in country X”. This is usually followed by some OLPC bashing which completely ignores the fact that, unlike the iPad, those devices were built for dust and durability; similarly to the iPad, though, the cost was generally too high and the financial model was not sustainable (I’ve never managed to get to that point before having the mic confiscated). It is entirely possible that someone will get another research grant out of it by demonstrating an increase in learning outcomes (since the students in the pilot study went from having no books to using interactive iOS apps daily, I’d say this is a given… and coincidentally so do their preliminary research findings :-)).

          Unless Government Donor X and Research Institute Y intend to support said program from now until the end of time, there will be an impressive collection of discussion papers, blog posts, PPT slides and frequent flyer points to show for it but no long-term impact. This could have been established before the concept paper was drafted if the suggestions in the blog post above had been taken into consideration.
          Unfortunately this is not an isolated example…

          Best,

          Benita

          1. That is very kind of you, Benita.

            I do agree that there is still a lot more to do, and you raise important points. My feeling is that there is at least an increasing desire amongst researchers to ensure their evidence is useful for policymakers and practitioners, but we are all still learning how best to do this. I realise I might be over-optimistic, but I am hopeful too.
            Pauline

  4. Duncan: Have you checked out the work of Jonathan Fox’s Accountability Research Center (https://jonathan-fox.org/) at AU and Lily Tsai’s GovLab (http://web.mit.edu/polisci/research/govlab.html) at MIT? Their websites are still in formation, so there’s not much information there – but their models of working seem far evolved from your descriptions above, and they have some impressive partnerships with practitioners. I’d encourage you to talk with them, or encourage each to write a piece for your blog.

  5. Hi Duncan. No I don’t think you are out of date. UK academia is huge and there are areas of enlightenment as well as areas of unreconstructed tradition.

    I am a big fan of initiatives that encourage both universities to do more useful research and researchers to do what is necessary for it to be used effectively. REF, despite its flaws, has been contributing to a change in attitudes in universities. There are some interesting development stories to be found on the UKCDS page on the global impact of research (http://www.ukcds.org.uk/the-global-impact-of-uk-research ), and the KCL REF2014 Analysis report identifies some of the factors contributing to impact (http://www.hefce.ac.uk/pubs/rereports/Year/2015/analysisREFimpact/ ).

    While undoubtedly some university academics could learn much from some aid agency MEL teams, the learning could go the other way too. A number of approaches to assessing outcomes and impact (such as the PSM mentioned in the post) originate from academia, where they have been used for decades. Academics also tend to be more cautious when talking about the impact of their work, veering away from being overly optimistic whilst explicitly highlighting methodological limitations; things that NGOs could do more of. I have learnt a lot from my own work with academics in Oxford, Cambridge, Durham, Hull and Hong Kong Polytechnic University working on earthquake resilience in the rather unfortunately named Earthquakes Without Frontiers project (http://ewf.nerc.ac.uk/ – though the out-of-date website rather reinforces the view that academics do not communicate well).

    The two things that have struck me about this work (which are more or less invisible in the website) are the very long term view that academics take and the very diverse pathways to impact. EWF is a global partnership of researchers who have been working together in various ways and in various places for over 20 years. Many of the senior UK academics’ early students now run earthquake and seismology organisations all over the world. There has been genuine co-analysis and co-production of results over decades, the results of which are now embedded in earthquake resilience systems globally.

    They have of course published a vast number of academic papers which have hugely influenced the discourse. The publications page of the EWF site is impressive, and that’s just a partial view from one recent project (http://ewf.nerc.ac.uk/publications/). They have also organised and run training courses for a huge number of young earthquake scientists worldwide, including one at ICTP Trieste in June 2013 (http://indico.ictp.it/event/a12186/material/5/0.pdf), and high-level conferences bringing together global experts to share global lessons in countries where there is still more interest in prediction than in practical measures to increase resilience (eg in Kazakhstan in 2016: https://www.odi.org/publications/10657-earthquake-science-and-hazard-central-asia-conference-summary).

    While some of the researchers were initially averse to what they consider rather simplistic instrumental approaches to increasing research impact, they are keen to adopt those that they see are helpful to achieving both research and impact objectives. We are currently trying out the GeoHazards earthquake scenario approach in China (see the GeoHazards Kathmandu Valley Earthquake Risk Management page: http://www.geohaz.org/kathmandu-valley-earthquake-risk-managem). While not shouting about it from the rooftops (which would no doubt scare the scientists), this approach pretty much embodies all of the principles of transdisciplinarity which are increasingly recognised as essential for research impact: co-identification of the problem, collaborative research, co-analysis of the results and co-production of recommendations. And doing this in China with the China Earthquake Administration and the Ministry of Civil Affairs pretty much guarantees immediate uptake of the results into policy and practice.

    We are even planning to develop a theory of change to underpin the next evolution of the project. Though in good academic tradition, there has been a lively debate about whether it is in fact a theory of change or a theory of action.

  6. We need far more sensitive measures than tools like the REF to determine whether research for development has an ‘impact’ on policy and practice. At IDRC we helped develop a tool for assessing research quality that, hopefully, moves the conversation towards metrics other than those traditionally involved in judging the value of academic research. See https://www.idrc.ca/sites/default/files/sp/Documents%20EN/Research-Quality-Plus-A-Holistic-Approach-to-Evaluating-Research.pdf

  7. Hi Duncan. Very interesting post. Are you aware of the research by Maarten Kok from the Vrije Universiteit in Amsterdam on the impact of health research? He has developed both an interesting theory on how research impacts practice and a methodology to study it, called ‘contribution mapping’. https://health-policy-systems.biomedcentral.com/articles/10.1186/1478-4505-10-21
    I am interested in the impact of research at universities of applied sciences, where (at least here in The Netherlands) the impact on practice is the primary goal of the research, while the contribution to science is more of a secondary goal. This leads to all sorts of interesting research designs in which increasing impact is a major consideration, including action research and design-based research approaches.

  8. Hi Duncan, I liked your post (clearly I’m behind in the reading). I also know, as a research practitioner having moved from academia to UNICEF (a long and strange path) and now the Global Partnership to End Violence, that impact happens in different ways. I wanted to call to your attention Sarah Morton’s good work on a multi-country study I have been working on, which had lots of very much invisible but potent impact around the world. She managed to measure it! See: https://www.unicef-irc.org/article/1668/ Morton shows how a light development touch and strong national ownership created institutional normative change. It’s called the Research Contribution Framework. Let me know what you think. And thanks for all your brainiac posts.
