Are Academics really that bad at achieving/measuring Impact? Summary of last week’s punch-up
Last week’s post about academics struggling to design their research for impact certainly got a reaction. Maybe not a Twitter storm, but at least a bit of a squall. So it’s time to summarize the debate and reflect a bit.
The post annoyed some people in the ‘research for impact’ community, because it was basically saying nothing much has changed. ‘The world has moved on’ said Pauline Rose of Cambridge University, ‘It is no longer academics versus NGOs/practitioners but very much working together, identifying commonalities and comparative advantages, and collectively wanting to ensure rigorous research that will achieve change; and in doing so, recognising the nuances of what impact is and how it can be achieved – including the need to start before the research even begins’. IDS’ James Georgalakis was more pithy: ‘Time to change the record’. Both reminded us that there is a lot going on and (‘fessing up here) I’m not across a lot of it. For example, the Rethinking Research Partnerships network and the Impact Initiative (for whom I’ve even written a piece on academic-NGO collaboration).
But their protestations reminded me of a recent conversation with an Oxfam evaluation guru, who says one of the problems she faces is that in the aid business, 90% of the attention and conversation surrounds 10% of the work, often the best, most innovative bits. The other 90% is much more same old, same old. The trouble is that those of us looking at and/or contributing to the 10% conclude, like Pauline, that everything has moved on, when a lot of it hasn’t. It was the shock of being exposed to some bog-standard stuff that led to the post – a small, doubtless unrepresentative sample, but perhaps more representative than the view from inside the 10% bubble. For the record, I get the same sense of frustration as James and Pauline when I hear people slagging off NGOs for being unreconstructed fools, knaves or often both, and it’s probably at least partly for similar 90/10 reasons (although it may just be that the work is rubbish).
Several commenters sounded warning bells about language. According to ODI’s Josephine Tsui ‘Whether you use the term advocacy, influence, or engagement, different groups will have an adverse reaction depending on their political situation. However the political process is the same, you have something to say and you need to target who you’re talking to.’
So much for the blog. The Twitter traffic was both more supportive (lots of people recognizing the depressing portrait painted in the post) and more deeply critical.
The University of Chicago’s Chris Blattman weighed in with perhaps the deepest critique. In a series of tweets, he said:
‘This is the wrong way to think about research impact. It’s hardly this direct. The best research changes the intellectual conversation. It changes how textbooks are written. It changes what young scholars do next. It changes how undergraduates and MAs learn the discipline. The UK Government obsession with measuring policy impact of research will only help their universities fall behind. Academics will manage what is measured and turn into think tanks rather than make long term investments and take risks. Sad!’
Which reminded me of a painfully brief meeting with DFID’s head of research some years ago: ‘I’ve come to talk about Research for Advocacy’ I declared. ‘That’s an oxymoron’ he replied, ‘you either have research, or you have advocacy, but you can’t do both without contaminating the research’. Someone obviously forgot to tell the REF.
So, should academics deliberately seek to influence policy and beliefs, and try to ‘count what counts’, including the hard-to-measure stuff, or push back against the whole effort to oblige them both to achieve and to measure impact? Just to prove that I haven’t been entirely seduced by academic preferences for nuance and shades of grey, it’s time for a poll.
But in the interests of nuance (ahem…), you can vote for more than one.