How will we know if the SDGs are having any impact?
As long-time readers of the blog will know, I've been a Sustainable Development Goal (SDG) sceptic since long before they were even agreed. However, I've been hearing a fair amount about them recently – people telling me that governments North and South, companies and city administrations are using them to frame public commitments, and are planning and reporting against them. So maybe it's time to take a second look.
My problem with the SDGs is not with the subject matter – all very creditable – or the number of targets, which bugs some people, but not me. It's the design, and in particular the lack of analysis on how they could have political impact. For me, the key question for any international instrument ought to be traction – will this or that convention, undertaking etc (of which there are hundreds, if not thousands) influence the day-to-day behaviour of governments, sub-national bodies, the private sector or others, and how? And yet, weirdly, this question was never raised during the design of the SDGs, which instead was dominated by adding more topics to the Christmas Tree of SDG issues, and by long discussions about indicators, data and metrics.
Even odder, this question has barely been asked of their predecessor, the Millennium Development Goals. In a classic fudge of causation and correlation, the MDGistas said 'look, extreme poverty has halved, the MDGs are a success!' Yet much of that reduction was down to China, and no-one can credibly claim that it was the MDGs that got the Chinese Communist Party out of bed every morning to transform the Chinese economy.
There are a few exceptions I've managed to collect in response to repeated rants: Alice Evans showed how MDG5 on Maternal Health prompted the Zambian government to take action for fear of reputational damage; May Miller-Dawkins wrote a nice paper on what the SDGs could learn from international environmental agreements in terms of design and traction; Moizza Binat Sarwar studied MDG implementation in five low- and middle-income countries (Indonesia, Liberia, Mexico, Nigeria and Turkey), largely by interviewing staff of relevant ministries. And when Columbia University's Elham Seyedsayamdost surveyed 50 countries' implementation of the MDGs, she found that whether or not the goals were reflected in national plans, they had no apparent influence on how governments spent their money. All great stuff, but hardly a body of research that matches the importance of the issue – a very odd lacuna indeed.
Now the same thing looks to be happening again. Lots of talk of monitoring and reporting against the SDG indicators, and as far as I can tell, no attempt to establish whether the SDGs (rather than other factors) are responsible for changes to those indicators.
So what would an ‘SDG politics watch’ look like?
First, the data: we need to know which bodies are using the SDGs in their planning, budgeting, reporting etc. Maybe we could crowdsource this – governments, NGOs, academics and others could all upload evidence to some kind of WikiSDG, so that we can start to build up a picture of formal commitments.
But that is nowhere near enough – how do we distinguish between lip service and genuine traction? And why do the SDGs get traction in some places and not others? That's where the academics could get stuck in. Alice Evans at Cambridge is issuing an open invitation to supervise a good PhD student on this topic. In her words: 'It's not just about whether governments use the SDGs in their policy documents, but their attitudes towards these goals: do they dwell on issues they previously didn't because they are concerned about regional benchmarking? Do civil servants in meetings spend time looking at SDG 9.ii (whatever that is) and put pressure on provinces to improve this indicator, or just whizz through it? Do parliamentarians raise these issues?'
Why does this matter? Because understanding when and how an international instrument gains traction ought to be the starting point for tweaking the SDGs, and for designing future instruments that work better. If, for example, we were to identify regional rivalry as a key driver of state action, we would put much more emphasis on reporting in regional league tables. If the key to impact is civil society picking up and using the instruments, then (as with the SDGs) it becomes even more pressing to involve CSOs in monitoring and reporting, as well as in the initial design. And so on.
Previous rants on this topic include one paper and lots of blogs: