How do developing country decision makers rate aid donors?
Had a last-minute cancellation of today’s post – ah, Oxfam sign-off, doncha love it? So here’s the most-read new post from the last year.
Brilliant. Someone’s finally done it. For years I’ve been moaning on about how no-one ever asks developing country
governments to assess aid donors (rather than the other way around), and then publishes a league table of the good, the bad and the seriously ugly. Now AidData has released ‘Listening To Leaders: Which Development Partners Do They Prefer And Why?’, based on an online survey of 6,750 development policymakers and practitioners in 126 low- and middle-income countries. To my untutored eye the methodology looks pretty rigorous, but geeks can see for themselves here.
Unfortunately it hides its light under a very large bushel: the executive summary is 29 pages long, and the interesting stuff is sometimes lost in the welter of data. Perhaps they should have read Oxfam’s new guide to writing good exec sums, which went up last week.
So here’s my exec sum of the exec sum.
The setting: ‘Once the exclusive province of technocrats in advanced economies, the market for advice and assistance has become a crowded bazaar teeming with bilateral aid agencies, multilateral development banks, civil society organizations and think tanks competing for the limited time and attention of decision-makers.
Development partners bring an increasingly diverse set of wares to market, including: impact evaluations, cross-country benchmarking exercises, in-depth country diagnostics, “just-in-time” policy analysis and advice, South-South training and twinning programs, peer-to-peer learning networks, “engaged advisory services”, and traditional technical assistance programs. Yet, we know remarkably little about how the buyers in this market – public sector leaders from low and middle-income countries – choose their suppliers and value the advice they receive.’
The Findings: Sorry DFID/USAID, but ‘Host government officials rate multilaterals more favorably than DAC and non-DAC development partners on all three dimensions of performance: usefulness of policy advice, agenda-setting influence, and helpfulness during reform implementation. The Global Fund to Fight AIDS, Tuberculosis and Malaria, the GAVI Alliance, and the World Bank rank among the top 10 development partners on all three of these metrics.’
The Old Boys network is alive and kicking: ‘Host government officials who have previously worked for a development partner usually regard their policy advice as being useful’. Perhaps connected to that, the fear of China sweeping the rest of the aid business aside appears overblown (see fig).
Listening more to developing countries gets more results than force-feeding them through ‘technical assistance’ programmes: a big econometric data crunch found that: ‘Alignment with partner country priorities is positively correlated with the extent to which development partners influence government reforms. This finding suggests that when development partners put the country ownership principle into practice, they usually reap an influence dividend. [Whereas] Reliance upon technical assistance undermines a development partner’s ability to shape and implement host government reform efforts. The share of official development assistance (ODA) allocated to technical assistance is negatively correlated with all three indicators of development partner performance.’
All good stuff, but I really had to dig to extract these messages. And the report misses the biggest of all tricks – where is the league table? If there’s one thing that’s guaranteed to get the attention of policy makers, it’s finding that they are languishing at the bottom of a table of their peers. The data gathered here could easily be combined to produce an overall table of how different aid providers rank in the eyes of their recipients across a number of factors.
A quick exchange of emails with an understandably unhappy (with this review) AidData established that they did in fact produce a league table after all. But it’s on page 82 of the appendices, under the title ‘Appendix E: Supplemental information.’ At this point, the report starts to look like a classic comms case study, and not in a good way.
So here’s the top 20 on what for me is the most interesting question. British readers please note, DFID doesn’t make the cut – it’s at 31, 35 and 40 in the 3 columns.
The good news is that AidData is planning similar exercises in 2016 (any update?) and 2018. Let’s hope they sort out some of the teething troubles with their comms (I’m sure Oxfam would be happy to help), so that they make the most of a great idea, a huge amount of hard work and some brilliant data.
More coverage in the Washington Post.