How can we rate aid donors? Two very different methods yield interesting (and contrasting) results

Two recent assessments of aid donors used radically different approaches – a top-down technical assessment of aid quality, and a bottom-up survey of aid recipients. The differences between their findings are interesting.

The Center for Global Development has just released a new donor index of Quality of Official Development Assistance (QuODA), with a nice blog summary by Ian Mitchell and Caitlin McKee.

‘How do we assess entire countries? One way is to look at indicators associated with effective aid. The OECD donor countries agreed on a number of principles and measures in a series of high-level meetings on aid effectiveness that culminated in the Paris Declaration on Aid Effectiveness (2005), the Accra Agenda for Action (2008) and the Busan Partnership Agreement (2011). Our CGD and Brookings colleagues—led by Nancy Birdsall and Homi Kharas—developed QuODA by calculating indicators based largely on these principles and grouping them into four themes: maximising efficiency, fostering institutions, reducing burdens, and transparency and learning.’

QuODA’s 24 aid effectiveness indicators were then averaged to give scores to the 27 bilateral country donors and 13 multilateral agencies.
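For the mechanically minded, here's a rough sketch of that kind of aggregation – standardise each indicator so different units are comparable, then average across indicators. The indicator names and numbers below are invented for illustration; this is not CGD's actual code or data:

```python
import pandas as pd

# Hypothetical scores on three of the 24 QuODA-style indicators for
# three donors -- illustrative numbers only, not real QuODA data.
scores = pd.DataFrame(
    {
        "share_to_poor_countries": [0.62, 0.45, 0.71],
        "untied_aid": [0.90, 0.55, 0.80],
        "iati_reporting": [0.75, 0.30, 0.85],
    },
    index=["New Zealand", "Norway", "IDA"],
)

# Standardise each indicator as a z-score so the scales are
# comparable, then average across indicators for an overall score.
z = (scores - scores.mean()) / scores.std()
overall = z.mean(axis=1).sort_values(ascending=False)
print(overall)
```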

I’d pick out two findings from this exercise:

  • Big is (on average) better – a line of best fit suggests aid quality is higher for governments that commit a higher share of their national income to aid (see the toy regression sketch after this list). But there are outliers – New Zealand is small but beautiful; Norway is big and ugly.

  • The best performers are a mix of bilaterals and multilaterals, although there’s a cluster of multilaterals just below the kiwis at the top.
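Here's the toy regression promised above – a line of best fit on made-up numbers, just to show what the 'big is better' claim amounts to statistically (nothing here is CGD's data):

```python
import numpy as np

# Illustrative only: aid volume (ODA as % of GNI) against an overall
# quality score for five hypothetical donors.
oda_share = np.array([0.27, 0.32, 0.70, 1.02, 0.99])  # % of GNI
quality = np.array([-0.2, 0.0, 0.3, 0.5, 0.1])        # avg z-score

# Ordinary least squares line of best fit:
# quality ~ slope * oda_share + intercept
slope, intercept = np.polyfit(oda_share, quality, 1)
print(f"slope={slope:.2f}, intercept={intercept:.2f}")
```

A positive slope is the 'big is better' pattern; outliers like New Zealand and Norway would be the points sitting far from the line.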

But there’s another, completely different, approach – ask people on the receiving end what they think of the donors. AidData have been doing this for years, so I took a look at their most recent ‘Listening to Leaders’ report.

The report is based on a 2017 survey of 3,500 leaders (government, private sector and civil society) working in 22 different areas of development policy in recipient countries.

‘Using responses to AidData’s 2017 Listening to Leaders Survey, we construct two perception-based measures of development partner performance: (1) their agenda-setting influence in shaping how leaders prioritize which problems to solve; and (2) their helpfulness in implementing policy changes (i.e., reforms) in practice. Respondents identified which donors they worked with from a list of 43 multilateral development banks and bilateral aid agencies. They then rated the influence and helpfulness of the institutions they had worked with on a scale of 1 (not at all influential / not at all helpful) to 4 (very influential / very helpful). In this analysis, we only include a development partner if they were rated by at least 30 respondents.’ Sadly New Zealand (top on QuODA) didn’t make the cut in the AidData analysis.
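That 'at least 30 respondents' filter is easy to picture in code. A minimal sketch with made-up ratings (the donors and numbers below are mine, not AidData's) shows both the ranking step and why a small donor can drop out of the analysis entirely:

```python
import pandas as pd

# Toy ratings: one row per (respondent, donor) pair, with helpfulness
# rated 1 (not at all helpful) to 4 (very helpful). Made-up data.
ratings = pd.DataFrame(
    {
        "donor": ["World Bank"] * 40 + ["US"] * 35 + ["New Zealand"] * 12,
        "helpfulness": [3] * 30 + [4] * 10 + [3] * 35 + [4] * 12,
    }
)

# As in the report, keep only donors rated by at least 30 respondents,
# then rank the rest by mean helpfulness.
stats = ratings.groupby("donor")["helpfulness"].agg(["count", "mean"])
ranked = stats[stats["count"] >= 30].sort_values("mean", ascending=False)
print(ranked)  # New Zealand is filtered out: only 12 ratings
```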

On this exercise, the multilateral organizations clean up, with the US the top-rated bilateral donor at number 8 on helpfulness (the much criticised European Union comes in at number 5).

I would love to hear views on why this might be. Off the top of my head, a couple of possible explanations:

  • Multilateral organizations are on average bigger, so have more presence in the lives of people on the receiving end
  • The surveys were measuring different things – aid quality v support for policy reform
  • Multilateral organizations may do better on the soft stuff – technical assistance, but also partnership, dialogue, etc.

Thoughts?


Comments

5 Responses to “How can we rate aid donors? Two very different methods yield interesting (and contrasting) results”
  1. David Grocott

    Different questions, different answers.
    As for why multilaterals come top in the AidData polling…maybe it’s because:
    – Multilaterals tend to be less…discerning with how they spend their money, making them more ‘helpful’ than bilateral donors (so long as the govt signals their support for whatever reforms the multilateral in question is interested in – see below).
    – Multilaterals have a greater ability to punish those who refuse to be ‘influenced’, which makes them more influential in ‘shaping how leaders prioritize which problems to solve’. Ever been on the end of a negative Article IV consultation?

  2. Gideon Rabinowitz

    Hi Duncan, great blog. Fascinating that the much maligned multilaterals turn up at the top of both of these rankings. I think it is relevant to highlight that research (e.g. looking at the drivers of aid allocations) suggests that multilateral aid is less driven by the narrow political agendas of donors. This may be partly because, in operating at greater arm’s length from donor politics, they can take a bit more risk and deliver more closely in line with aid effectiveness principles. A good example of this is the EC, who still do lots of budget support, almost as if EU member states have outsourced such operations to them.

  3. My vote goes to AidData. This is qualitative research that matters. And we should take note as donors that perhaps we could do more to support the UN in passing on messages about human rights and governance, health, and education.

    They could be stronger in squaring the findings with data. I should dig into their methodology more; it seems to me it is really not reasonable to put different kinds of donors in the same bag. It even looks like, in terms of influence per dollar, small donors might actually get more bang for their buck. I wonder whether it would be different if loans and grants were treated differently.
    I also like how their document oozes prudence. They spell out the caveats.

    The Quality of Aid document is very quick to jump to conclusions. The title claims to measure aid quality, but the different indicators are proxy indicators at best. Goodhart’s law in action. Really, the size of a project is a measure of its efficiency? In our office, small projects are now relegated to the fringe, as they are not efficient. Governance is now a no-no for a few years, as the budgets are too low. Could there be a link with the shrinking space? … Dumping money in poor countries is a focus on the poor?

    As bad indicators drive out good ones, donors can raise their marks faster by gaming badly conceived indicators than by complying with the good ones. It seems to me the QuODA data should be triangulated with qualitative information before being let loose on the public. Moreover, the basis is the OECD indicators, which leads one not to expect indicators that make life difficult for bureaucracies.

  4. Rinus van Klinken

    Is there a rating for how donors support and practice adaptive management? That might say a lot about the quality of aid.
    That leaders like the multilaterals might have something to do with political economy: multilaterals do indeed tend to support policies and reforms, but of the technical kind, not necessarily the more difficult and political ones.
