What makes Adaptive Management actually work in practice?

February 27, 2018

This post by Graham Teskey, one of the pioneers of ‘thinking and working politically’, first appeared on the Governance Soapbox blog.

It’s striking how important words are. USAID calls it Adaptive Management, DFAT calls it Thinking and Working Politically, DFID calls it Politically Informed Programming, and the World Bank just ignores it altogether.

More seriously – what is at issue here? At heart, I would argue that this agenda – TWP, DDD or even PDIA – means four things:

  • being much more thoughtful and analytical at the selection stage (thinking about what is both technically appropriate and what is politically feasible);
  • being more rigorous about our theories of change (how we judge change actually happens in the sphere in question) and our theories of action (how and why the interventions we propose will make a difference);
  • our ability to work flexibly (meaning to respond to changing policy priorities and contexts, and by adapting implementation as we go – changing course, speeding up or slowing down, adding or dropping inputs and activities, changing sequencing etc.);
  • and our willingness and ability actively to intervene alongside, and support, social groups and coalitions advocating reform for the public good.

It is the third characteristic that this blog is about: working flexibly.

It is the flexibility of TWP-informed programming that usually attracts most attention. Many words are used interchangeably and uncritically: flexible, responsive, adaptive, agile, nimble etc. As noted above TWP emphasises responsiveness and adaptation. In the programs that I have worked on recently, it is adaptation that poses the major challenges to TWP: the ability to change course as implementation proceeds.

In discussions, the simple answer often given is that we remain wedded to the project framework and the annual plan and budget because, well, that is what we do and that is what the donor wants, and it’s important not to miss our spending target or drift off plan. I think this answer is clear, simple and wrong. Let me try to explain why.

One way to iterate – from ‘Building State Capability’

When looking at issues of organisational change, public service reform and ‘capacity development’, it is now commonplace to structure the analysis in terms of three ‘layers’ or ‘levels’: the individual, the organisational and the institutional. We have known this for twenty years or so. At the individual level, staff need to be trained, skilled and equipped with appropriate tools to do the job. At the organisational level, appropriate systems, structures and procedures need to be in place. And at the institutional level, the ‘rules of the game’ have to incentivise a culture of performance and results. In most evaluations of public service reform or case studies of organisational change, we have found that it is one thing to help strengthen individual competencies and improve organisational structures and systems, but it is another thing altogether to address adequately the institutions that incentivise performance. We have learned that turning individual competence into organisational capacity requires institutional change.

But in implementing TWP I think this is turned on its head: currently there are many incentives in place for TWP. Donors say they want it – implementing partners certainly want to do it. But the constraints are at the organisational and individual level. The reason is that adaptation in program delivery requires four functions to be delivered simultaneously:

  • implementation: the day-to-day, week-to-week task of delivering activities (how are we doing on physical progress?);
  • monitoring: the regular and frequent checking of progress towards achieving outputs (are we on track against the plan, the budget – and most importantly – against outputs and possibly outcomes?);
  • learning: our internal and reflexive questioning of progress – what are we learning about translating inputs and activities into outputs and outcomes (what is working and what isn’t?); and
  • adapting: revising our implementation plan, adding unforeseen activities and dropping others, changing the balance of inputs, be they cash, people or events etc. (how are we changing the plan?).

Only if we ‘learn as we go’ can we adapt in real time: this requires delivery (implementation), data collection (monitoring), learning (reflection) and adapting (changing) to be undertaken simultaneously, not sequentially. And it is here I believe that we run into severe constraints at the organisational and individual levels.

At the organisational level, the development industry has got into the bad habit of bracketing the two very different tasks of monitoring and evaluation: development practitioners are programmed to say “monitoringandevaluation” all in one word, as if the two are actually one. Only recently has L (learning) been added – but added to ‘MandE’ to form MEL. The result in project management is that responsibility for monitoring is handed over to structurally separate functional units far removed from operational delivery and implementation. Staff responsible for delivery say “monitoring is nothing to do with me”. And of course MandE staff tend to evaluate ex post, rather than in real time.

At the individual level, it is hard to imagine implementation staff with the skills and competencies (let alone the time) to undertake the four functions noted above. The skills required for efficient and effective implementation against a plan and a budget are not the same as the skills for assessing progress, analysing what has worked and why, and having the experience and judgement to know which parts of the plan need adapting and in what direction – all in real-time.

The answer seems pretty straightforward: break up MEL by allocating monitoring and learning responsibilities to implementation teams; increase their resourcing by building implementation teams with multiple skill sets and competencies; insist on regular and robust internal review and reflection exercises (at least monthly); clarify precisely the level of delegated responsibility and authority for adaptation to be given to implementation teams; and negotiate all this with the donor and the partner government.

Or is this answer clear, simple and wrong too?


  1. “For every complex problem there is an answer that is clear, simple, and wrong.”

    With each piece that I read on this topic, many of which are from this blog, I’m finding myself getting increasingly frustrated, and feeling like this whole line of thought has gone down a cul-de-sac. We’ve been saying the same things for quite a while now, but things haven’t really changed: our development isn’t that different; our thinking and working not that much more political; our programmes not hugely more flexible nor adaptive.
    On one hand this might mean that the process of change across the industry is ongoing, and so we need to keep up with the message – strong and stable! – and I have some sympathy with this, as there’s no little evidence that the words donors speak aren’t matched by practice. Or it might mean that these truisms, even if they are to some extent reasonable, justifiable and right, are not enough.
    I have some thoughts:

    Old Dog, Old Tricks
    There’s an unspoken assumption that programmes were previously rigid, technocratic and uncontextualised, and that now, on the other side of the Rubicon, they are the polar opposite. I suggest that programmes and interventions were pretty much as adaptive and politically aware then as they are now – it’s basically impossible not to have been – and/or that even if there’s more talk of progressive ideas and practice now, it’s not borne out in reality.
    To some extent, what did we expect? You can’t have all but a few variables remain exactly the same, and expect that by injecting some good intentions and a lot of writing this was going to change a whole huge system and its embedded culture. Development institutions are, by and large, political bureaucracies. Their cultures and practice need to be understood in this context, and I think we need to recognise that these aren’t going to change much or fast, so we need to adapt within them.

    The Low Politics of High Ideals
    Extending from this, I think that a large part of the development world believes, explicitly or implicitly, that what we do is apolitical. I think this is not only dead wrong, but also flies in the face of the idea of ‘thinking and working politically’. If we’re not doing it ourselves, and doing it as far, high and wide as possible, I think we’re failing to be realistic about what we do, how we do it, and therefore how to do it well.
    Here I would suggest that this is both Politics and politics. It’s about completely understanding how things work at all levels – Whitehall, ministries, country governments, communities and many more. We are effectively in the business of attempting to improve societies, which are formed of and run by people. People are not easy: as much as we want them to be rational agents (thanks again, economists!), evidence from absolutely everywhere suggests that they aren’t. People tend to respond to emotion rather than evidence; they are easily swayed; they are attracted by easy wins; they don’t have time or energy to think about all problems and solutions. Yeah sure, that’s not you, but you know who I’m talking about.
    If this form of understanding isn’t at the forefront of what development practitioners do EVERY DAY, we can’t expect things will work.

    Too Big to Not Fail
    What is a programme? It’s lots of things: an idea; a Theory of Change; a vehicle for values; a bureaucratic entity; a thing valued at an amount of money; a way of changing the world. What it’s not is perfect, or a plan for delivery that should be stuck to.
    I say it’s too big to not fail, but actually it’s too big, too small and too short. It’s too big because programmes almost always end up being collections of smaller projects, which tend to be much more successful because it’s easier to work with smaller geographies: fewer people, you can see the boundaries of systems, it’s practically more doable, and managers can actually hold the whole thing. But rarely do these cohere back into a single entity that forms a consistent whole (other than in the beautiful fictions of the annual report). They’re too small because even a programme of £50m is a drop in the well of a national economy. They’re too short because the lofty ideals they aim for are generational.
    I think that what underlies this is that we are all involved in a process of attempting to shift the world towards a set of accepted norms and values. For me it’s not about making Utopia – it seems dull, and isn’t likely, based on the entirety of human history – but certainly about removing the worst excesses, ironing out gross inequities, and having a basic framework of values within which our messy existence can happen. If you don’t even think there is a basic set of shared values, I’m going to ask for the bill.
    And if we do that, then the programme starts to make sense: it’s an expression of this somewhat lofty endeavour, but within which we can deliver real benefit and change at a local level, and we just have to accept the bureaucratic burden this comes with. It is for the leaders in the development sector to make and defend this case.

    It’s the Smart, Stupid
    I return sometimes to DFID’s Smart Rules, which I think were the best recent attempt to get an organisation to change its culture and foster better programming. I take two things from this: the Rules themselves, and what they led to. First is the vein of responsibility that runs through them – yes, there are systems and processes and bureaucracy and managers and logframes, but at some point someone’s got to make decisions, and everything should work towards those decisions being as good as possible. And this links to point two, which is that people seem to have focused a lot more on ‘Rules’ than ‘Smart’.
    Smart, for me, is about taking all the rules and systems and, even if briefly, putting them away and being able to say ‘what are we trying to do, and how do we do it?’ I should say there are plenty of people I’ve met who embody this very well, but as a colleague says ‘culture eats strategy for breakfast’ and this is still the exception. Of course you still have to fill the forms and please the bosses and ministers – as I say, recognising the politics and the bureaucracy is essential.

    So, to answer Graham’s question, it’s definitely not just about doing MEL better. And the World Bank did some amazing work on exactly this in the 2015 World Development Report.

  2. Graham’s point that the bundling of Monitoring Evaluation Learning isn’t helpful rings true for me. MEL is becoming a preferred approach to understanding the performance of development activities. To my mind learning is the newcomer to the MEL model, added to provide impetus to the existing objective of making use of conclusive findings after the M&E process, and to the new objective of making use of provisional findings during the M&E process. This reflects a level of dissatisfaction with the impact of M&E investments on current and future planning and strategy.

    So, on the surface this bundling sounds like a good thing? We have robust methods for monitoring and evaluation, and by adding learning we create a more useful performance assessment approach. But is there a danger that by adding the L to M&E we have somehow missed the opportunity to reflect more critically on the challenges of traditional M&E approaches? This is especially the case when, at the same time, evaluation itself is becoming increasingly complicated, alien and costly – for example, through drives for ever greater analytical rigour and external validity via the application of Theory of Change and Randomised Controlled Trial approaches.

    It would seem that what is happening is that learning is being adopted as a cure-all for the uptake shortcomings of M&E, whatever the methods used. A more critical perspective would ask to what extent the practice and culture of mainstream M&E supports or hinders individual, group, and social learning.

    Perhaps the opportunity at hand is as Graham suggests, to separate out evaluation from monitoring and see how a learning approach to monitoring could better support adaptive management during implementation.

    Monitoring, let’s face it, is not very fashionable. Isn’t it about accounting for inputs and outputs using simple and largely quantitative measures (costs, units, throughput, etc)? Compared to evaluation, there are few career paths or awards for monitoring professionals. This may be the case, but when we look at what the implementers of development activities do and care most about themselves within the scope of MEL, it is monitoring. That’s because it is most often driven by internal and positive incentives (what is going on right now that I need to know?) and used most rapidly in management and real time decision making. Evaluation is often by contrast seen as being driven by external or at least independent actors, for accountability to directors and funders, with low internal incentives and sluggish use in management.

    So, what if monitoring could get a much needed boost of interest and innovation and be seen as a good thing in itself for adaptive management, planning and strategic thinking, and much less from the evaluation perspective as primarily a source of data for more sophisticated methods of analysis? Moreover, might monitoring practice re-invigorated by organisational learning methods be more appropriate for the kinds of development programming that are becoming mainstream (innovation, challenge funds, global funds, etc)?

    To put it another way, are design and management approaches like Problem Driven Iterative Adaptation, Prototyping, Thinking and Working Politically, Fail Fast, Agile, etc. not pointing to the need for more learning-orientated evaluation practice, but rather challenging the whole idea that development activities are something that should be conceptualised or (as is often the case) designed as objects that are evaluable? There is, I think, too much of the evaluation tail wagging the design dog in development, with the content and process of activities being shaped by what can be rigorously and validly analysed.

    Here are seven features we might look for in re-invigorated monitoring approaches:
    1. Check for emergent inputs and outputs (outside of the design scope)
    2. Bias evidence capture towards outliers and over-performers
    3. Look for patterns and holes
    4. Analyse evidence streams in real time
    5. Talk about findings continuously
    6. Iterate the design as events emerge
    7. Work out loud to encourage others to amplify understanding

  3. Agree very much on putting M and L into the responsibilities of those doing the implementing. Evaluation of contribution to outcomes needs to be independent and planned at the outset. Even more difficult to pull off is marrying all that to a requirement to forecast monthly spend, and be accurate enough at least over the calendar and financial year. Difficult but not impossible…I hope.
