How to Monitor and Evaluate an Adaptive Programme: 7 Takeaways

Gloria Sikustahili, Julie Adkins, Japhet Makongo and Simon Milligan

I generally try to avoid ‘inside baseball’ aid discussions that make sense mainly to practitioners, but this piece by Gloria Sikustahili, Julie Adkins, Japhet Makongo and Simon Milligan was so interesting and sensible that I made an exception.

We’ve all been there. We’ve drowned under the weight of programme documentation: the need to capture everything, to report everything, to be seen to be accountable for all our actions or inactions. Yet on other occasions we’ve all sighed with exasperation because the programme we’re tasked with supporting has very little to help us understand what’s happened along the way, and why decisions were made.

So how do we strike the right balance? Whose needs are we meeting? And how do we better negotiate and prioritise the needs of different documentation users?

The DFID- and Irish Aid-funded Institutions for Inclusive Development (I4ID) programme in Tanzania – a “test tube baby” for iterative programming and adaptive management – explores new ways of tackling wicked problems in a variety of areas, such as solid waste management, inclusive education, and menstrual health. As a programme designed to be agile and opportunistic, it needs fast feedback loops so that staff and partners can make informed decisions about if and how to adjust strategy and tactics during implementation. It must also be accountable for its performance and generate lessons about how change occurs.

Solid Waste Management: The problem is messy, and so are the solutions

So, as the programme enters its final months of implementation, what lessons have we learned about M&E in an adaptive programme in terms of what, when and how to document and for whom? Here are our seven takeaways:

  1. All parties benefit when a genuine attempt is made to understand not only the programme theory of change, but also the basis for certain actions during the adaptive process. It pays to build rapport with donors and reviewers; to understand their interests, needs, concerns and motivations. While this might not necessarily lead to fewer requests, it can lead to more streamlined responses; responses that ‘tick the box’ first time around. Annual reviewers and mid-term evaluators also need to understand the interests of implementers and programme managers rather than focusing only on donors’ interests. This means reviewers should take time to understand the programme’s journey to date and its future trajectory; balancing the interests of both parties is a necessity. All too often we, as human beings, find ourselves believing certain things of our partners, be they donors, reviewers, implementers, partner agencies or others. We can project our own insecurities, values, beliefs, assumptions and prejudices onto others, and can react negatively when faced with “another silly demand for yet more documentation”.
  2. Utilisation is key, and this needs to shape decision-making about what and when to document, and for whom. In much the same way as Michael Patton champions Utilization-Focused Evaluation, programmes should adopt a similar principle: the need to document should be judged on its likely usefulness to its intended users, including donors, reviewers, managers, implementers and partners. Without agreement and clarity about purpose, intended users and relative costs, programmes end up documenting for documentation’s sake. But with actions come consequences. Excessive time spent ‘feeding the beast’ is time taken away from working towards an intended outcome.
  3. Documentation needs and demands must be negotiated and balanced: there is an ever-present risk of prioritising the documentation needs of funders and reviewers at the expense of managers and implementers, and doing so can be counterproductive. Don’t get us wrong: we aren’t saying this is a binary choice. Yet the reality is that those working hands-on, at the front line, make decisions in real time and do so against a backdrop of uncertainty and incomplete or implicit understanding.

By contrast, donors and reviewers are often involved in slower, more structured processes. This meeting of two related yet different realities creates an environment in which different needs, expectations and demands must be successfully navigated, even negotiated. Senior managers, such as team leaders, often find themselves acting as a buffer between the programme and the donor. Inevitably, unclear, excessive or competing expectations about what must be documented create uncertainty, bias and inefficiencies.

All information users – donors, reviewers, implementers and partners – should distinguish what must be documented and reported from what might be documented and reported, and from what they would like to see, or even love to see, documented and reported in an ideal world (thanks to outcome mapping for this prompt). As a general rule of thumb, key information should be synthesised and summarised as headlines. Brevity in documentation forces clarity of thought and aids the production of ‘formal’ reporting (annual and semi-annual reports, case studies, and even blogs!) when required.

  4. To aid programme performance, donors must make choices and recognise the consequences of their signals. We acknowledge that all parties (funders, managers and implementers) require a certain level of documentation for accountability and sense-making purposes. Programmes have many constituencies or stakeholders, not least the donor agencies, and each party has its own needs and interests in documentation.

Of course, where programmes are under-performing, close scrutiny is expected and necessary. However, staff within donor agencies hold positions of power. General queries or requests for information can be construed as demands – demands that often require the time-consuming compilation of documentation without a clear rationale, and that can result in airbrushed content which overlooks the messy realities. Unfiltered lists of questions, requests and comments from a variety of donor staff in response to a report, for example, can tie up implementers as they seek to make sense of, justify and explain, resulting in a ping-pong to-and-fro which might have been better resolved over a cup of coffee. A confident programme with confident, capable and experienced staff can push back, but this takes time and trust, and neither comes overnight.

  5. Reflection and documentation are two related, important, yet different things. I4ID, like many agile programmes, is built on experimentation. Staff value reflection, but the process must be shaped by a desire to improve performance. To borrow from Graham Teskey’s recent blog post, purposeful reflection allows implementers to reach a decision about specific workstreams (i.e. this is what we are going to do from here). And that reflection-and-capture need not translate into extensive documentation. Yet challenges arise when donors and reviewers seek to make sense of often messy realities a number of months after the event. This can lead to a situation in which events and decisions are fully documented not because doing so is valuable to implementation, but to deal with possible future needs and demands. The answer? Understanding, keeping line of sight on what matters, and accepting ‘good enough’.

The most successful staff capture-and-reflection platforms are those founded on real-time discussion and action. Early efforts in I4ID to document key events on a weekly basis using a Word-based template, and again on a monthly basis using an Excel-based dashboard, stagnated within 12 months. By contrast, the platforms that thrived at I4ID – the Monday morning meetings, staff WhatsApp groups, and the Quarterly Strategic Reviews by the programme team, donors and some invitees – were founded on real-time exchange of reflections and ideas. This suggests that staff respond more favourably to live, interactive platforms than to impersonal capture and storage. These platforms should be well organised and managed to avoid biases and defensiveness.

  6. Synthesis occurs most readily where discussions are well structured. Discussion is great, yet it must lead to something. For that to occur, use should be made of three key questions in core events (e.g. weekly meetings, strategic reviews) and associated minutes: What? So what? Now what? For example: what has happened in the operating environment over the last month, what programme effects have we seen, and what lessons have we learned about how change occurs? So what does that mean for us and, specifically, for whether and how our programme can best support the reform agenda? Now what should be done, and by whom, not least before we next meet? Objective discussions require good facilitation to avoid bias and defensiveness.

  7. Donors should consolidate and localise oversight functions wherever possible. The decision to have multiple levels of oversight – the donor agencies themselves, the annual reviewers appointed by the donors to verify the claims made by a programme, and the external evaluators appointed by the donors to capture lessons identified by the programme – creates a living organism with many needs and expectations. Sometimes these are aligned, other times not. So it is right to ask: at what point can oversight functions (e.g. an external Results and Challenge team that produces annual reviews, and a separate external Mid-Term and End-of-Programme Evaluation team) be consolidated or streamlined? Donors should also ensure that local institutions are included in the evaluation and review teams, to get a local perspective on the process and results, build capacity among local stakeholders and promote adaptive programming locally.

Comments

9 Responses to “How to Monitor and Evaluate an Adaptive Programme: 7 Takeaways”
  1. Rob Cartridge

    Thanks for a very insightful summary of takeaways from an adaptive programme. I was struck by the need to differentiate between learning and documentation. The requirement to document does not always come from donors but from a desire to share lessons with others, avoiding duplication of mistakes. Whilst this may be less urgent than adapting the programme at hand, it can also be important. We need to think about how to resource it without creating a heavy admin system that lapses. Having dedicated resource to share lessons beyond the programme may be the answer?

    • Simon Milligan

      Hi Rob. Thanks for this. You’ve touched on an interesting issue. Wholeheartedly agree with the importance of sharing ideas and lessons. However, I’ve often been struck by three things. First, is the challenge of determining when one’s offerings are credible enough – informed enough – to draw attention to them, to invite discussion of them, etc. Put differently, when is something ‘good enough’ to take beyond the trusted inner circle? (Graham Teskey’s beautifully written blog – see above – is well worth a read).
      Secondly, what does that need for a filter – an internal check – say about our sector or ‘business’? Can we be too adversarial? Is there space for multiple perspectives and early impressions if the machine favours certainty and clarity? And on a slightly different front, does the incentive to speak to the latest fad undermine the need to talk through less glamorous issues?
      Finally, I’m agnostic, perhaps even sceptical, about the value of or value placed on social media. The sheer volume of information (and insights?) can be overwhelming. Indeed, I fear there may be a tendency for people to talk/shout at or past each other. So do we pay sufficient attention to listening and engagement? As a fan of cooking (and eating), I sometimes think the notion of slow food vs fast food could be applied to engagement. So, slow engagement anyone?

  2. Nikki Zimmerman

    Great points! In my experience, the point about trust between donor and partner is incredibly important and underlies all the points here. Without trust, accountability remains the overarching priority and doesn’t give learning and potential adaptability room to breathe or mature.

  3. Kerry Selvester

    Great blog from our neighbours! We are working on an urban programme for women’s economic empowerment in Mozambique, and I second the challenges you highlight. Adding to your takeaways, we also found the notion of ‘just-in-time’ analysis useful: we would analyse and present, in an accessible PPT format, the M&E data available at the time of the regular six-monthly reflection meetings, to guide the what? so what? and now what? session. Handing this over to the operations team put the power back in their hands – your data, your decisions. We also consciously separated the facilitated reflection meetings from the decision-making/resource-allocation meetings to preserve the safe space for tough talks on what really works and what doesn’t.

  4. Masood Ul Mulk

    I simply do not understand how you can have a single M and E in the development world. You will need two. One has to do with the world of the donors: the log frames, the results agendas, the theories of change, and so on. The other is the day-to-day reality in which the implementers have to work every second of the project. This deals with the relationships, the frowns, the superstitions, the jealousies and envies one has to work with to make the project go ahead. They require people of different knowledge and skills. In both there would be monitoring and evaluation, but of different kinds: one to meet the accountability requirements of the donors, and the other to deal with the reality on the ground and the accountability to those whose lives the project touches. Make the M and E of the donors’ world deal with the daily wranglings that the implementers undergo and the learning going on there, and they will come out very confused people.

    • Good to see your name, Masood; I just corresponded with Maqsood & Siraj :) I understand what you are getting at, but I do feel that MonEv needs to be synergized between the donors and the implementers. As you well know, even donors may be heterogeneous, in that they could represent a conglomeration of funders, and staff could be from different countries, inherently having varied sociocultural and experiential backgrounds. Implementers, for their part, may also be a non-homogeneous group from different socioeconomic, cultural, linguistic, religious and educational backgrounds. Ideally, both “groups”, if you will, need to engage in a process to better understand each other so that any project activity – conceptualisation, implementation, MonEv… – can be carried out collaboratively, as seamlessly as possible. All sides must be or become aware of the process through which the particular project is to be executed. If one side focuses on certain aspects and the other on different aspects, there will rarely be a holistic understanding of what, why and how it is being done. To optimize the success of an effort, the process is as important as the result; and if the process is disjointed or “siloed”, then the whole project is destined to be fragmented.

  5. Masood Ul Mulk

    I agree with your point that there is a need for collaboration. I began my work in M and E and ended up managing organisations. The one abiding lesson, even from highly successful projects, was that M and E needs to have these two parts, which cater for the different needs within projects and among stakeholders. There is no reason why they cannot cooperate, but it is important to understand the different perspectives.

  6. Thanks for this – I think it really is difficult to escape the tension between the ‘prove’ and ‘improve’ agendas in adaptive programmes – this is a really valuable set of takeaways. The idea that the platforms that thrived the most were those founded on real-time exchange of reflections and ideas particularly resonated. I’ve been part of a team (along with Adam Kessler, Aly Miehlbradt and Hans Posthumus) which has been working on developing more practical ways to assess ‘system change’ – that frequently referred to but too often nebulous beast. The lessons that came out of that work very much complement those here. Our conclusion, after talking to many practitioners across multiple programmes, is that measuring system change is more do-able than most people think – but as is pointed out here, documentation can become onerous if there isn’t a bit of a triage system based on different documentation users’ needs (love that phrase!). For anyone interested, an overview is laid out here – https://beamexchange.org/resources/1334/ – and more detailed ‘how to’ guidance will be published soon.
