What Kind of Evidence Influences Local Officials? A Great Example from Guatemala

I met Walter Flores at a Twaweza seminar in Tanzania a couple of months ago, but have only just got round to reading his fascinating paper reflecting on 10 years of trying to improve public health in Guatemala. It is short (12 pages), snappily written, with a very crisp, hard-hitting thesis, so no need to do more than provide some excerpts to whet your appetites:

The Setting: ‘In Guatemala, because of decentralization, local public services are governed primarily by municipal governments and the Ministry of Health’s local and regional branches. This means that local authorities are actually in a position to address some issues of service quality, corruption, and abuse—though not deeper systemic issues, such as health budgets.

The Project: Over the last decade, the Centro de Estudios para la Equidad y la Gobernanza de los Sistemas de Salud (the Center for the Study of Equity and Governance in Health Systems, or CEGSS) has considered the question of how to use evidence to influence authorities and promote participation by users of public services in rural indigenous municipalities of Guatemala.

The Process: Our initial approach relied on producing rigorous evidence by surveying health care facilities using random samples. However, when presented to the authorities, this type of evidence did not have any influence on them. In the follow-up phases, we gradually evolved our approach to employ other methods of collecting evidence (such as ethnography and audiovisuals) that are easier for the non-expert public and the users of public services to grasp.

We found that evidence collected, analyzed, and systematized by the users of the health system was key to engaging the authorities. This conclusion was based on a systematic analysis of different methods for gathering evidence CEGSS used to document the conditions and user experience of local health services.

From 2007 up to now, we have implemented five different methods for gathering evidence:

1) Surveys of health clinics with random sampling,

2) Surveys using tracers and convenience-based sampling,

3) Life histories of the users of health services,

4) User complaints submitted via text messages,

5) Video and photography documenting service delivery problems.

Each of these methods was deployed for a period of 2-3 years and accompanied by detailed monitoring to track its effects on two outcome variables:

1) the level of community participation in planning, data collection and analysis; and

2) the responsiveness of the authorities to the evidence presented.

Our initial intervention generated evidence by surveying a random sample of health clinics—widely considered to be a highly rigorous method for collecting evidence. As the surveys were long and technically complicated, participation from the community was close to zero. Yet our expectation was that, given its scientific rigor, authorities would be responsive to the evidence we presented. The government instead used technical methodological objections as a pretext to reject the service delivery problems we identified. It was clear that such arguments were an excuse and authorities did not want to act.

[Flores, fig. 1]

Our next effort was to simplify the survey and involve communities in surveying, analysis, and report writing. However, as the table shows, participation was still “minimal,” as was the responsiveness of the authorities. Many community members still struggled to participate and the authorities rejected the evidence as unreliable, again citing methodological concerns. Together with community leaders, we decided to move away from surveys altogether, so authorities could no longer use technical arguments to disregard the evidence.

For our next method, we introduced collecting life-stories of real patients and users of health services. The decision about this new method was taken together with communities. Community members were trained to identify cases of poor service delivery, interview users, and write down their experiences. These testimonies vividly described the impact of poor health services: children unable to go to school because they needed to attend to sick relatives; sick parents unable to care for young children; breadwinners unable to go to work, leaving families destitute.

This type of evidence changed the meetings between community leaders and authorities considerably, shifting from arguments over data to discussing the struggles real people faced due to nonresponsive services. After a year of responding to individual life-stories, however, authorities started to treat the information presented as “isolated cases” and became less responsive.

We regrouped again with community leaders to reflect on how to further boost community participation and achieve a response from authorities. We agreed that more agile and less burdensome methods for community volunteers to collect and disseminate evidence might increase the response from authorities. After reviewing different options, we agreed to build a complaint system that allowed users to send coded text messages to an open-access platform.

[Flores, fig. 2]

We also wanted to continue to facilitate communities to tell their stories and experiences. Instead of presenting life-stories as text, we began helping communities to use photography and video to document their stories. Audiovisual evidence proved a powerful method to attract the interest of traditional media and other civil society organizations. Also, by using coded complaints sent via text messages and sending electronic alerts and follow-up phone calls to authorities, we were able to draw attention to service delivery problems in real time. This resulted in a “high” level of both community engagement and responsiveness from officials.

And the big takeaway conclusion? In contrast to theories of change that posit that more rigorous evidence will have a greater influence on officials, we have found the opposite to be true. A decade of implementing interventions to try to influence local and regional authorities has taught us that academic rigor itself is not a determinant of responsiveness. Rather, methods that involve communities in generating and presenting evidence, and that facilitate collective action in the process, are far more influential. The greater the level of community participation, the greater the potential to influence local and regional authorities.’

Great stuff. The big omission, for me, is any discussion of why this is the case. Why do officials find testimonies more convincing than hard data? Is it because they prefer a human narrative, or because these are the voters who can influence their political masters? More research needed, as ever.
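A quick aside on the mechanics of the complaint platform described in the excerpt above: the paper covers it only at a high level (users send coded text messages to an open-access platform, which triggers alerts and follow-up with the authorities). Purely as an illustration, here is a minimal Python sketch of what such a coded-SMS intake might look like. The “SALUD” keyword, the complaint codes, and the message format are all invented for this example; the actual CEGSS system may work quite differently.

```python
# Hypothetical sketch of a coded-complaint intake pipeline (NOT CEGSS's
# actual system): parse a coded SMS, map the code to a problem category,
# and format a real-time alert for the responsible authority.

from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative complaint codes; the real platform's codes are not
# described in the paper.
COMPLAINT_CODES = {
    "01": "clinic closed during posted opening hours",
    "02": "illegal payment requested",
    "03": "medicine out of stock",
    "04": "discriminatory treatment",
}

@dataclass
class Complaint:
    community: str
    code: str
    description: str
    received_at: datetime

def parse_sms(body: str) -> Complaint:
    """Parse a message like 'SALUD 02 San Marcos' into a structured complaint."""
    keyword, code, community = body.split(maxsplit=2)
    if keyword.upper() != "SALUD" or code not in COMPLAINT_CODES:
        raise ValueError(f"unrecognized complaint message: {body!r}")
    return Complaint(
        community=community,
        code=code,
        description=COMPLAINT_CODES[code],
        received_at=datetime.now(timezone.utc),
    )

def format_alert(complaint: Complaint) -> str:
    """Format the electronic alert sent to the responsible official."""
    return (
        f"[{complaint.received_at:%Y-%m-%d %H:%M} UTC] {complaint.community}: "
        f"{complaint.description} (code {complaint.code})"
    )

if __name__ == "__main__":
    complaint = parse_sms("SALUD 02 San Marcos")
    print(format_alert(complaint))
```

The appeal of a numeric-code design like this is that complaints stay short, cheap to send from basic phones, and easy for community volunteers to aggregate, which fits the paper’s emphasis on agile, low-burden methods.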


Comments

14 Responses to “What Kind of Evidence Influences Local Officials? A Great Example from Guatemala”
  1. Chris A

    Interesting, thank you. Another question on context and ‘portability’: what circumstances and context enable these outcomes, and to what extent might we expect different outcomes from similar interventions in Uganda for example? Also, were there political changes over those years that could partially explain increased responsiveness of government?

  2. Sam

    If gov’t officials respond to less rigorous academic methods, yet the more rigorous methods provide a more accurate picture of what is going on, then it would seem that the solution would not be to abandon the latter for the former. That seems like a dangerous path, perhaps one that much of development work was on previously. Instead, one possible way to think about this is that one must complete the rigorous study and then, after the fact, collect easier-to-understand evidence that aligns with the results.

    • Walter Flores

      Dear Sam. Thanks for your comment. In my paper, I do not suggest abandoning “rigorous academic methods”. I describe our experience in situating “evidence” within a specific social and political context. I also explain that our two outcome goals were community participation and responsiveness from authorities. In the specific context of rural indigenous communities of Guatemala, with high social exclusion and inequities, evidence generated through rigorous academic methods did not have a significant effect on the two outcome indicators. Also, the interest of our accountability action is not in measuring traditional health outcomes (mortality, morbidity) but in monitoring access barriers (discrimination, illegal payments, opening hours of facilities and more) with the purpose of eliciting action and responsiveness by local and regional level authorities. Do we need rigorous academic methods to monitor these kinds of access barriers? Also, in the experience that I describe in my paper, generating evidence is not an end in itself but just the starting point to open channels of engagement between users of services and government officials. I invite you to read our most recent paper, which describes in detail the role of evidence and accountability within the wider goal of social inclusion and the democratic governance of public services. (https://opendocs.ids.ac.uk/opendocs/bitstream/handle/123456789/13685/IDSB492_10.190881968-2018.133.pdf?sequence=1&isAllowed=y) I would be happy to answer any more questions. Regards

  3. Walter Flores

    Duncan, thanks for referring to my paper. About the question of why officials find testimonies more convincing than hard data: some explanation is within the text but maybe not explicit, so I will explain it briefly here. The reason is collective action by users of services. It is very different for one single person to present himself/herself before a health official with a demand than for a group of delegates representing many communities. As I explain on page 11 of the paper, community defenders employ a diverse range of civic action tactics to negotiate and resolve service problems with the authorities. This has included publishing evidence in the media and displaying it at public exhibits, street demonstrations, requests to parliamentarians, reporting cases of abuse and corruption to public prosecutors, and calls for observation from the official human rights ombudsman. Communities mobilize around data that they understand and are able to collect and analyze by themselves. But the two reasons you mention are also true: these empowered citizens are also voters, and human stories communicate better. But a good story or voters may not go far without the support of “collective power”. My co-author and I explain this in detail in a new paper just published a few weeks ago: https://opendocs.ids.ac.uk/opendocs/bitstream/handle/123456789/13685/IDSB492_10.190881968-2018.133.pdf?sequence=1&isAllowed=y

  4. Mary Morgan

    Walter, don’t you think that the ministry of health folks are just not moved by hard data – I mean, who is! We have to look at Guatemala’s context: so many years of genocidal war, then the narcos move in, and there is not much stability at the community level, where minimal social cohesion exists.

    What you are doing is awesome because in your research process you are empowering people to tell their stories and to present them in a way that is effective for all, and others will hear the stories and something is touched in them. Heck, the ministry of health people know that the system is broken, and the stories will resonate and speak to them because their neighbor, a family member, or they themselves have been impacted by such poor health service delivery.

    Presenting the facts in a way that touches the hearts and minds of the readers, whoever they are, is the goal of the evidence. And let’s face it, number crunching sometimes just doesn’t get most people motivated to do something to influence systemic change, because it is numbers and not a story that reveals the complexity of the human experience.

  5. Annette Fisher

    It’s an interesting question that you raise, Duncan. I read Walter’s paper a while back and found it compelling; it also matched evidence from elsewhere. A project I worked on in Pakistan found that evidence generated by the community themselves was at first considered by the health department to be of no use, as it wasn’t collected by formal government monitors. But the communities were then able to show the health officials that their evidence complemented government data, monitored access indicators which the government was not monitoring, and revealed gaps in service delivery because the community was monitoring from the user perspective. This led to certain government officials (in one province) taking up the data and adding some of the community-monitored indicators to their own dashboard.
    I am currently doing research in Gujarat in India on exactly this topic: can community-generated data have a powerful role in the accountability process? I’ll update when I’m further through the research!

  6. Varja

    Thanks Walter and Duncan – I have seen this paper presented by Walter and really appreciate it. Two points and one question to add.
    I am more of a skeptic when it comes to picking sides in the qual vs. quant divide over what is more persuasive. With any type of data, of course, astute, compelling, targeted communication is key. But I think the process of how evidence generated by / from communities is heard (and accepted, and possibly acted on) by government authorities can occur along different trajectories. In Tanzania, for example, civil society organizations for years produced plenty of well-communicated and poignant qualitative evidence (stories, testimonies, etc.) on the dismal quality of education in public schools — and yet it could be dismissed as just a compilation of anecdotes, particularly when contrasted with government-produced “hard numbers” on exam results and other statistics (leaving aside the credibility of gov data). It took Uwezo – i.e., the generation of independent, nationally and district-representative (so pretty granular), quantitative (i.e. “scientific”) data on the actual levels of learning – to force the point, publicly, that while certain indicators can look good (e.g. enrollment), it doesn’t mean that children are actually learning much in schools. And of course it was attacked primarily for the method — being a citizen-implemented learning assessment. But it persisted for years (from 2009 to the present), consistently (and reliably, as evaluated by several independent efforts) showing poor learning outcomes, in spite of the millions being poured into the education sector. It took a long time, but eventually the relevant ministries admitted to a learning crisis and began to discuss it publicly, looking for solutions. In this case, the journey began with anecdotal stories, but what influenced government was the solidity of the independent quantitative data. Neither the Guatemala story nor the Tanzania story has a counterfactual, of course, but my hunch is that data / evidence (of any kind) often gets dismissed initially — and that it’s the combination of various kinds of evidence, and the persistence of it, that has the biggest chance of swaying the opinions / attitudes of authorities. Persistence is key — many politicians (bureaucrats) will try to just ride out a storm; only when it repeatedly hits them and refuses to go away will change begin to take shape.
    The other point which Walter’s paper makes is that of enabling community representatives to speak for themselves. This is impressive, and hard to do well — from the (old) literature on participation we know it can go from the worst – manipulation – all the way to “citizen controlled.” I would love to learn more about this particular process: how was it developed, who led it, how were decisions taken, was it at any point dismissed by the authorities, how did the communities adjust to this, etc.? Is there something from CEGSS I can read about this?
    Last but not least – I didn’t quite get from the paper (perhaps I missed it?) how the local and regional authorities responded to the community evidence and pressure. In other words, what were the effects in terms of service delivery and/or accountability?

    • Walter Flores

      Varja, thanks for your comments. I totally agree with you that it is not useful to discuss whether quant or qual is better. In my paper, my argument is that evidence should be situated within the social and political reality in which we are implementing our actions. Once we are immersed in that reality, we can start thinking strategically about what is needed and which method, process or tool would be most useful for our purposes. In your example of Tanzania, you describe clearly a social and political reality in which producing large quantitative databases and evidence was needed and allowed you to advance. We must also remember that the experience I describe in my paper is with officials at local and regional government. It may well be that national level officials are moved or influenced through different methods. We started working with national level authorities over a year ago and are now closely monitoring the interactions, results and methods. I also agree with you that to influence officials and decision makers, different methods and types of evidence are needed, not just a single one. And we are seeing exactly this situation with officials at national level. Our current strategy, together with communities, is to back up the life stories we have with results from national surveys. For instance, we have life stories about how illegal payments become a major barrier for pregnant women requiring emergency care. The national survey on social and economic conditions has several indicators on families that faced barriers to accessing services. Our argument to officials at national level is that, as the survey indicates, there are several thousand families in rural areas of Guatemala facing similar barriers. So far we have succeeded in generating interest from some parliamentarians who want to know more about these barriers and ways to address them. Hopefully we will be able to report back, in a couple of years, about the situation with national level officials. In terms of the results of these actions for service delivery and accountability, you are right that these are not discussed in the paper. A summary of the results in terms of improving service delivery and democratic governance is in this recently published paper: https://opendocs.ids.ac.uk/opendocs/bitstream/handle/123456789/13685/IDSB492_10.190881968-2018.133.pdf?sequence=1&isAllowed=y This article also answers in some detail your questions about how this experience was developed, who led it and how decisions were taken. Regards

      • Varja

        Thanks, Walter, for the comprehensive reply and the links to further materials. I’m very much in line with your thinking, and would love to talk more about linking participatory / granular / persuasive data generation to national-level priorities and interests — an approach Twaweza is really keen on!
