The Politics of Data – the bit the geeks forget?

October 23, 2015

Had a really thought-provoking conversation with Dustin Homer of Development Gateway last week. Development Gateway was originally set up inside the World Bank, then spun off as an independent tech organisation, and focuses on helping governments and international organizations make better use of data in their decision-making.

So far, so technocratic, but Dustin got in touch because he read my piece on the politics of SDG implementation, and it struck a chord. After years of experience, Development Gateway seems to be going through the same process of realization (and adaptation) that Twaweza in East Africa have gone through: access to data and information alone doesn’t automatically lead to any changes in policy or behaviour. DG is now doing a lot of research about how to promote data uptake, and Dustin had some intriguing initial ideas about what is really going on:

  • A lot of data collection and use is largely symbolic. High officials put pressure on low officials to collect it, then put it all together and present it to donors as an offering. But ‘the enabling environment for data-based decisions is often non-existent’.
  • That reporting burden may be symbolic, but it is huge, eating up staff time on data collection without leaving time/incentives/skills for useful analysis. Dustin was struck by a conversation with an agricultural extension worker who had two parallel data collection systems – one she had devised herself, which was actually useful in doing her work, and an entirely separate one she had to fill in for the bosses back in the ministry.
  • Instead, Dustin has had some conversations along the lines of ‘look, if some big cheese phones up and tells me to do something, I do it, regardless of what the data tells us.’
  • Changing that requires champions at senior level – as Matt Andrews has found with PDIA, that can create a permissive environment that allows mid-level technocrats to get on with some data-based policy.
  • There are often pockets of data-based management, many of them at local level, where officials seem to have more latitude – and we can probably do a lot more to support them.

What emerged from this is a picture of an enormous, multi-billion dollar data machine that has so far been largely supply-driven by data providers and lobbyists. There isn’t enough user-centred design (asking decision makers what data they actually need to do their jobs, then helping them devise efficient ways to collect it). Some are calling for a change of direction toward demand-driven, locally-relevant data solutions, but too often, international reporting mandates win the day. International players often pre-suppose what is needed and offer solutions or systems that meet certain reporting requirements – but may not get used for much else.

Some are trying to push beyond this – for example, the government and local donors in Ghana tried to turn the tables by doing a gap analysis of their data needs, but they are still trying to rally support (and funding) to do something about it.

All the momentum behind Big Data, data scraping, etc is if anything flying ever higher above the ground level where decisions are actually being made, and if we’re not careful that could come at the expense of local decision-makers. It would be horribly ironic if the move to Big/Open Data saw citizens become mere data generators – the object of data, rather than the subject.

So where’s the demand-led, bottom-up data movement? Either at the top – something like the International Growth Centre, which would offer what amounts to a free, top-level consultancy to developing country governments – or more radically, some ‘barefoot data’ movement that tries to build a new bottom-up approach to collecting data (but it would have to be data that can be aggregated to the point where it helps government decision makers). Who is the Robert Chambers of data? Would love to see some examples if you have them.

Two more general points:

There is bound to be a trade-off between local relevance and comparability, both between places and over time. How much effort should go into generating beautiful data that is only relevant to one village, and can’t be compared with any others or even with the same village a couple of years later?

Thinking back to the Twaweza example, it seems to me that we have to look much more closely at the arrows in our theory of change diagrams. You know, things like Access to Data → Better decisions. The problems always arise from the assumptions implicit in the arrows – maybe every arrow should have an explanatory footnote in our project documents.
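
Just to make that concrete, here is a minimal sketch in Python (illustrative names only, not anyone’s actual tool) of a theory of change in which every arrow is forced to carry its own footnote of assumptions, so that arrows with no documented assumptions can be flagged:

```python
from dataclasses import dataclass, field

@dataclass
class Arrow:
    """One arrow in a theory of change, with its explanatory footnote."""
    source: str                         # e.g. "Access to data"
    target: str                         # e.g. "Better decisions"
    assumptions: list[str] = field(default_factory=list)

def undocumented_arrows(theory):
    """Return every arrow whose assumptions have not been spelled out."""
    return [arrow for arrow in theory if not arrow.assumptions]

theory = [
    Arrow("Access to data", "Better decisions",
          assumptions=["officials have the time and skills to analyse the data",
                       "decisions are not simply overridden by a call from the big cheese"]),
    Arrow("Better decisions", "Better services"),  # no footnote yet
]

for arrow in undocumented_arrows(theory):
    print(f"'{arrow.source} -> {arrow.target}' has no documented assumptions")
```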

12 comments

  1. Interesting examples in this. I think the huge volume of writing on making M&E useful and getting research into practice/policy helps suggest the conditions needed for the data → good decisions arrow to work – there are a lot of similar themes to those you discuss above (time for analysis, decisions based on politics, intended users being involved in design of data collection systems, etc). I’m currently working on how monitoring data affects actions in healthcare and there are a whole series of steps and conditions, e.g. whether intended users think the data is credible and relevant, ability to interpret the data, knowing how they should respond to the data, having the resources to act on the data… And then another whole set of uncertainties around whether actions in response to the data actually lead to positive outcomes. (Thanks also to Chris for flagging up that article – looks good.)

  2. I am reading E. F. Schumacher’s “Small is Beautiful” at the moment, and I find myself wondering how the Big Data issue fits within his statement:
    “What is it that we require from the scientists and technologists? I should answer: We need methods and equipment which are:
    – cheap enough so that they are accessible to virtually everyone;
    – suitable for small-scale application; and
    – compatible with man’s need for creativity”
    At the moment it feels like Big Data is an instrument of scientists and professionals, or those in power, used for their own specialist analyses (or not used at all, as pointed out above) which do not lead anywhere. For the most part the data is not accessible to an ordinary person in any practical way. Are we far too clever, without being wise? How about thinking of ways to use “Small Data”, allowing ordinary people to generate their own data in order to improve their lives somehow? I am no expert, but perhaps the specialists could consider this?

    1. Hi Mirjana, we’re doing lots on citizen-generated data here at Civicus via our DataShift initiative which sounds a lot like the small data you speak of. The idea is to build the capacity of civil society organisations to generate and use citizen-generated data to monitor development progress, demand accountability and campaign for transformative change. http://www.thedatashift.org

  3. I don’t disagree with anything in this post, but I’m a little uncomfortable with the concept of “data-based policy” or “data-based decisions”. Data (no matter how “big”) doesn’t tell us anything in isolation: analysis and interpretation are always needed, and those steps are even more challenging than collecting the data in the first place.

  4. There are lots of interesting points raised in this.
    You note that there is more and more data becoming available, but that this often doesn’t meet the needs of those on the ground who need to make evidence (or data) informed decisions. I think that one of the issues is that data is not synonymous with information (or evidence) that can inform policy. Significant resources are often invested in collecting and storing data, whether that be (1) the development of databases (2) training people to collect and enter data into these databases (3) big data initiatives to access different types of data. However, to make that data useful for those on the ground there is a need for data processing and analysis. This may be quite complex and time consuming (especially when data is not collected as part of a survey, experiment or carefully designed study, or when the question that requires an answer needs to draw together different data sources). However, in comparison to data collection and storage, the resources and attention given to data processing and analysis are often minimal, neglected or ignored. Without this the data is (pretty) useless.
    In addition, you talk about individuals becoming data providers rather than data users as data is funnelled upwards to be used for global (in whatever context we are operating in) analyses, to provide evidence at the global level. I agree that this is a danger. What I think is required here is that data and information (that is, processed data and analyses) need to flow both ways. In addition to global analyses, it is important that information is provided at a level that is useful for the data providers themselves. This may be summaries and analyses of their own data, but also analyses that put their data into context. This, in my experience, has added benefits in that the quality of data from data providers is likely to increase as they understand and benefit from it themselves. This flow of data in both directions is not necessarily easy to organise, but if it is thought about when data collection systems are being designed, it could help to ensure the utility of the data at all levels in the information chain.
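    As a minimal illustration of that two-way flow (facility names and figures invented), records can be rolled up for the higher-level picture while each provider gets back its own summary alongside the aggregate for context:

    ```python
    from collections import defaultdict
    from statistics import mean

    # Invented records: two facilities reporting the same indicator.
    records = [
        {"facility": "Clinic A", "value": 34},
        {"facility": "Clinic A", "value": 41},
        {"facility": "Clinic B", "value": 12},
    ]

    def two_way_summary(records):
        """Aggregate upwards, and hand each facility back its own average
        alongside the overall average so it can see its data in context."""
        by_facility = defaultdict(list)
        for record in records:
            by_facility[record["facility"]].append(record["value"])
        overall = mean(record["value"] for record in records)
        return {
            facility: {"own_average": mean(values), "overall_average": overall}
            for facility, values in by_facility.items()
        }

    for facility, summary in two_way_summary(records).items():
        print(facility, summary)
    ```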

  5. Great post! The examples you highlight on the symbolic nature of data collection chime with experiences my colleagues and I have had in NTB province, Indonesia. We found that midwives loyally complete nine paper-based databases (many of which have overlapping entries) that then disappear up the hierarchy. The midwives spend hours completing the forms, distracting them from serving their communities, and derive next to no benefit from the exercise. In response, we are developing a single-entry digital database with the midwives, which, importantly, should give them insights from the data they collect. This could be a good research opportunity for anyone interested in the ‘what happens next’.

    The work with the midwives was part of a bigger research/participatory design exercise connected to the idea of data independent villages – communities that generate, interpret and use their own data in better understanding their context and in decision making (hat tip to Mirjana). The work of the Urban Poor Consortium and Solo Kota Kita is instructive in this regard – http://www.urbanpoor.or.id & http://solokotakita.org

    On data innovation for public policy, I find that applied ‘small data’ initiatives, such as crowdsourcing high-frequency food price data (http://bit.ly/1jYcTbY) or urban vulnerability mapping, are more popular with public officials and development programmes than big data projects, many of which don’t have sufficient granularity or targeting at the moment. And in the rare cases where sufficient granularity exists, I agree with Rob, analysis and interpretation is key – a blend of big data and ethnographic research perhaps? I recently listened to Mizah Rahman speak persuasively on this point – https://twitter.com/mizahrahman & http://bit.ly/1MIi72H

  6. The gap you’ve highlighted between ‘access to data’ and ‘better decisions’ is one of the things we at INASP and our partners in the VakaYiko Consortium are exploring in Ghana, South Africa, Zimbabwe, and Uganda. We believe we need to pay more attention to the people and processes that link the data to the decisions. Government researchers and policy analysts are some of the key actors providing this link. Here are two things we’ve heard from them loud and clear:
    1. Individual skills to analyse evidence are important. The African Union points out in its STI Strategy for Africa that “many of the officials involved in or responsible for drafting policy documents do not have the necessary skills or training” in evidence based policy making. And capacity building for use of evidence is one of NEPAD’s priority capacity building areas for the continent. So far we’ve trained research staff from 18 government institutions in 2 countries in skills for evidence informed policy, and we hear consistently from them that data collection, analysis and communication are areas where they need further support.
    2. There are basic issues around internal information systems. The researchers we’ve been working with (mostly from ministries and parliaments) tell us that often other departments don’t want to share their information, or there are lengthy bureaucratic systems which slow down the flow of information from one department to another. There are also a host of IT issues, ranging from no backup of information, to limited use of shared servers for internal information sharing, to lack of data analysis software, limited resources for printing, and poor internet connection. Practicality is crucial, as is taking an approach which emphasizes connections and networks between different parts of the system.
    The Consortium has been exploring various ways to support the organisational processes as well as the individuals that link information to decision makers. We’re sharing some of our reflections and lessons here: http://www.inasp.info/en/work/vakayiko/publications/ and welcome comments and feedback from others working on similar issues.

  7. A bottom-up data movement is an alluring vision, and at World Vision we have been scaling up support for citizen monitoring, spotting the opportunity for aggregated data on the quality of service provision at the local level to be pieced together to create a national picture. Citizen monitoring reaches those places that national statistics cannot reach, and our experience is that decision makers really value it. However, there is an inherent tension between allowing citizens themselves to define measures of quality (evidence shows that social accountability initiatives in which communities themselves identify quality criteria are more effective) and requiring standardised measures that will enable aggregation. This is one reason why a project we started on developing a database that will capture and aggregate citizen scorecard outputs has been tricky to develop, so would value any experiences others have had on squaring this circle!
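    For illustration only (this is a sketch, not our actual database design): one way to square that circle is to let each community keep its own wording for quality criteria, but tag every local criterion with a shared dimension agreed at design time, so that scores can still be aggregated upwards:

    ```python
    from collections import defaultdict
    from statistics import mean

    # Each community keeps its own wording; only the 'dimension' tag is standardised.
    scorecard_entries = [
        {"community": "Village 1", "criterion": "Nurse speaks our language",
         "dimension": "respectful care", "score": 3},
        {"community": "Village 2", "criterion": "Staff explain the treatment",
         "dimension": "respectful care", "score": 4},
        {"community": "Village 1", "criterion": "Medicines are in stock",
         "dimension": "availability", "score": 2},
    ]

    # Roll scores up by shared dimension to build the national picture.
    rolled_up = defaultdict(list)
    for entry in scorecard_entries:
        rolled_up[entry["dimension"]].append(entry["score"])

    for dimension, scores in rolled_up.items():
        print(dimension, round(mean(scores), 1))
    ```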

    1. Hi Daniel, we’ve been starting to grapple with some of the same issues at Civicus via our DataShift initiative (http://www.thedatashift.org). We’ve not done much on harmonisation/comparability yet but this will become an increasing priority for us in 2016 as we try and link the citizen-generated data projects we are supporting (and others) with efforts to monitor the SDGs. We’ve been in touch with your colleague Besinati Mpepo relatively recently and are looking forward to hearing more about your citizen scorecard database in due course. In the meantime do join our mailing list so we can keep you posted on our efforts on this subject: http://bit.ly/1kkTH98 and do get in touch if you’d like to know any more about what we’re up to. What team are you with at WVI? Hopefully there’s scope for us to do something collaborative on this at some point soon!

  8. I just had a conversation yesterday with a leader of a non-profit organisation that mediates between the local government, the private sector and non-profits in a village that was devastated by the 2011 tsunami in Japan.

    The crisis the tsunami brought left people with nothing to lose, and that has helped break the silo walls among these three sectors.
    The government wanted to provide a policy and regulatory environment conducive to local businesses, but didn’t know what was needed. Local businesses had the will and skills to do something, but were not sure where to start.

    They learned from experiences of the demand-driven data project in New Orleans after Katrina and adapted it to their situation, which helped them formulate reconstruction build-back-better plans.
