Measuring Barriers to Big Data for Development

How can we measure the barriers to big data for development?  A research paper from Manchester’s Centre for Development Informatics suggests use of the design-reality gap model.

Big data holds much promise for development: to improve the speed, quality and consistency of a wide variety of development decisions[1].  At present, this is more potential than actuality because big data initiatives in developing countries face many barriers[2].

But so far there has been little sense of how these barriers can be systematically measured: work to date tends to be rather broad-brush or haphazard.  Seeking to improve this, we investigated use of an ICT4D framework already known for measurement of barriers: the design-reality gap model.

In its basic form the model is straightforward:

  • It records the gap between the design requirements or assumptions of big data and the current reality on the ground.
  • The gap is typically recorded on a scale from 0 (no gap: everything needed for big data is present) to 10 (radical gap: none of the requirements for big data is present).
  • The gap can be estimated through the researchers’ own analysis, derived directly from interviewees, or recorded from group discussions.
  • It is typically measured along seven “ITPOSMO” dimensions (see below).

As proof-of-concept, the model was applied to measure barriers to big data in the Colombian public sector, using evidence gathered from a mix of participant observation at two IT summits, interviews, and secondary data analysis.
[Figure: design-reality gap profile for big data in the Colombian public sector]

As summarised in the figure above, and in the short coded sketch that follows the list, the model showed serious barriers on all seven dimensions:

  • Information: some variety of data but limited volume, velocity and visibility (gap size 7).
  • Technology: good mobile, moderate internet and poor sensor availability with a strong digital divide (gap size 6).
  • Processes: few “information value chain” processes at work to put big data into action (gap size 7).
  • Objectives and values: basic data policies in place but lack of big data culture and drivers (gap size 7).
  • Skills and knowledge: foundational but not specialised big data capabilities (gap size 7).
  • Management systems and structures: general IT systems and structures in place but little specific to big data (gap size 7).
  • Other resources: some budgets earmarked for big data projects (gap size 5).
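
To make the scoring mechanics concrete, here is a minimal sketch in Python of the gap profile above. The function, the flagging threshold and the output format are our illustrative choices, not part of the published model:

    # The ITPOSMO gap profile reported above, on the model's 0 (no gap)
    # to 10 (radical gap) scale.
    ITPOSMO_GAPS = {
        "Information": 7,
        "Technology": 6,
        "Processes": 7,
        "Objectives and values": 7,
        "Skills and knowledge": 7,
        "Management systems and structures": 7,
        "Other resources": 5,
    }

    def summarise_gaps(gaps, threshold=6):
        """Print the mean gap and flag dimensions rated at or above threshold."""
        mean_gap = sum(gaps.values()) / len(gaps)
        print("Overall design-reality gap: %.1f / 10" % mean_gap)
        for dimension, rating in sorted(gaps.items(), key=lambda kv: -kv[1]):
            marker = "  <- priority barrier" if rating >= threshold else ""
            print("  %s: %d%s" % (dimension, rating, marker))

    summarise_gaps(ITPOSMO_GAPS)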

A simple summary would be that Colombia’s public sector has a number of the foundations or precursors for big data in place, but very few of the specific components that make up a big data ecosystem.  One can turn around each of the gaps to propose actions to overcome barriers: greater use of existing datasets; investments in data-capture technologies; prioritisation of value-generation rather than data-generation processes; etc.

As the working paper notes:

“Beyond the specifics of the particular case, this research provides a proof-of-concept for use of the design-reality gap model in assessing barriers to big data for development. Rephrasing the focus for the exercise, the model could equally be used to measure readiness for big data; BD4D critical success and failure factors; and risks for specific big data initiatives. …

We hope other researchers and consultants will make use of the design-reality gap model for future assessments of big-data-for-development readiness, barriers and risks.”

For those interested in taking forward research and practice in this area, please sign up with the LinkedIn group on “Data-Intensive Development”.

[1] Hilbert, M. (2016). Big Data for Development. Development Policy Review, 34(1), 135-174.

[2] Spratt, S. & Baker, J. (2015). Big Data and International Development: Impacts, Scenarios and Policy Options. Evidence Report no. 163, IDS, University of Sussex, Falmer, UK.

Steering e-Government Projects from Failure to Success

How do you turn a relatively unsuccessful e-government (or ICT4D) project into a relatively successful one?

There’s not a lot of guidance on this question.  Lists of success and failure factors are generic rather than specific to any one project, and are intended for analysis before the project starts.  Evaluation methodologies focus more on impact than implementation, and generally apply only after the project has ended.

What is needed is a “mid-implementation toolkit”: something that will both analyse where you’ve got to in the project, and recommend an improvement action plan for the future.  Researchers working alongside an Ethiopian e-government project have recently published the results of testing just such a toolkit.

Using the “design-reality gap” framework, the researchers gathered data from four different stakeholder groups involved with the e-government project, which had introduced a land management information system into one of Ethiopia’s city administrations.  The system was only partly operational and was not yet fully integrated into city administration procedures: it could therefore be described as a partial failure.

The design-reality gap framework helps measure any differences that exist between the project’s initial design expectations and current implementation realities.  It does this along seven dimensions (see figure below).

Where large gaps are found, these highlight the key and specific problem areas for the project.  In this particular e-government initiative, significant design-reality gaps were identified in relation to:

  • Management systems and structures (a failure to set up an ICT department and to hire permanent IT staff).
  • Staffing and skills (hiring only five of the required nine IT staff, with those hired falling short of the necessary qualifications and experience).
  • Project objectives and values (allowing some culture of corruption to remain among lower-level administrators).
  • Information systems (absence of one core system module and of digitised documents).
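
Because the gap data here came from four different stakeholder groups, one practical question is how to combine their views into a single rating per dimension. Below is a minimal sketch of one way to do so; the group names and numbers are invented for illustration, since the study does not publish per-group figures:

    from statistics import mean, pstdev

    # Hypothetical gap ratings (0-10) from four stakeholder groups for two
    # of the seven dimensions; all names and numbers are invented.
    ratings = {
        "Management systems and structures":
            {"managers": 8, "it_staff": 7, "registry_clerks": 6, "consultants": 7},
        "Staffing and skills":
            {"managers": 6, "it_staff": 8, "registry_clerks": 7, "consultants": 7},
    }

    for dimension, by_group in ratings.items():
        scores = list(by_group.values())
        # A high mean signals a serious gap; a high spread signals that the
        # groups disagree and the rating needs further discussion.
        print("%s: mean gap %.1f, spread %.1f"
              % (dimension, mean(scores), pstdev(scores)))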

These gaps demonstrated that the e-government system had not yet been institutionalised within the city government.  The gap analysis was therefore used as the basis for a discussion with senior managers.  From the analysis and discussion emerged two things.

First, identification of small gaps that had lain behind the partial success of the system – the commitment of project champions, process re-design being conducted prior to introduction of new technology, and stability in the information that was digitised onto the e-government system.

Second, identification of an action plan that would close the main extant gaps between design and reality: creating the proposed new ICT department, hiring additional IT staff, and setting up permanent positions with clearly defined salary scales and promotional criteria. These, in turn, would provide the basis for implementing the missing module, and scanning the missing legal documentation.

Not all the gaps can readily be closed: it will take a much longer process of cultural change before the last vestiges of corruption can be eliminated.  Nonetheless, design-reality gap analysis did prove itself to be a valuable mid-implementation tool.  It is helping steer this e-government project from partial failure to greater success.  And the authors recommend its use by e-government managers as they implement their projects: it has helped to focus management attention on key e-government project issues; it digs beyond just technical issues to address underlying human and organisational factors; and it offers a systematic and credible basis for project reporting and analysis.

Feel free to comment with your own experiences of design-reality gaps, or other mid-implementation techniques for e-government project analysis and improvement.

Can a Process Approach Improve ICT4D Project Success?

Many ICT4D projects fail[1].  There are various mooted reasons for this, of which I will highlight five here:

  • Failure to involve beneficiaries and users: those who can ensure that project designs are well-matched to local realities.
  • Rigidity in project delivery: following a pre-planned approach such as that mandated by methods like Structured Systems Analysis and Design Methodology, or narrow use of LogFrames.
  • Failure to learn: not incorporating lessons from experience that arises either before or during the ICT4D project.
  • Ignoring local institutional capacities: not making use of good local institutions where they already exist or not strengthening those which could form a viable support base.
  • Ineffective project leadership: that is unable to direct and control the ICT4D project.

This does not represent an exhaustive list of causes but one can find one or more of them in many failed ICT4D projects.  And they are deliberately selected because – if we turn them around to their mirror-image project enablers – they become the five key components of the “process approach” to development projects: beneficiary participation; flexible and phased implementation; learning from experience; local institutional support; and sound project leadership.

The process approach arose during the 1980s and 1990s as a reaction to the top-down, “blueprint” approach[2].  The blueprint approach was particularly associated with the use of foreign technologies in rural development projects.  Perhaps, then, it is no surprise that blueprint thinking has filtered through into ICT4D practice.

Equally, though, one can see elements of the process approach in action in successful ICT4D projects:

  • Beneficiary participation: the M-PESA mobile finance project in Kenya incorporated the views of users into project design through user trials and volunteer focus groups.
  • Flexible and phased implementation: India’s agricultural information kiosk project, e-Choupal, used a pilot approach for all new services; introducing them one-by-one and planning designs and scale-up on the basis of those pilots.
  • Learning from experience: Grameen incorporated the lessons from its microfinance projects into the design and delivery of its Grameen Phone programme of rural mobile telephony.
  • Local institutional support: Brazil’s community computing project, the Committee to Democratise Informatics, is founded on the development of local institutional capacity through each of the schools it creates.
  • Sound project leadership: returning to M-PESA again, Vodafone put skilled project managers in place in Kenya in order to make the project work.

Each one of these projects – and one can no doubt find many others within the ICT4D field – demonstrates more than one of these five elements.  This is not unexpected, since the process approach can be understood not as five rather arbitrarily-categorised, separate components but as an integrated whole.  It can be pictured as a wheel (see figure below[3]): flexible, phased implementation is the tyre that absorbs the bumps as the project rolls along, feeding contextual information to learning from experience, the central axle; from that axle the spokes of participation, local institutions and leadership radiate, giving strength to the whole.

 Figure 1: The ICT4D Process Approach Wheel

The process approach also reconceives the notion of success in ICT4D projects.  Instead of seeing success or failure as a cross-sectional, final judgement on a project, any judgement must – like a point on the rolling wheel – be seen as contingent and passing.  Instead of success and failure, we would therefore talk of multiple “successes” and “failures” as the project proceeds.  Any overall judgement would rest on the relevance of the ICT4D solution, opportunities for capacity building, and sustainability.  A process approach contributes to each of these.

And for ICT4D practitioners, a process approach can help pose questions:

  • What is the role of beneficiaries throughout the project’s stages?
  • What is the mechanism for changing direction on the project when something unforeseen occurs?
  • What is the basis for learning on the project?
  • What local institutions can be used for project support?
  • What is the nature of project leadership?

And so forth – these and other questions can lead to concrete plans, schedules and roles which incorporate the lessons of the process approach into future ICT4D activities.

This blog entry is a summary of the online working paper “Can a Process Approach Improve ICT4D Project Success?”, published in the University of Manchester’s Development Informatics series.

If you have experiences of ICT4D project failure or success to share, please do so via comments.


[1] Good data on the success/failure of ICT4D projects is embarrassingly limited, and more historical than recent.  See: “Information Systems and Developing Countries: Failure, Success and Local Improvisation”.

[2] A foundational paper is David Korten’s article “Community Organization and Rural Development: A Learning Process Approach”.

[3] Source: Bond, R. & Hulme, D. (1999). Process Approaches to Development: Theory and Sri Lankan Practice. World Development, 27(8), 1339-1358

Evaluating Computer Science Curriculum Change in African Universities

Effective use of ICTs in Africa requires a step change in local skill levels, including a step change in ICT-related university education.  Part of that process must be an updating of university computer science degree curricula – broadening them to include ICT and information systems subjects, moving them from the theoretical to the applied, and introducing modern teaching and assessment methods.

International curricula – such as those provided by organisations like the IEEE and the ACM – offer an off-the-shelf template for this updating.  But African universities are going to face challenges in implementing these curricula, which were designed for Western (typically US) rather than African realities.  And when curriculum change is introduced, African universities and Education Ministries need a systematic means to evaluate progress, to highlight both successes and shortcomings, and to prescribe future directions.

A recently-published case study – “Changing Computing Curricula in African Universities: Evaluating Progress and Challenges via Design-Reality Gap Analysis” – investigates these issues, selecting the case example of Ethiopian higher education.  In 2008, Ethiopia decided to adopt a new IEEE/ACM-inspired computing curriculum.  It moved from three-year to four-year degrees, introduced a new focus on skills acquisition, more formative assessment, greater diversity in teaching approaches, and a more practical engagement with the subject matter.

Most literature and advice about changes to ICT-related curricula have tended to focus on content rather than process.  As a result, there has been a lack of systematic guidance around the implementation of curriculum change, particularly in relation to the evaluation of change.

In the Ethiopian case, the design-reality gap model was brought into play since it has a track record of helping evaluate ICT-related projects in developing countries.  The explicit objectives and implicit expectations built into curriculum design were compared with the reality found after implementation.  This enabled assessment of the extent of success or failure of the change project, and also identification of those areas in which further change was required.

The gaps between design and reality were assessed along eight dimensions – summarised by the OPTIMISM acronym, and as shown in the figure below.

Using field visits to nine universities and interviews with 20 staff based around the OPTIMISM checklist, the evaluation process charted the extent to which the reality – some 18 months after the curriculum change guidance was issued by the Ministry of Education – matched the design objectives and expectations.
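
One way to record such a checklist is as a set of design-versus-reality entries, one per dimension, each carrying a gap rating. Here is a minimal sketch with two entries that paraphrase findings discussed below; the gap numbers and the record structure are ours, for illustration only:

    from dataclasses import dataclass

    @dataclass
    class DimensionAssessment:
        """One checklist dimension: design expectation vs. observed reality."""
        dimension: str
        design_expectation: str
        observed_reality: str
        gap: int  # 0 (no gap) to 10 (radical gap)

    # Illustrative entries paraphrasing the findings reported below.
    assessments = [
        DimensionAssessment(
            "Technology",
            "specialist computing labs plus general-purpose classrooms",
            "no specialist labs; basic equipment shared across all degrees",
            gap=8,
        ),
        DimensionAssessment(
            "Milieu",
            "a national environment conducive to curriculum change",
            "new proclamations, new agencies and supportive officials",
            gap=2,
        ),
    ]

    for a in sorted(assessments, key=lambda x: -x.gap):
        print("%s (gap %d): designed for '%s'; found '%s'"
              % (a.dimension, a.gap, a.design_expectation, a.observed_reality))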

The evaluation found a significant variation among the different checklist dimensions, as shown in the figure below. 

For example, the new curriculum expected a combination of:

  • Specialist computer classrooms to support advanced topics within the subject area, and
  • General-purpose computer classrooms to teach computer use and standard office applications to the wider student body.

Yet in most universities, there were no specialist computing labs, and ICT-related degrees had to share relatively basic equipment with all other degree programmes.

Similarly, the spotlight focus of curriculum change on new student skills had tended to throw into shadow the new university staff skills that were an implicit design requirement for change to be effective.  The evaluated reality was one in which a largely dedicated and committed teaching community was hampered by the limitations of their own prior educational experience and a lack of computing qualifications and experience.

But progress in other areas had been much better.  The national-level environment (milieu) had changed to one conducive to curriculum change.  Formally, two new Educational Proclamations had been issued, supporting new teaching methods and new learning processes; and two new public agencies had been created to facilitate wider modernisation in university teaching.  Informally, Ministry of Education officials were fully behind the process of change.

Similarly, university management systems and structures had been able to change; assisted by the flexible approach to structures that was particularly found in Ethiopia’s new universities, and by a parallel programme of business process re-engineering within all universities.

Evaluation using the design-reality gap model was therefore a means of measuring progress, but it was also a means of identifying those gaps that continued to exist and which needed further action.  It thus, for example, led to recommendations of ring-fencing a capital fund for technology-related investments; some redirection of resources from undergraduate to postgraduate in order to deliver the necessary staffing infrastructure; and a reconsideration of some curriculum content to make it more Ethiopia-specific (in other words, changing the design to bring it closer to local realities).

There were challenges in using the design-reality gap model for evaluation of curriculum change: allocating issues to particular OPTIMISM dimensions, and drawing out the objectives and expectations along all eight dimensions.  Overall, though, the model provided a systematic basis for evaluation; one that ensured comprehensive coverage, and one through which findings could be readily summarised and communicated.

The full case study can be found here.  Other pointers are welcome to materials on computer science curriculum change in developing countries, including specific materials on the evaluation of such changes.

Why ERP Systems in Developing Countries Fail

Enterprise resource planning (ERP) systems are increasingly being used in business organisations in developing countries, and also in the public and NGO sectors.  ERP promises to integrate data systems – financials, logistics, HR, etc. – across the organisation, thus saving money and improving decision-making.  But the failure rate for ERP implementations is high, with particular problems found in developing country organisations.

A new research paper from the University of Manchester’s Centre for Development Informatics analyses why ERP systems in developing countries fail: https://www.gdi.manchester.ac.uk/research/publications/di/di-wp45/

It draws evidence from an in-depth Middle East case study, and first applies an analytical model based on DeLone & McLean’s work.  This assesses the success or failure of any ICT project against five evaluation criteria: system quality, information quality, use and user satisfaction, individual impact, and organisational impact.  It provides an objective basis for identifying the case study ERP system as an almost-complete failure.

A second analytical model – the design-reality gap framework – was then used to explain why this ERP implementation failed.  Using rating-scale evidence gathered on seven ‘ITPOSMO’ dimensions, this shows there was a large gap between ERP system design expectations and case organisation realities prior to implementation.

This is often true of ERP systems, since they seek to make significant changes within client organisations.  However, the design-reality gap analysis was repeated later on, showing that gaps did not close during implementation, as they need to do for a successful system.
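
Tracking whether gaps close can be as simple as repeating the rating exercise and comparing the two profiles. A minimal sketch, with invented numbers for three of the seven dimensions:

    # Hypothetical ITPOSMO gap ratings (0-10) at two points in time; all
    # numbers are invented. For a successful implementation the later
    # figures should fall; here, echoing the case study pattern, they
    # barely move.
    pre_implementation = {"Information": 8, "Technology": 7, "Processes": 8}
    mid_implementation = {"Information": 8, "Technology": 6, "Processes": 8}

    for dimension, before in pre_implementation.items():
        after = mid_implementation[dimension]
        verdict = "closing" if after < before else "not closing"
        print("%s: %d -> %d (%s)" % (dimension, before, after, verdict))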

Practical recommendations for risk identification and mitigation are outlined, based both on closing specific design-reality gaps during ERP implementation and on a set of generic gap-closure techniques such as the development and use of ‘hybrid’ professionals.

In research terms, the case demonstrates the value of the DeLone & McLean model for categorising ERP and other information system project outcomes, and the value of the design-reality gap model for analysing project implementation and explaining why project outcomes occur.

A revised version of the paper has been published in the Journal of Enterprise Information Management: http://www.emeraldinsight.com/10.1108/17410391011019741

Other experiences of ERP or similar enterprise system implementations in developing countries would be welcome as comments.