
Can a Process Approach Improve ICT4D Project Success?

29 November 2011

Many ICT4D projects fail[1].  There are various mooted reasons for this, of which I will highlight five here:

  • Failure to involve beneficiaries and users: those who can ensure that project designs are well-matched to local realities.
  • Rigidity in project delivery: following a pre-planned approach such as that mandated by methods like Structured Systems Analysis and Design Methodology, or narrow use of LogFrames.
  • Failure to learn: not incorporating lessons from experience gained either before or during the ICT4D project.
  • Ignoring local institutional capacities: not making use of good local institutions where they already exist or not strengthening those which could form a viable support base.
  • Ineffective project leadership: leadership that is unable to direct and control the ICT4D project.

This does not represent an exhaustive list of causes but one can find one or more of them in many failed ICT4D projects.  And they are deliberately selected because – if we turn them around to their mirror-image project enablers – they become the five key components of the “process approach” to development projects: beneficiary participation; flexible and phased implementation; learning from experience; local institutional support; and sound project leadership.

The process approach arose during the 1980s and 1990s as a reaction to the top-down, “blueprint” approach[2].  The blueprint approach was particularly associated with use of foreign technologies in rural development projects.  Perhaps, then, it is no surprise that it has filtered through into ICT4D practice.

Equally, though, one can see elements of the process approach in action in successful ICT4D projects:

  • Beneficiary participation: the M-PESA mobile finance project in Kenya incorporated the views of users into project design through user trials and volunteer focus groups.
  • Flexible and phased implementation: India’s agricultural information kiosk project, e-Choupal, used a pilot approach for all new services, introducing them one-by-one and planning designs and scale-up on the basis of those pilots.
  • Learning from experience: Grameen incorporated the lessons from its microfinance projects into the design and delivery of its Grameen Phone programme of rural mobile telephony.
  • Local institutional support: Brazil’s community computing project, the Committee to Democratise Informatics, is founded on the development of local institutional capacity through each of the schools it creates.
  • Sound project leadership: returning to M-PESA again, Vodafone put skilled project managers in place in Kenya in order to make the project work.

Each one of these projects – and one can no doubt find many others within the ICT4D field – demonstrates more than one of these five elements.  This is not unexpected, since the process approach can be understood not as five rather arbitrarily-categorised, separate components but as an integrated whole.  It can be conceived of as a wheel (see figure below[3]): flexible, phased implementation is the tyre that absorbs the bumps as the project goes along, feeding contextual information to learning from experience, the central axle from which the spokes of participation, local institutions and leadership radiate, giving strength to the whole.

 Figure 1: The ICT4D Process Approach Wheel

The process approach also reconceives the notion of success in ICT4D projects.  Instead of seeing success or failure as a cross-sectional, final judgement on a project, any judgement must be seen – like a point on the rolling wheel – as contingent and passing.  Rather than a single success or failure, we would therefore talk of multiple “successes” and “failures” as the project proceeds.  Any overall judgement would rest on the relevance of the ICT4D solution, opportunities for capacity building, and sustainability.  A process approach contributes to each of these.

And for ICT4D practitioners, a process approach can help pose questions:

  • What is the role of beneficiaries throughout the project’s stages?
  • What is the mechanism for changing direction on the project when something unforeseen occurs?
  • What is the basis for learning on the project?
  • What local institutions can be used for project support?
  • What is the nature of project leadership?

And so forth – these and other questions can lead to concrete plans, schedules and roles which incorporate the lessons of the process approach into future ICT4D activities.
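As a purely hypothetical illustration of how these questions might become a reusable review instrument, the sketch below encodes the five process-approach components as a simple checklist; the structure and naming are illustrative assumptions, not anything prescribed by the working paper.

```python
# Hypothetical sketch: the five process-approach questions encoded as a
# reusable project-review checklist. Structure and names are illustrative
# assumptions, not part of the working paper.
from dataclasses import dataclass

@dataclass
class ReviewItem:
    component: str    # process-approach component
    question: str     # question to ask of the project
    answer: str = ""  # filled in during each periodic review

PROCESS_APPROACH_REVIEW = [
    ReviewItem("Beneficiary participation",
               "What is the role of beneficiaries throughout the project's stages?"),
    ReviewItem("Flexible and phased implementation",
               "What is the mechanism for changing direction when something unforeseen occurs?"),
    ReviewItem("Learning from experience",
               "What is the basis for learning on the project?"),
    ReviewItem("Local institutional support",
               "What local institutions can be used for project support?"),
    ReviewItem("Sound project leadership",
               "What is the nature of project leadership?"),
]

def unanswered(review):
    """Return the components still lacking a documented answer."""
    return [item.component for item in review if not item.answer]
```

Because the process approach treats judgements as contingent and passing, such a checklist would be revisited at each project phase rather than completed once at design time.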

This blog entry is a summary of the online working paper “Can a Process Approach Improve ICT4D Project Success?”, published in the University of Manchester’s Development Informatics series.

If you have experiences of ICT4D project failure or success to share, please do so via comments.


[1] Good data on success/failure of ICT4D projects is embarrassingly limited, and more historical than recent.  See: “Information Systems and Developing Countries: Failure, Success and Local Improvisation”.

[2] A foundational paper is David Korten’s article “Community Organization and Rural Development: A Learning Process Approach”.

[3] Source: Bond, R. & Hulme, D. (1999). Process Approaches to Development: Theory and Sri Lankan Practice. World Development, 27(8), 1339-1358.

Evaluating Computer Science Curriculum Change in African Universities

27 October 2011

Effective use of ICTs in Africa requires a step change in local skill levels, including in ICT-related university education.  Part of that process must be an updating of university computer science degree curricula – broadening them to include ICT and information systems subjects, moving them from the theoretical to the applied, and introducing modern teaching and assessment methods.

International curricula – such as those provided by organisations like the IEEE and the ACM – offer an off-the-shelf template for this updating.  But African universities are going to face challenges in implementing these curricula, which were designed for Western (typically US) rather than African realities.  And when curriculum change is introduced, African universities and Education Ministries need a systematic means to evaluate progress, to highlight both successes and shortcomings, and to prescribe future directions.

A recently-published case study – “Changing Computing Curricula in African Universities: Evaluating Progress and Challenges via Design-Reality Gap Analysis” – investigates these issues, selecting the case example of Ethiopian higher education.  In 2008, Ethiopia decided to adopt a new IEEE/ACM-inspired computing curriculum.  It moved from three-year to four-year degrees, and introduced a new focus on skills acquisition, more formative assessment, greater diversity in teaching approaches, and more practical engagement with the subject matter.

Most of the literature and advice about changes to ICT-related curricula has tended to focus on content rather than process.  As a result, there has been a lack of systematic guidance around the implementation of curriculum change, particularly in relation to evaluating change.

In the Ethiopian case, the design-reality gap model was brought into play since it has a track record of helping evaluate ICT-related projects in developing countries.  The explicit objectives and implicit expectations built into curriculum design were compared with the reality found after implementation.  This enabled assessment of the extent of success or failure of the change project, and also identification of those areas in which further change was required.

The gaps between design and reality were assessed along eight dimensions, summarised by the OPTIMISM acronym and shown in the figure below.

Figure: The OPTIMISM checklist dimensions

Using field visits to nine universities and interviews with 20 staff based around the OPTIMISM checklist, the evaluation process charted the extent to which the reality – some 18 months after the curriculum change guidance was issued by the Ministry of Education – matched the design objectives and expectations.
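To make the mechanics of such an assessment concrete, here is a minimal Python sketch of how gap ratings might be tallied.  It assumes the rating convention often used in design-reality gap analysis, where each dimension is scored from 0 (design matches reality) to 10 (complete design-reality gap); the dimensions and scores shown are invented for illustration and are not the study’s actual data.

```python
# Minimal illustrative sketch of tallying a design-reality gap assessment.
# Assumed convention: each dimension rated 0 (design matches reality) to
# 10 (complete design-reality gap). Scores below are invented examples,
# not the Ethiopian study's actual findings.

gap_scores = {
    "Technology": 8,                         # e.g. no specialist computing labs
    "Staffing and skills": 7,                # e.g. limited staff computing experience
    "Management systems and structures": 3,  # structures proved able to change
    "Milieu": 2,                             # supportive national environment
}

def overall_gap(scores):
    """Mean gap across the rated dimensions (higher = greater risk)."""
    return sum(scores.values()) / len(scores)

def needs_action(scores, threshold=6):
    """Dimensions whose gap meets the threshold, largest gap first."""
    return sorted((d for d, s in scores.items() if s >= threshold),
                  key=scores.get, reverse=True)

print(f"Overall gap rating: {overall_gap(gap_scores):.1f} / 10")
print("Further action needed on:", needs_action(gap_scores))
```

In practice such numbers would be grounded in the qualitative evidence from interviews and site visits, serving to summarise and compare dimensions rather than as precise measurements.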

The evaluation found significant variation among the different checklist dimensions, as shown in the figure below.

Figure: Design-reality gap ratings across the OPTIMISM dimensions

For example, the new curriculum expected a combination of:

  • Specialist computer classrooms to support advanced topics within the subject area, and
  • General-purpose computer classrooms to teach computer use and standard office applications to the wider student body.

Yet in most universities, there were no specialist computing labs, and ICT-related degrees had to share relatively basic equipment with all other degree programmes.

Similarly, the spotlight focus of curriculum change on new student skills had tended to throw into shadow the new university staff skills that were an implicit design requirement for change to be effective.  The evaluated reality was one in which a largely dedicated and committed teaching community was hampered by the limitations of their own prior educational experience and a lack of computing qualifications and experience.

But progress in other areas had been much better.  The national-level environment (milieu) had changed to one conducive to curriculum change.  Formally, two new Educational Proclamations had been issued, supporting new teaching methods and new learning processes; and two new public agencies had been created to facilitate wider modernisation in university teaching.  Informally, Ministry of Education officials were fully behind the process of change.

Similarly, university management systems and structures had been able to change, assisted by the flexible approach to structures particularly found in Ethiopia’s new universities, and by a parallel programme of business process re-engineering within all universities.

Evaluation using the design-reality gap model was therefore a means of measuring progress, but it was also a means of identifying those gaps that continued to exist and which needed further action.  It thus, for example, led to recommendations to ring-fence a capital fund for technology-related investments; to redirect some resources from undergraduate to postgraduate programmes in order to deliver the necessary staffing infrastructure; and to reconsider some curriculum content to make it more Ethiopia-specific (in other words, changing the design to bring it closer to local realities).

There were challenges in using the design-reality gap model to evaluate curriculum change: allocating issues to particular OPTIMISM dimensions, and drawing out the objectives and expectations along all eight dimensions.  Overall, though, the model provided a systematic basis for evaluation, one that was assuredly comprehensive, and one through which findings could be readily summarised and communicated.

The full case study can be found here.  Pointers to other materials on computer science curriculum change in developing countries are welcome, including materials on the evaluation of such changes.

 
