Evaluating Computer Science Curriculum Change in African Universities

Effective use of ICTs in Africa requires a step change in local skill levels, including in ICT-related university education.  Part of that process must be an updating of university computer science degree curricula – broadening them to include ICT and information systems subjects, moving them from the theoretical to the applied, and introducing modern teaching and assessment methods.

International curricula – such as those provided by the IEEE and the ACM – offer an off-the-shelf template for this updating.  But African universities will face challenges in implementing these curricula, which were designed for Western (typically US) rather than African realities.  And when curriculum change is introduced, African universities and Education Ministries need a systematic means to evaluate progress, to highlight both successes and shortcomings, and to prescribe future directions.

A recently published case study – “Changing Computing Curricula in African Universities: Evaluating Progress and Challenges via Design-Reality Gap Analysis” – investigates these issues, taking Ethiopian higher education as its case example.  In 2008, Ethiopia decided to adopt a new IEEE/ACM-inspired computing curriculum.  The new curriculum moved degrees from three years to four, and introduced a focus on skills acquisition, more formative assessment, greater diversity in teaching approaches, and more practical engagement with the subject matter.

Most literature and advice about changes to ICT-related curricula have tended to focus on content rather than process.  As a result, there has been a lack of systematic guidance on implementing curriculum change, particularly on evaluating it.

In the Ethiopian case, the design-reality gap model was brought into play since it has a track record of helping evaluate ICT-related projects in developing countries.  The explicit objectives and implicit expectations built into curriculum design were compared with the reality found after implementation.  This enabled assessment of the extent of success or failure of the change project, and also identification of those areas in which further change was required.

The gaps between design and reality were assessed along eight dimensions, summarised by the OPTIMISM acronym: objectives and values; processes; technology; information; management systems and structures; investment resources; staffing and skills; and milieu.
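To make the assessment mechanics concrete, here is a minimal sketch in Python.  The study itself contains no code: the class name, the 0–10 gap-rating scale (borrowed from earlier design-reality gap work), and the example entry are all illustrative assumptions, not the study’s instrument.

```python
from dataclasses import dataclass

# The eight OPTIMISM dimensions along which design-reality gaps are rated.
OPTIMISM_DIMENSIONS = [
    "Objectives and values",
    "Processes",
    "Technology",
    "Information",
    "Management systems and structures",
    "Investment resources",
    "Staffing and skills",
    "Milieu",
]

@dataclass
class GapAssessment:
    """One dimension's design expectation, observed reality, and gap rating."""
    dimension: str
    design: str   # what the curriculum design assumes or requires
    reality: str  # what field visits and interviews actually found
    gap: int      # 0 = design and reality match; 10 = complete mismatch

    def __post_init__(self) -> None:
        if self.dimension not in OPTIMISM_DIMENSIONS:
            raise ValueError(f"unknown dimension: {self.dimension}")
        if not 0 <= self.gap <= 10:
            raise ValueError("gap rating must be between 0 and 10")

# Example entry, paraphrasing the technology findings described below;
# the numeric rating is illustrative, not a figure from the study.
labs = GapAssessment(
    dimension="Technology",
    design="specialist computing labs plus general-purpose classrooms",
    reality="basic equipment shared with all other degree programmes",
    gap=8,
)
```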

Using field visits to nine universities and interviews with 20 staff, structured around the OPTIMISM checklist, the evaluation charted the extent to which reality – some 18 months after the curriculum change guidance was issued by the Ministry of Education – matched the design objectives and expectations.

The evaluation found significant variation in the size of the design-reality gap across the checklist dimensions.

For example, the new curriculum expected a combination of:

  • Specialist computer classrooms to support advanced topics within the subject area, and
  • General-purpose computer classrooms to teach computer use and standard office applications to the wider student body.

Yet in most universities, there were no specialist computing labs, and ICT-related degrees had to share relatively basic equipment with all other degree programmes.

Similarly, the spotlight that curriculum change shone on new student skills had tended to throw into shadow the new university staff skills that were an implicit design requirement for change to be effective.  The evaluated reality was one in which a largely dedicated and committed teaching community was hampered by the limitations of its members’ prior educational experience and by a lack of computing qualifications and experience.

But progress in other areas had been much better.  The national-level environment (milieu) had changed to one conducive to curriculum change.  Formally, two new Educational Proclamations had been issued, supporting new teaching methods and new learning processes; and two new public agencies had been created to facilitate wider modernisation in university teaching.  Informally, Ministry of Education officials were fully behind the process of change.

Similarly, university management systems and structures had been able to change, assisted by the flexible approach to structures found particularly in Ethiopia’s new universities, and by a parallel programme of business process re-engineering within all universities.

Evaluation using the design-reality gap model was therefore a means of measuring progress, but it was also a means of identifying those gaps that persisted and needed further action.  It thus led, for example, to recommendations to ring-fence a capital fund for technology-related investments; to redirect some resources from undergraduate to postgraduate programmes in order to build the necessary staffing infrastructure; and to reconsider some curriculum content to make it more Ethiopia-specific (in other words, to change the design so as to bring it closer to local realities).
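This gap-ranking step lends itself to a small code illustration.  Continuing the sketch above, the ratings here are invented to echo the qualitative findings already described – large gaps on technology and staffing, small ones on milieu and management – and are not figures taken from the study; the threshold is likewise an assumption.

```python
# Illustrative gap ratings (0-10) -- not the figures reported in the study.
ratings = {
    "Objectives and values": 4,
    "Processes": 5,
    "Technology": 8,
    "Information": 5,
    "Management systems and structures": 3,
    "Investment resources": 7,
    "Staffing and skills": 8,
    "Milieu": 2,
}

# Rank dimensions by gap size and flag those needing further action.
ACTION_THRESHOLD = 6
for dimension, gap in sorted(ratings.items(), key=lambda item: -item[1]):
    status = "action needed" if gap >= ACTION_THRESHOLD else "on track"
    print(f"{dimension:34} {gap:2d}  {status}")
```

Ranking the dimensions in this way is what turns the evaluation from a scorecard into an agenda: the largest gaps point directly at where recommendations are needed.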

There were challenges in using the design-reality gap model to evaluate curriculum change: allocating issues to particular OPTIMISM dimensions, and drawing out the objectives and expectations along all eight dimensions.  Overall, though, the model provided a systematic basis for evaluation – one that assured comprehensive coverage, and one through which findings could be readily summarised and communicated.

The full case study can be found here.  Pointers to other materials on computer science curriculum change in developing countries are welcome, including materials specifically on the evaluation of such changes.
