
Archive for the ‘Researching ICT4D’ Category

The Research Agenda for IT Impact Sourcing

So, what is “impact sourcing” and why is it important?

It is part of a continuum of approaches that clients can take when they outsource IT-related work to bottom-of-the-pyramid suppliers, summarised in Figure 1, adapted from a previous blog entry on IT sourcing from the BoP:

  • Exploitative outsourcing seeks to bear down on wages and working conditions in order to minimise costs and maximise profits.
  • Commercial outsourcing is a mainstream approach that reflects the steady diffusion of outsourcing from cities to large towns to small towns and beyond.
  • Ethical outsourcing (also known as socially-responsible outsourcing) takes commercial outsourcing and requires that it meets certain minimum standards, typically relating to labour practices but also starting to include environmental issues.
  • Social outsourcing (also known as developmental outsourcing) differs from ethical outsourcing as fair trade differs from ethical trade.  Ethical outsourcing involves existing commercial players with either a commitment to or measurement of adherence to standards.  Social outsourcing involves new non-market intermediaries who sit between the client and the BoP supplier.

Figure 1: BoPsourcing Approaches

As shown in the diagram, “impact sourcing” is a rather loose agglomeration of a number of these models, defined as “employing people at the base of the pyramid, with limited opportunity for sustainable employment, as principal workers in outsourcing … to provide high-quality, information-based services to domestic and international clients” in order “to create sustainable jobs that can generate step-function income improvement”.

Impact sourcing received a significant fillip in 2011 when the Rockefeller Foundation released its report on “Job Creation Through Building the Field of Impact Sourcing” which suggested that this activity was already well established in countries like India, South Africa and Kenya.  (The definitional quotes above are taken from p2 of that report.)

Report authors Monitor estimated that impact sourcing was already a US$4.5 billion market employing 144,000 people and “has the potential to be a $20 billion market by 2015, directly employing 780,000 socio-economically disadvantaged individuals”.  Rockefeller has subsequently set about funding and encouraging significant growth in this market.

The various terminologies can be confusing and, personally, I prefer the more immediately-meaningful “BoPsourcing”.  However, this new model is clearly already sizeable, and likely to be growing fast in future.  It also – despite the absence from the name – has IT as a foundation: all these types of outsourcing are IT-based and IT-focused whether they involve data entry, digitisation, back-office processing, search engine optimisation support, etc.

In that case, where is the research on impact/BoP sourcing?  The answer is: almost entirely absent as yet.  A rare exception is the journal article “Social Outsourcing as a Development Tool”, which traces the developmental impact of one initiative using this new model.

In that case, what research should we be doing: what is the impact sourcing research agenda?

A helpful guide comes from two articles recently published in the Journal of Information Technology by Mary Lacity and colleagues: “A Review of the IT Outsourcing Empirical Literature and Future Research Directions” and “Business Process Outsourcing Studies: A Critical Review and Research Directions”.  These papers review the literature to date on IT outsourcing overall, and on BPO specifically, summarise that literature in an overview model, and propose a future research agenda.

Figure 2 – from the first article – summarises the review of IT outsourcing research (including overlaps with BPO research), which boils down to the factors which affect outsourcing decisions by client firms (e.g. whether to outsource or not; or what type of contract to use), and the factors which affect the outcomes of outsourcing (typically the outcomes for the client firm or its relationship with the supplier).

Figure 2: Review of IT Outsourcing Research

Given the lack of existing work on impact sourcing, all these relations are yet to be investigated, so Figure 2 already sets a sizeable research agenda.  However, we can tease out more in three ways.

First, because Lacity et al lump rather a lot together into the “outcomes” category.  The nature of the client-supplier relationship is better understood as part of the process by which outcomes are achieved.  From this, we can identify a body of outsourcing process research that could be applied to impact sourcing – from the “COCPIT” approach to maximising client-supplier relations in IT outsourcing, to work on the development of intermediaries in IT outsourcing relations.  Treating decisions as key inputs, the research agenda can be shaped around an Input – Process – Outcome structure.

Second, because the Lacity et al map is of past research.  Their papers also identify generic IT outsourcing research priorities for the future that will apply equally to impact sourcing, including the effect of broader environmental factors on client decisions, such as public attitudes; the capabilities required within suppliers; and the different financial and business models being used.

Third, because impact sourcing is different from mainstream outsourcing: it involves different suppliers, often different intermediaries, different business models, different objectives, etc.  This adds a set of additional research agenda items not previously identified, such as:

  • Needs and means for building capabilities within BoP suppliers.
  • A broader typology of business models that spans the boundaries of traditional business and traditional development.
  • The requirement to judge business models in terms of their accessibility (to lower-income groups), ethicality (e.g. providing a decent income for the suppliers involved) and sustainability (for BoP suppliers, their clients and the intermediaries).
  • Understanding that clients may want more than just a financial bottom-line outcome from impact sourcing.
  • Analysing the developmental outcomes of impact sourcing, including the effect on the livelihoods of individual suppliers.

Putting all this together, we get the research agenda summary shown in Figure 3.

Figure 3: Impact Sourcing Research Agenda

If this research is to be done well, in a way that adds lasting knowledge, it must be well-theorised.  Dealing fully with this issue would require pages of text, but we can identify some examples:

  • Inputs and Processes: transaction cost economics can provide a quantitative basis for exploring decisions and business models; resource-/capabilities-based perspectives on organisations offer a more qualitative route (see Mahnke et al 2005).
  • Outcomes: the livelihoods framework or Sen’s capability approach can be used to assess the developmental effects of impact sourcing.

Beyond these initial pointers, though, there are many other theoretical foundations waiting to be used.

If you identify some gaps here – i.e. some other priority research issues that need to be addressed, or some other theoretical models that will be appropriate to apply to impact sourcing – then do add your thoughts.


The First e-Government Research Paper

30 April 2011

Who wrote the first research paper about e-government?

I’m going to nominate W. Howard Gammon writing in Public Administration Review in 1954.  Please comment with earlier nominations, but otherwise, W. Howard Gammon becomes the godfather of e-government research.

Of course Gammon’s review article, “The Automatic Handling of Office Paper Work”, doesn’t mention e-government: according to Heeks & Bailur’s “Analyzing e-Government Research”, “The term ‘electronic government’ seems to have first come to prominence when used in the 1993 U.S. National Performance Review, whereas ‘e-government’ seems to have first come to prominence in 1997.”

However, Gammon is writing about the use of ICTs in the public sector: which is a common definition of e-government.  Hence, his is an article about e-government, even though computing was just in its infancy with, as he notes, some technical literature available but very little written for a management audience and nothing – until his review article – for a public management audience.

In some ways things were very different then.  Even by around 1990, there were more than 1 million computers in use across the US federal government.  Back in 1954, there were roughly forty computers installed in total, half “large-scale” such as the UNIVAC I (weight 13 tonnes, c.2,000 operations per second, memory <1kb; cost c.US$10m in today’s terms) and half punch-card-based “baby computers” such as the IBM-604 (c.100 cards per minute, program of up to 40 steps, monthly rental cost c.US$5,000 in today’s terms plus a shift team of 2-10 supervisors and operators).  Most were in the Department of Defense with a few in the Atomic Energy Commission, Census Bureau and Bureau of Standards.  There was a pilot application to automate selection of optimum procurement bids, and plans to apply computers for use in air traffic control, taxation and weather forecasting.  These applications were part of a broader expenditure (in 1952) of more than US$1.5billion (c.US$12billion in today’s terms) on “adding, accounting and other business machines” within US public and private sectors combined; by 2008, total spending on ICTs in the US was roughly US$1.2trillion annually – a one hundred-fold increase in spending on ICTs.

However, the more striking thing that echoes across the decades is not how different but how similar the issues in the 1950s were to those we still face today.  The following examples illustrate:

a) Skill Set: E-Gov Needs Systems Skills More Than Technical Skills: “…it is not necessary to know how to make, or even to repair, these machines in order to make use of them.  For the public administrator … the emphasis needs to be placed on how and when to use these new devices” (p63).  Just so, for those learning today about e-government, understanding technical aspects is of relatively limited importance; much more important is to understand the application of the technology.  Put another way, e-government must be approached from an information systems not an information technology perspective: “it is a systems job which depends more on knowledge of what must be done, and why, than on knowledge of what makes electronic computers tick.” (p73).

b) Skill Set: E-Gov Needs Hybrids: a socio-technical approach is required that combines understanding of the ‘business’ of government with knowledge about the application of technology.  Such a combination could be undertaken within a team: a “joint effort between the business managers and the engineers, so that engineers may learn enough about the businessman’s problem to translate the requirements of the job into machine procedure and so that management staff may learn enough about the capabilities and limitations of electronic machines to allow management staff to visualize how the new devices can be applied and how the … organization must be changed to take full advantage of the capabilities of the new equipment.” (p67)  Such a combination might also be effected within a single person to create a socio-technical “hybrid” individual.  But in that case, it will be far easier to hybridise a mainstream manager than an IT person: “As the Metropolitan Life Insurance Company found in its study, it is far easier to teach company management specialists what they need to know about the possibilities and limitations of electronic data processing than it is to teach electronic engineers about the internal operating problems of the life insurance business.” (p73).  The exact same findings were reported in the 21st Century for e-government in Chapter 12 of Heeks’ book “Implementing and Managing eGovernment”.

c) Implementation: E-Gov Needs Re-Engineering Not Just Automation: More than thirty-five years before Hammer’s exhortations to stop “paving the cowpaths” and stop “automating the mess”, Gammon had already identified the limited gains to be made from automation, and the need to start improvements by re-engineering the business processes of government: “One quick generalization may be made: the introduction of an electronic information processing system is not like buying a new adding machine which can be plugged in as part of an existing established clerical routine. It would be foolish and wasteful to make the large investment required to install electronic methods without first conducting a careful study which begins with considering the basic objective of the operation” (p73) … “The effective application of electronic methods in a given organization requires a rethinking of its organization and procedures. When electronic methods are applied, many of the intermediate reports and steps in the transmission of information become unnecessary and should be eliminated.” (p72)

d) Implementation: E-Gov Needs Top Management Support: “Rapid progress can be made during such an investigation only if the management representatives are high enough in the organization to make the broad decisions regarding the methods of operation” (p67).  In the same way, more recently, top management support is still identified as a key necessity in successful e-government projects and its absence as a key cause of e-government project failure.

e) Implementation: Politics Matters in E-Gov: “there are organizational, procedural, economic, and social problems which must be resolved before automatic operation of … an office can be realized.” (p63).  Some of these problems relate to internal politics given the danger that ICTs in government will cause “the disturbance of established bureaucratic empires” (p72), thus making political factors an important cause of e-government failure.  This further explains why top management support is needed in implementation of e-government: “It also requires a broad point of view which looks to the good of the organization as a whole without being too much concerned about the effects of changes in methods on particular vested interests in the agency.” (p73).

f) Impact: E-Gov Affects Clerical Not Professional and Managerial Jobs: for lower-level clerical jobs, ICT brought the threat of “lowered prestige, relative decrease in real income, threat of unemployment, and routinization of many office skills” (p66).  That has come to pass: around 25% of federal white-collar employees were in clerical/typing work in 1952.  By the mid-2000s that had fallen to roughly 7%, largely as a result of new technology[1].  Meanwhile, skilled professionals would either be upskilled: “our accountants will then be free to do the more important job of analyzing and interpreting financial reports for management.” (p66) or unaffected: “there is no real possibility that the executive or the top administrator will become obsolete as the result of foreseeable advances in the use of electronic equipment.” (p73).  And there were already signs that shortages of ICT professionals would slow the rate of e-government: “the shortage of qualified experts to design, build, program, and service these electronic data processing systems will keep this possible revolution from taking place rapidly.” (p67).  More than fifty years later, Heeks (2006:101) still writes “The dearth of competencies is a major brake on the spread of e-government”.

g) Impact: E-Gov Impact Assessment Fails to Account for Total Cost of Ownership: there have always been ambitious claims for ICT in government e.g. that it “can make substantial savings and render better service” (p63).  But on the savings side, e-government impact calculations often focus just on cost savings (e.g. of labour) but fail to include the costs of ICT.  When the latter are taken into account, overall gains can disappear.  Gammon’s paper is suggestive of this: he reports a case where preparation time for monthly reports fell from forty person-days to six to eight hours.  Given clerical costs (in 1954 prices) of around US$200 per month, that would represent savings of about US$400 per month.  Yet according to P.B. Hansen in “Classic Operating Systems” the IBM Card Programmed Calculator on which this saving was achieved cost US$1,800 per month in rental, to which would have to be added the costs of calculator operations staff.  Government’s tendency for lavish spending on ICTs was also already in evidence with reference to a Navy-organised symposium on “moderately-priced” computers; that criterion being defined as those costing (in today’s terms) less than US$1million.

Gammon’s paper is as much a review of then-current ideas about computing, drawn largely from the private sector, for a public sector audience as it is about computing in the public sector.  However, that public-sector audience and focus mean it still qualifies for recognition as the first e-government journal article.

How, overall, should we read it?  I invite you to choose what it reflects:

– “Plus ça change, plus c’est la même chose”

– The failure of e-government practitioners to take note of key lessons known right from the start of IT in the public sector, given the continuing absence in e-government projects of many of the skill and implementation factors identified all those years ago.

– The failure of e-government researchers to find much new to say: you can see these same issues still in the conclusions of many of today’s e-gov journal articles.

Click here to link to a blog entry on the first application of e-government in a developing country.


[1] Though total US federal employment in 2009 – just under 2.1 million – was almost exactly what it was in 1952; albeit with a near-halving in DoD numbers.

Development Studies Journal Ranking Table

17 June 2010

The following represents a first attempt at a “league table” for development studies journals.

| Rank | Journal | Citation Score |
|---|---|---|
| 1 | World Development | 6.04 |
| 2 | Journal of Development Studies | 4.90 |
| 3 | Oxford Development Studies | 4.06 |
| 4 | Development Policy Review | 3.20 |
| 5 | Studies in Comparative International Development | 2.40 |
| 6 | Sustainable Development | 2.39 |
| 7 | European Journal of Development Research | 1.90 |
| 8 | Development and Change | 1.89 |
| 9 | Information Technology for Development | 1.58 |
| 10 | Information Technologies and International Development | 1.55 |
| 11 | Journal of International Development | 1.46 |
| 12 | Development | 1.33 |
| 13 | Third World Quarterly | 1.30 |
| 14 | Public Administration and Development | 1.21 |
| 15 | Development in Practice | 1.03 |
| 16 | Progress in Development Studies | 0.88 |
| 17 | Electronic Journal of Information Systems in Developing Countries | 0.81 |
| 18 | African Development Review | 0.79 |
| 19 | Gender and Development | 0.58 |
| 20 | Enterprise Development and Microfinance | 0.45 |
| 21 | Canadian Journal of Development Studies | 0.45 |
| 22 | IDS Bulletin | 0.40 |
| 23 | Information Development | 0.37 |
| 24 | Forum for Development Studies | 0.17 |
| 25 | Journal of Third World Studies | 0.11 |

Comparator Journals

| Journal | Citation Score |
|---|---|
| Journal of Development Economics | 10.90 |
| Human-Computer Interaction | 4.06 |
| Environment and Planning D | 3.42 |
| Information Systems Journal | 2.89 |
| The Information Society | 1.64 |
| Mountain Research and Development | 0.91 |

Basis

– Selection was on the basis of development studies journals that appear in various other tables or lists.  However, development economics journals (inc. Economic Development and Cultural Change, Journal of Development Economics, Review of Development Economics, and The Developing Economies) were not included.  If you have suggestions for additions (or deletions), then let me know.

– Citation score is calculated by taking the papers published in each journal in 2008 and identifying how many times each paper is cited in Google Scholar.  The average number of cites per paper was then divided by the average number of years since publication.  Very roughly, then, the score equates to the average number of GS citations per paper per year (see the code sketch at the end of this section).

– All papers published in 2008 were used if less than 20 were published; a sample of at least 20 building outwards from the mid-year issues was used if more than 20 were published.

– One anomalous paper, with over 10 times the citations of any other (a pattern not seen in any other journal), was omitted from African Development Review.  Had this been included, ADR would place seventh.

– This exercise will be repeated and expanded in future years.  What is presented here should only be seen as a first, fairly rough-and-ready set of figures.  The original data used for the calculations can be found here.
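To make the calculation concrete, here is a minimal sketch of the citation score in code.  It is illustrative only: the citation counts are invented stand-ins for the raw data linked above, and papers published in 2008 are assumed to be roughly two years old at the time of counting.

```python
# Minimal sketch of the citation score described above: average GS cites
# per paper, divided by average years since publication.
# The data below is hypothetical, not taken from the table.

def citation_score(cites_per_paper, avg_years_since_publication):
    """Roughly: average GS citations per paper per year."""
    avg_cites = sum(cites_per_paper) / len(cites_per_paper)
    return avg_cites / avg_years_since_publication

# e.g. a sample of ten papers published in 2008, counted in mid-2010:
sample_cites = [12, 0, 7, 3, 0, 9, 4, 1, 6, 2]
print(round(citation_score(sample_cites, 2.0), 2))
```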

Notes

– The raw figures shown here should not be compared with the impact factor scores under Planning and Development provided in ISI’s Journal Citation Reports.  The rankings can be compared.

– Different disciplines have different citation habits and norms.  Specifically, if economists cite more highly, then those development studies journals that include a greater proportion of development economics papers may gain a greater overall citation score.

– Conversely – and requiring further investigation – in compiling the figures, I got some sense that papers in special issues tend to receive fewer citations.  Journals that have a lot of special issues may receive a lower overall citation score.

Reflections

– These average figures provide no guidance on whether your individual paper would be cited more highly if published in one journal or another.  However, the rankings could be used to provide guidance or evidence on the general impact of a selected journal.  (Of course recognising that overall impact is about more than just citations.)

– The figures suggest that, beyond the obvious top two of JDS and World Development, there may be some mismatch between previous subjective ratings and actual impact.  For example, Oxford Development Studies and Development Policy Review rank 3rd and 4th here, yet are unrated by most other journal rating schemes.

– There is a moderate mismatch with the ISI JCR 2008 impact factor ranking.  Most notably, four of the top ten journals here do not appear at all in the ISI list including the two top-cited ICT-for-development journals.

Other Data

– The table below gives details of other ranking and rating data on development studies and some development economics journals.

Scales run from high to low as shown in each column header; a blank cell means the journal does not appear in that list.

| Journal | Aston 2008 (4->0) | CNRS 2008 (1*->4) | Ideas 2010 (/731) | SJR 2010 (/118) | WoK 2010 (/43) | ABDC 2010 (A*->C) | ABS 2010 (4->1) | SoM 2010 (4->1) | Heeks 2010 (/25) |
|---|---|---|---|---|---|---|---|---|---|
| African Development Review |  |  |  | 65 | 43 |  |  | 2 | 18 |
| Canadian Journal of Development Studies |  |  |  | 78 | 42 |  |  |  | 21 |
| Development |  |  | 666 | 28 |  |  |  |  | 12 |
| Development and Change | 2 | 2 |  | 15 | 19 | B | 2 |  | 8 |
| Development in Practice |  |  |  | 32 |  |  |  |  | 15 |
| Development Policy Review |  |  | 270 | 10 | 8 |  |  |  | 4 |
| Economic Development and Cultural Change |  |  | 117 |  | 24 | A | 3 | 4 |  |
| Electronic Journal of Information Systems in Developing Countries |  |  |  |  |  |  |  |  | 17 |
| Enterprise Development and Microfinance |  |  |  |  |  |  |  |  | 20 |
| European Journal of Development Research |  |  | 438 | 48 |  |  |  |  | 7 |
| Forum for Development Studies |  |  |  |  |  |  |  |  | 24 |
| Gender and Development |  |  |  | 73 |  |  |  |  | 19 |
| IDS Bulletin |  |  |  | 70 | 37 |  |  |  | 22 |
| Information Development |  |  |  |  |  |  |  |  | 23 |
| Information Technologies and International Development |  |  |  |  |  |  |  |  | 10 |
| Information Technology for Development |  |  |  |  |  |  |  |  | 9 |
| Journal of Development Economics |  |  | 43 |  | 36 | A* | 3 | 4 |  |
| Journal of Development Studies | 2 | 2 | 152 | 2 | 26 | A | 3 | 4 | 2 |
| Journal of International Development | 1 | 3 | 292 | 22 |  | B | 1 | 1 | 11 |
| Journal of Third World Studies |  |  |  | 86 |  |  |  |  | 25 |
| Oxford Development Studies |  |  | 192 | 58 |  |  |  | 1 | 3 |
| Progress in Development Studies |  |  |  | 30 |  |  |  |  | 16 |
| Public Administration and Development |  |  |  | 62 | 39 | A | 2 | 2 | 14 |
| Review of Development Economics |  |  | 129 | 26 | 32 |  |  | 1 |  |
| Studies in Comparative International Development |  |  |  | 23 | 31 | A |  |  | 5 |
| Sustainable Development |  |  |  | 9 | 11 |  |  |  | 6 |
| The Developing Economies |  |  | 474 |  | 35 | B |  |  |  |
| Third World Quarterly |  | 2 |  | 29 | 30 | A | 2 |  | 13 |
| World Development | 3 | 1 | 134 |  | 9 | A | 3 | 3 | 1 |


Key
 
– ABS – UK Association of Business Schools: http://www.the-abs.org.uk/?id=257

– Ideas – citation and download data from the RePEc project: http://ideas.repec.org/top/top.journals.simple.html (economics and finance research)

– SJR – Scopus-based citation ranking: http://www.scimagojr.com/journalrank.php?category=3303&area=0&year=2008&country=&order=sjr&min=0&min_type=cd (development journals)

– SoM – Cranfield School of Management: https://www.som.cranfield.ac.uk/som/dinamic-content/media/SOM%20Journal%20Rankings%202010%20-%20alphabetical.pdf

– WoK – 2008 impact factor in ISI Journal Citation Reports under Planning and Development

– All other data from Harzing’s Journal Quality List: http://www.harzing.com/jql.htm

ICT4D Conference Papers: Impact and Publication Priority

28 April 2010

In two previous blog entries, I noted from both general ICT4D citation data in ISI’s Web of Knowledge (WoK) and from specific citation data on my own ICT4D publications that conference papers receive by far the lowest citations per paper compared to any other type of publication (journal article, book chapter, online working paper, etc), and that some 90% are uncited.

The conclusion was that conference papers should be the lowest priority as a form of publication for those working in the ICTs-for-development field if citation impact was key (recognising there are many other good reasons for presenting at a conference).

BUT . . . there were two limitations to this earlier data: the first set is restricted to the WoK, whereas Google Scholar is arguably a better reflection of ICT4D impact; the second set reflects my own publication only in social science conferences.

So the question arises: what about publishing in more technical or in multi-disciplinary conferences?  This issue applies particularly for those working at the more technical end of the ICT4D field because norms in science/technology fields are very different from those in social science.

Overall, 21% of science publication is in conference proceedings; only 8% in social science [1].  And the importance of conferences is particularly acute in computer science.  One recent estimate [2] finds that “in Computer Science, proceedings volumes rather than scientific journals constitute the main channel of written communication”.  Comparing conference papers and journal papers in CS, the two have a roughly equal average number of citations per paper, and in the average conference paper two-thirds of citations are to other conference papers, only one-third to journal articles.  (Non-citation rates are also similar – about half of all CS conference papers and journal articles are uncited.)

So armed with this, and focusing on Google Scholar (GS) rather than WoK, I will examine the citation impact of a variety of conferences, and compare them both with each other, and (below) with ICT4D journal publication.  (Table is in reverse chronological order.)

| Conference | Type | Average GS Citations Per Paper | Impact Score | Citation Score |
|---|---|---|---|---|
| IFIP WG9.4 2009 | ICT4D Soc. Sci. | 0.00 | 0.00 | 0.00 |
| ICTD2009 | ICT4D Multi | 0.81 | 0.65 | 0.81 |
| ICTD2007 | ICT4D Multi | 6.27 | 3.56 | 2.73 |
| IFIP WG9.4 2007 | ICT4D Soc. Sci. | 1.26 | 0.21 | 0.43 |
| CHI2007 | Comp. Sci. | 20.39 | 7.03 | 7.03 |
| ICIS2006 | Info. Systems | 1.73 | 0.39 | 0.52 |
| ICTD2006 | ICT4D Multi | 13.4 | 3.43 | 3.43 |
| EADI2005 | Devel. Studies | 0.06 | 0.00 | 0.01 |
| DSA2005 (ICT4D papers only; no link to conference) | Devel. Studies | 1.00 | 0.06 | 0.22 |
| IFIP WG9.4 2005 | ICT4D Soc. Sci. | 1.07 | 0.07 | 0.22 |

What does this data show? (other than “not much” due to the very small sample size!)

First, that the average paper in a social science conference (whether in ICT4D or development studies or information systems) is hardly cited.  This supports the data from analysis of my own ICT4D publications, suggesting very low citation impact from publishing in social science conferences.

Second, as noted above, that average citation rates in technical, computer science conferences do seem to be much, much higher.

Third, that average citation rates in multi-disciplinary conferences, such as the ICTD conferences that span both the technical and the social, are somewhere in between.

Conclusions About Conferences

Before drawing some conclusions, just a reminder of what you can NOT conclude from this data.  You cannot conclude that, if you present your paper in a particular conference type, you will achieve the average citation rate.  The determinants of how many citations your specific paper gets are multi-factorial, including research quality, topic, timing, author identity and networks, etc. (See earlier blog entry on what constitutes good ICT4D research.)

BUT . . . one of those factors will be conference type.

So what you can use the data to help answer is this question: if I already have my paper then, citation-wise, which is the best conference outlet I can choose?

The answer appears to be: the more technical, the better.

There is even a tiny data nugget on that.  Let’s compare the social science ICT4D papers that were submitted to ICTD2007 and IFIP WG9.4 2007.  Prima facie, there is no clear reason for suspecting any major difference in quality – both conferences use refereeing and review processes, and there are some similarities in topics too.

The social science (IFIP) conference had an average 1.26 citations per paper (1.76 counting only those available online).  The social science papers at the multi-disciplinary (ICTD) conference had an average 2.86 citations per paper.  That suggests at least the possibility of a “citation uplift” effect from presenting a social science ICT4D paper at a conference with some technical papers/culture.  (Melissa Ho notes it would be interesting to do a citation map to see if this occurs due to citation across disciplines.)

Conferences vs. Journals

The table below compares the two leading ICT4D conferences with the three leading ICT4D journals (data mainly from earlier blog entry on ICT4D journal ranking).  The results suggest that – all other things being equal – publication of a paper in certain ICT4D conferences can be on average more impactful than publication in the leading specialist journals.  But you need to pick your conferences.

And, to repeat, there may be many factors beyond citation to consider in choosing conferences, and in choosing conference vs. journal, including audience, ability to network, location, quality thresholds, etc.  And the specific impact on your individual paper is uncertain.

| Outlet | Type | Average GS Citations Per Paper | Impact Score | Citation Score |
|---|---|---|---|---|
| ICTD2007 | Conf. Multi | 6.27 | 3.56 | 2.73 |
| Information Technology for Development 2008 | Journal | 2.85 | 1.35 | 1.58 |
| Information Technologies and International Development 2008 | Journal | 2.79 | 2.08 | 1.55 |
| Electronic Journal of Information Systems in Developing Countries 2008 | Journal | 1.45 | 1.00 | 0.81 |
| IFIP WG9.4 2007 | Conf. Soc. Sci. | 1.26 | 0.21 | 0.43 |
| ICTD2006 + ITID | Conf. Multi + Journal | 24.4 | 7.88 | 7.88 |

What about that final row?  That looks at those papers (excluded from earlier ICTD2006 calculations) that were presented at a conference and then subsequently published in a journal.  These were identified as the “best” papers at the conference.  That will affect their citation level, and limit the conclusions one can draw.  But a likely and very obvious point is that combining conference and journal publication in this way increases citations.

The Small Print

As a reminder from the earlier blog entry on ICT4D journal ranking:

  • Impact score = (average cites per paper × (1 − ((uncited − unlisted papers)/2) − unlisted papers) / average no. years since publication) × conference paper accessibility
  • Citation score = average cites per paper / average no. years since publication
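Here is a minimal sketch of those two formulas in code.  It assumes per-paper GS citation counts for the papers listed in Google Scholar, a count of papers not listed in GS at all, and an accessibility weighting (e.g. 1.5 for an open access outlet); it also assumes “average cites per paper” is taken over all papers, listed and unlisted.  All example figures are invented, not taken from the tables.

```python
# Minimal sketch of the impact and citation scores defined above.
# "listed_cites": GS citation counts for papers listed in Google Scholar.
# "n_unlisted": papers not listed in GS at all (assumed zero citations).

def citation_score(listed_cites, years_since_publication):
    """Average GS cites per paper, per year since publication."""
    return (sum(listed_cites) / len(listed_cites)) / years_since_publication

def impact_score(listed_cites, n_unlisted, years_since_publication,
                 accessibility=1.0):
    """Citation score discounted for uncited and unlisted papers, then
    weighted by outlet accessibility (e.g. 1.5 for open access)."""
    n = len(listed_cites) + n_unlisted
    avg_cites = sum(listed_cites) / n
    uncited = (sum(1 for c in listed_cites if c == 0) + n_unlisted) / n
    unlisted = n_unlisted / n
    discount = 1 - ((uncited - unlisted) / 2) - unlisted
    return (avg_cites * discount / years_since_publication) * accessibility

# e.g. ten conference papers (two uncited), one further paper not listed
# in Google Scholar, three years after the event, fully accessible online:
print(round(impact_score([4, 0, 2, 0, 7, 1, 3, 5, 2, 6], 1, 3.0), 2))
```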

The raw data is here if you wish to footle around with your own calculations.

More authors means more citations [3], but the variation due to author numbers is nowhere near large enough to explain the variation in citation averages seen, especially as the correlation coefficient between citations and authors is perhaps around one-third.  (For 2007 conferences, CHI = 3.1 authors per paper on average; ICTD = 2.8 authors, IFIP = 2.3 authors)
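If you want to check such a relationship on your own data, a minimal sketch of the correlation calculation follows; the author counts and citation counts below are invented for illustration, not taken from the conference data above.

```python
# Hypothetical per-paper author counts and GS citation counts, standing in
# for the raw conference data linked above.
authors = [3, 1, 2, 4, 2, 1, 5, 2, 3, 2]
cites = [2, 5, 0, 9, 1, 0, 3, 12, 4, 1]

n = len(authors)
mean_a = sum(authors) / n
mean_c = sum(cites) / n
cov = sum((a - mean_a) * (c - mean_c) for a, c in zip(authors, cites))
var_a = sum((a - mean_a) ** 2 for a in authors)
var_c = sum((c - mean_c) ** 2 for c in cites)
r = cov / (var_a ** 0.5 * var_c ** 0.5)  # Pearson correlation coefficient
print(f"correlation between author numbers and citations: {r:.2f}")
```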

Supporting the idea of different research and citation cultures: for four of the papers presented at ICTD2006 and then published in ITID journal – two technical, two social science – one could tentatively identify how many citations came from the conference paper, and how many from the journal article.  For the two technical papers, 87% of the citations were from the conference paper version.  For the two social science papers, 89% of the citations were from the journal article version.

There are some general caveats: conference papers appear in various guises – more so than journal articles – as working papers, in institutional repositories, and as journal articles.  I’ve done all I can to eliminate this: only selecting those papers at a conference that were listed in Google Scholar in their conference guise, and ignoring them if there was any uncertainty.  But, nonetheless, I still regard the conference paper citation data as less robust than that for journal papers.

Thanks to Kentaro Toyama and Melissa Ho for sparking this blog entry.

[1] Bourke, P. & Butler, L. (1996) Publication types, citation rates and evaluation, Scientometrics, 37(3), 473-494

[2] Moed, H.F. & Visser, M.S. (2007) Developing Bibliometric Indicators of Research Performance in Computer Science, Centre for Science and Technology Studies, Leiden University http://www.cwts.nl/pdf/NWO_Inf_Final_Report_V_210207.pdf

[3] Sooryamoorthy, R. (2009) Do types of collaboration change citation?, Scientometrics, 81(1), 177-193

ICT4D Journal Ranking Table

14 April 2010

In earlier blog postings on the best type of publication outlet for ICT4D research and on ICT4D research quality and impact, I surmised that there is value in publishing ICT4D research in specialist ICT4D journals.  But I skirted round the issue of which ICT4D journal to publish in.  Here, then, is an ICT4D journal “league table”:

ICT4D Journal Impact Table

| Rank | Journal | 2005 Score | 2008 Score | Overall Score |
|---|---|---|---|---|
| 1 | Information Technologies and International Development | 2.61 | 2.08 | 2.35 |
| 2 | Electronic Journal of Information Systems in Developing Countries | 3.62 | 1.00 | 2.31 |
| 3 | Information Technology for Development | 2.94 | 1.35 | 2.15 |
| 4 | African Journal of Information and Communication | 1.09 | 0.40 | 0.75 |
| 5 | International Journal of Education and Development Using Information and Communication Technology | 1.01 | 0.40 | 0.71 |
| 6 | Asian Journal of Communication | 1.16 | 0.23 | 0.70 |
| 7 | Journal of Health Informatics in Developing Countries | n/a | 0.43 | 0.43 |
| 8 | Information Development | 0.35 | 0.25 | 0.30 |
| 9 | International Journal on Advances in ICT for Emerging Regions | n/a | 0.26 | 0.26 |
| 10 | African Journal of Information & Communication Technology | 0.25 | 0.04 | 0.15 |
| 11 | South African Journal of Information Management | 0.28 | 0.00 | 0.14 |
| 12 | African Journal of Information Systems | n/a | 0.05 | 0.05 |
| 13 | International Journal of Information Communication Technologies and Human Development | n/a | 0.01 | 0.01 |
| 14 | Asian Journal of Information Technology | 0.01 | 0.00 | 0.01 |
| 15 | Asian Journal of Information Management | n/a | 0.00 | 0.00 |
| – | International Journal of ICT Research and Development in Africa | n/a | n/a | n/a |

Comparator Journals

| Journal | 2005 Score | 2008 Score | Overall Score |
|---|---|---|---|
| World Development | 8.96 | 5.95 | 7.46 |
| Information Systems Journal | 7.62 | 2.71 | 5.16 |
| Human-Computer Interaction | 5.34 | 3.85 | 4.60 |
| The Information Society | 5.98 | 3.10 | 4.54 |
| Journal of International Development | 2.44 | 1.28 | 1.86 |

The ICT4D journals (top part of the table) were selected on the basis of all journals with titles that combine some reference to informatics/ICTs, and some reference to development (as in “international development”) or developing countries or regions dominated by developing countries.

How are the impact scores calculated?  Using the formula: (average cites per paper × (1 − ((uncited − unlisted papers)/2) − unlisted papers) / average no. years since publication) × journal accessibility

So this is a league table of impact, based mainly on the average number of citations per paper, and calculated by taking every paper for the given year published in a journal and seeing how many citations for that paper (if any) are shown in Google Scholar, then averaging the total.  The citation figure is amended to take account of the length of time since publication, and of both the proportion of papers that have no citations, and the proportion of papers that aren’t even listed on Google Scholar.  Because not all impact is accounted for by citation, open access journals – which will attract many more readers – are given an additional weighting of 1.5.  These calculations were undertaken for both 2005 and 2008 for the journals that go back that far, and an average of the two scores taken.

Clearly, there are subjective elements in this and you are welcome to provide a critique.  Here, you can find the raw data, allowing you to create your own ranking formula.  One alternative would be a simple table based just on average citation rates per year, as shown below:

ICT4D Journal Citation Table

| Rank | Journal | 2005 Score | 2008 Score | Overall Score |
|---|---|---|---|---|
| 1 | Information Technology for Development | 2.94 | 1.58 | 2.26 |
| 2 | Electronic Journal of Information Systems in Developing Countries | 2.69 | 0.81 | 1.75 |
| 3 | Information Technologies and International Development | 1.82 | 1.55 | 1.69 |
| 4 | Asian Journal of Communication | 1.19 | 0.40 | 0.80 |
| 5 | African Journal of Information and Communication | 0.87 | 0.44 | 0.66 |
| 6 | International Journal of Education and Development Using Information and Communication Technology | 0.77 | 0.39 | 0.58 |
| 7 | Journal of Health Informatics in Developing Countries | n/a | 0.42 | 0.42 |
| 8 | Information Development | 0.40 | 0.37 | 0.39 |
| 9 | International Journal on Advances in ICT for Emerging Regions | n/a | 0.28 | 0.28 |
| 10 | African Journal of Information & Communication Technology | 0.24 | 0.06 | 0.15 |
| 11 | South African Journal of Information Management | 0.26 | 0.00 | 0.13 |
| 12 | International Journal of Information Communication Technologies and Human Development | n/a | 0.11 | 0.11 |
| 13 | African Journal of Information Systems | n/a | 0.06 | 0.06 |
| 14 | Asian Journal of Information Technology | 0.04 | 0.00 | 0.02 |
| 15 | Asian Journal of Information Management | n/a | 0.00 | 0.00 |
| – | International Journal of ICT Research and Development in Africa | n/a | n/a | n/a |

Comparator Journals

| Journal | 2005 Score | 2008 Score | Overall Score |
|---|---|---|---|
| World Development | 8.96 | 6.04 | 7.50 |
| Information Systems Journal | 7.62 | 2.89 | 5.26 |
| Human-Computer Interaction | 5.34 | 4.06 | 4.70 |
| The Information Society | 5.98 | 3.23 | 4.60 |
| Journal of International Development | 2.49 | 1.46 | 1.97 |

Conclusions

The general conclusion is the same for both league tables: that the three journals EJISDC, ITID and ITforD are head and shoulders above the rest if you wish to publish in an ICT4D specialist journal and if you are interested in citation-related impact when publishing your ICT4D research.

But would you be better advised to publish in a journal in one of ICT4D’s “parent” disciplines?  To allow a comparison, both tables include the same calculations for:

  • Development studies: the top journal (World Development) and a lower-ranked journal (Journal of International Development)
  • Information systems: a top journal (Information Systems Journal) and a mid-ranked journal (The Information Society)
  • Technical informatics/computer science: a top journal (Human-Computer Interaction)

This suggests that these disciplinary journals have on average two to four times the impact of the ICT4D specialist journals.  That average suggests – but does not prove – that your specific article will have a greater impact in these journals.  In addition, academic career kudos is greater for at least the higher-end disciplinary journals.  Only one ICT4D specialist journal – Information Development – is currently listed in ISI’s Web of Knowledge.  One journal (AJC) gets an “A” ranking and six journals (AJICT, EJISDC, ID, ITID, IJEDUICT, SAJIM) get a “C” ranking in an Australian list (see: http://lamp.infosys.deakin.edu.au/journals/index.php?page=alljournals – a fantastic resource on ICT and information systems journals maintained by John Lamp).  But that is the exception: none of the ICT4D specialist journals is ranked in the UK Association of Business Schools journal ranking list (http://www.the-abs.org.uk/), the Association for Information Systems journal ranking list (http://home.aisnet.org/displaycommon.cfm?an=1&subarticlenbr=345), or Anne-Wil Harzing’s journal quality list (http://www.harzing.com/jql.htm).

On the other hand . . .

  • you will only get accepted in the higher-end mainstream journals if your paper is above a certain quality threshold (also true for ICT4D specialist journals, but with the assumption that the threshold is somewhat lower for those journals or, at least, that their rejection rate is lower; of course the quality threshold also varies between ICT4D journals just as it does between other journals);
  • it may take more time and effort to get your paper accepted;
  • it might reach a somewhat different audience; and
  • none of these disciplinary journals is open access (figures from open-access EJISDC suggest typical rates of around 500 downloads per paper per year; figures from subscription-based ITforD suggest typical rates of around 100 accesses per paper per year)

Particularly if you add in the open access argument, and link it to the fact that ICT4D audiences (e.g. practitioners, strategists in developing countries) generally can’t access the subscription-based disciplinary journals, then there remains a logic for publishing ICT4D research in ICT4D journals.

ICT4D Research: How Should I Publish?

31 March 2010

In what form should you publish your ICT4D research for maximum impact?  A book; a book chapter; a journal article – if so what kind of journal; a conference paper?

You are welcome to comment with your own evidence, but I’m going to answer that question through analysis of the publications of my favourite author: me.  Why me?  Because almost everything I’ve published is in the ICT4D field; there are a large number of items, over a long period, and in many different formats; and careful selection of my name at birth ensures very few false positives in citation searches.  But most importantly, I have access to the list of all items I’ve ever published, and so can include those that have never been cited.

The method: dividing out all my publications into different categories.  Then, for each one, identifying how many citations are shown on Google Scholar; and then working out the average number of citations per publication for each category.  The results are as follows:

| Publication Type | Mean Citations per Item | Median Citations per Item | % Items Never Cited | n |
|---|---|---|---|---|
| Single Authored Book | 96 | 48 | 0% | 4 |
| Refereed Journal Article (WoK-listed journal) | 54 | 30 | 6% | 17 |
| Working Paper (available online) | 27 | 8 | 22% | 46 |
| Refereed Journal Article (non-WoK-listed journal) | 9 | 5 | 20% | 15 |
| Report / Handbook (available online) | 6 | 3 | 29% | 14 |
| Book Chapter | 9 | 0 | 55% | 40 |
| Magazine / Professional Journal Article | 1.4 | 0 | 69% | 80 |
| Conference / Seminar Presentation | 0.6 | 0 | 92% | 98 |
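For anyone wanting to run the same analysis on their own publication list, here is a minimal sketch of the method – grouping publications by type and averaging Google Scholar citation counts – with invented data standing in for the real list behind the table above.

```python
from statistics import mean, median

# Hypothetical (publication type, GS citation count) pairs standing in
# for a real publication list.
publications = [
    ("Single Authored Book", 120), ("Single Authored Book", 72),
    ("Refereed Journal Article", 54), ("Refereed Journal Article", 30),
    ("Refereed Journal Article", 0),
    ("Conference / Seminar Presentation", 0),
    ("Conference / Seminar Presentation", 1),
]

# Group citation counts by publication type.
by_type = {}
for pub_type, cites in publications:
    by_type.setdefault(pub_type, []).append(cites)

# Mean, median, % never cited, and n for each category.
for pub_type, cites in by_type.items():
    pct_uncited = 100 * sum(1 for c in cites if c == 0) / len(cites)
    print(f"{pub_type}: mean={mean(cites):.1f}, median={median(cites)}, "
          f"never cited={pct_uncited:.0f}%, n={len(cites)}")
```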

Notes on the data:

– I’ve excluded web sites (like http://www.egov4dev.org) and this blog.  If I did analyse them, putting all such items together, the vast majority (90%+) of individual pages and blog entries would remain uncited; but altogether there are 113 citations, 82 of those from the egov4dev web site.  So a successful web site can be as highly-cited as a book.

– I’ve also edited two books, but not included them in the table for two reasons.  First, because – in a strict sense – I don’t think one should cite an edited book; one should cite from the individually-authored chapters within that book.  Second, because the two had such wildly-divergent fates, “average” makes no real sense: one is cited 9 times; one is cited 363 times.  (And, yes, I have sample checked that those citations refer just to the overall book and not to any individual chapter.)

– That pattern of very skewed data between items – some scoring well, others totally flopping – is found across virtually all types of output.  That’s why the table includes both mean and median scores.

– WoK is the ISI Web of Knowledge; inclusion of a journal in this is a rough quality benchmark.  I could have redone the citation figures using WoK instead of Google Scholar but I’ve looked at the data and the overall pattern is the same; just that citation numbers are around one-quarter of those for GS.

Conclusions From The Data:

– This is data from just one person: caveat lector.

– Trying to work out what differentiates a citation hit from a citation flop, one might think research quality – see separate entry on that – was part of the story (not, of course, that my publications would ever be anything but very high quality!).  But I don’t think it is.  To me, timing looks a big factor.  Write something half-decent in a sub-area that subsequently grows, and you’ll get a lot of cites (and, arguably, have some degree of influence in that sub-area).  Write something great in a mature field and you’ll get many fewer citations (of course in part because you’re then fighting for attention with far more “competing” publications).  The slightly depressing conclusion: if it’s citations you’re after, keep up with research fads and fashions.

– And then what about the answer to the initially-posed question on the type of publication to choose?  I’ve arranged the table in what I would say is decreasing overall impact order, and I’ll comment on each category:

– – Authored book: still the best way to have an impact, and the data here probably understates the case, since it includes two quasi-turkeys from the 1980s when I had 100% no clue what I was doing (now I just have 50% no clue).  The two published more recently have an average 185 cites each and rising.  Comparing that to the median journal figure, you might ask yourself: is the effort of writing a book more or less than that of writing six refereed articles in decent journals?

– – Pukka journal article: still the most obvious way to build a citation profile.

– – Edited book: editing a book is easier than writing one.  Editing an impactful book is probably not.  I’d suggest you need to write or co-write a number of start and end chapters yourself; and impose such a degree of consistency of argument, style and structure that you are often almost rewriting other contributions from scratch.  Plus see comment on book chapters below.

– – Working paper: can be highly effective but a lot probably depends on the profile and dissemination of the series in which you publish.  You get zero academic brownie points, but you can maybe modify into a journal article.

– – Not-so-pukka journal article: can be worth doing, particularly as it’s likely easier to get accepted than in a top journal.  But note that publishing in the specialist journals for your field may be much less effective than aiming for higher-quality journals in the “parent” disciplines.  And – taking those averages at face value – ask yourself: will my purposes be better served by spending one month on a not-so-pukka journal article, or spending six months getting my paper into a decent journal?  The ratio of time invested to average citations is the same.  The impact, at least on an academic career, is far higher for the latter.  (See below for an ICT4D specialist journal list.)

– – Reports, handbooks, and professional magazine articles: you may often be reaching out to a practitioner audience, so citation rates may be a poorer reflection of impact.

– – Book chapters: a high-quality edited book project in which the editor invests a lot of time might be worthwhile.  Otherwise, leave this to those who are just starting to build a publication record.  As for edited books from publishers who churn out an endless series of such items: caveat scriptor.  In citation terms, your chapter is likely to disappear from view faster than a golfing superstar after a car accident (this joke will self-erase when Tiger wins the Masters).

– – Conference and seminar presentations: I already discussed this in the previous blog entry on ICT4D research quality and impact.  If your presentation will lead on to a journal publication, it’s a slightly different matter.  Otherwise, attend conferences for lots of reasons.  But not in order to get cited.

ICT4D Specialist Journals:

If you’re looking to get published in a specialist ICT4D journal, then the following represents the current English-language list I’m aware of (please comment to add others):

  • African Journal of Information and Communication (AJIC)
  • African Journal of Information & Communication Technology (AJICT)
  • African Journal of Information Systems (AJIS)
  • Asian Journal of Communication (AJC)
  • Asian Journal of Information Management (AJIM)
  • Asian Journal of Information Technology (AJIT)
  • Electronic Journal of Information Systems in Developing Countries (EJISDC)
  • Information Development (ID)
  • Information Technologies and International Development (ITID)
  • Information Technology for Development (ITforD)
  • International Journal of Education and Development Using Information and Communication Technology (IJEDUICT)
  • International Journal of ICT Research and Development in Africa (IJICTRDA)
  • International Journal of Information Communication Technologies and Human Development (IJICTHD)
  • International Journal on Advances in ICT for Emerging Regions (IJAICTER)
  • Journal of Health Informatics in Developing Countries (JHIDC)
  • South African Journal of Information Management (SAJIM)

What shall I say about this curate’s egg of a list?

Perhaps just that I have only had the pleasure of publishing in EJISDC, ITforD, and ITID.  That I have only had the pleasure of reading articles from these three plus AJIC, AJIS, AJC and ID.  That ID is the only one that is WoK-listed.  And that AJICT, AJIT and IJAICTER look fairly techie, though with the occasional softer article.  Most are open access except for AJC, AJIT, ID, ITforD, IJICTRDA and IJICTHD.

If you want to cross-check against a journal ranking list, see: http://www.developmentinformatics.org/resources/journalranking.html – I could only spot AJC ranked A, and AJICT, ID and ITID ranked C.

You can find much longer unranked lists which include journals in parent informatics and development disciplines plus adjacent areas such as telecoms maintained by:

The IPID network provides a regular and very useful circulation of journal calls and contents.  If you’re not linked in to IPID, you should be.  To get on the circulation list, email: gudrun.wicander {at} kau.se

ICT4D Research: Quality and Impact

23 February 2010

In a comment on an earlier post, Dmitry Epstein asked about the quality and impact of ICT4D research.  I’ve already blogged about what constitutes good quality ICT4D research, so here I will just add a few data snippets on these topics.

Quality of ICT4D Research

The good news is that there is good quality ICT4D research around:

– It makes its way into the top journals: there’s the 2007 special issue of MIS Quarterly (top info. systems journal) on IS in developing countries; and I count at least four articles on ICTs and developing countries in World Development (top dev. studies journal) during 2009.

– Inclusion in ISI’s Web of Knowledge is a rather rough quality benchmark but it’s a benchmark nonetheless and, as noted in a previous post, there are a few hundred papers per year within the boundaries of development informatics recorded on WoK.

– On the assumption that citation rates are linked to quality, some individual ICT4D items score well.  The article “Information Systems in Developing Countries: Failure, Success and Local Improvisations” gets 44 citations on WoK; 217 on Google Scholar.  (Modesty, be damned: if you don’t believe in the quality of your own work, you shouldn’t be writing.)

But then there’s the bad news about ICT4D research quality:

– Of the eleven specialist journals in the field, only one, Information Development, is seen as worthy of inclusion in the WoK.

– My subjective but honest opinion based on reading all 250 papers submitted to the ICTD2009 conference: the general quality of the big, fat, long tail of work in our field is pretty poor.

And lastly, there’s the average news, as conveyed next, that citation evidence (if a proxy for quality) suggests ICT4D research is no better or worse than other sub-fields of research.

Impact of ICT4D Research

Citation evidence is only one part of the impact story but . . . I’ve already noted that some individual ICT4D papers get highly-cited.  And I should also note a bit of background on citation: work in early 2009 looked at citation rates for papers published during 1998-2008.

In computer sciences, the average number of citations per paper was 3.06, ranging from 7.06 for papers published in 1998 to 0.10 for papers published in 2008 (papers take time to pick up citations).  Equivalent figures are: economics and business – 5.02 average (from 10.19 to 0.13 for 1998 – 2008); and social sciences – 4.06 average (from 7.48 to 0.16).

Note these are narrow citation rates for WoK-type databases; Google Scholar citation rates are much higher because they record a much wider range of published material: a 1:4 ratio of WoK:Google Scholar citation numbers looks about standard.  Note also that even for papers in peer-reviewed articles, somewhere between 25% and 75% of articles – varying by discipline – are never cited in other peer-reviewed articles.

Some data bits:

– Comparing the total number of WoK citations for the four articles in MISQ’s special issue on IS in developing countries (vol.31, issue 2), with those for the first four articles in the next issue, shows no difference: both sets get 14 citations.

– Taking those (15) journal articles published in 2006 that come up on WoK using a search of ‘ict*’ and ‘developing countr*’ (and excluding out-of-topic items), they are cited 34 times: an average of 2.27 times.  Comparing figures with 2005 for the work reported above (which came out about one year ago, hence the need to move back one year for a comparison), that 2.27 figure is somewhat better than the average for computer science (1.85) but a bit worse than that for social sciences (2.99), and economics and business (3.08).  [For comparison, substituting ‘agricultur*’ for ‘ict*’ and taking those listed as being in the public administration subject area; the 15 on-topic articles were cited 25 times: 1.67 average.]

– Taking the same type of journal articles over the period 2004-2007, there are 46 in all of which 22 are uncited: 48%; average citation rate over all items is 1.67 cites per article.  [4 of 15 agriculture articles in 2006 were uncited compared to 3 of 15 ict4d.]

– Performance in open access journals outside WoK is lower.  Both EJISDC and ITID journals had four issues in 2007.  EJISDC has so far scored 11 citations in WoK from 26 papers (0.42 citations per paper average); 18 (69%) papers uncited.  ITID has so far scored 23 citations from 19 papers (1.21 average); 8 (42%) papers uncited.  [For comparison, I took four issues of EJEG from late 2006-2007: 20 citations from 32 papers (0.63 citations per paper average); 24 (75%) of papers uncited.]

– And finally some Alexa comparisons of traffic and links for the two main online ICT4D journals and those in other information systems sub-areas (note ITID has only very recently gone online):

| Journal | Alexa Traffic Rank | Sites Linking In | Online Since |
|---|---|---|---|
| Electronic Journal of Information Systems in Developing Countries | 1,422,355 | 120 | 2001 |
| Information Technologies and International Development | 4,527,166 | 35 | 2009 |
| Electronic Journal of e-Government | 1,785,669 | 72 | 2002 |
| Electronic Journal of e-Learning | 2,092,075 | 126 | 2002 |
| Electronic Journal of Knowledge Management | 2,286,374 | 96 | 2002 |
| Journal of Community Informatics | 4,535,331 | 129 | 2004 |

My conclusion from all this: in terms of impact, ICT4D looks like a pretty standard research sub-field.  We’re not punching above our weight but neither are we left out in the cold.

Overall Conclusion

The sample sizes here are small, so conclusions are indicative rather than definitive:

– Where ICT4D research is good enough to get into peer-reviewed WoK-covered journals it is of similar quality and with similar impact to other sub-fields.  Where it is good enough to get into peer-reviewed open access journals, the same is true.  And where it’s not good enough for that, it’s probably pretty bad but, again, the same is no doubt true of other sub-fields.

– If you want your work to have maximum impact then, on average, your best bet is to publish in a traditional journal.  But open access, non-WoK-listed journals should not be regarded as “no go” areas: they are read and used.

– The average (modal) ICT4D article is never formally cited, but that’s true of many research fields.

– If you want to feel better about the citation impact of your work, stick to Google Scholar.

– Finally, what about conferences?  Of about 120 papers on ICT4D published during 2003-2008, covered by WoK and listed as having come from conferences, six are cited after being published in journals and three are cited from their proceedings.  The remaining 90%+, whether in journals or proceedings, are uncited.  Conclusion: conferences may be good for networking, learning, and scenery; they are not good if you want your work to have a citation impact.
