In two previous blog entries, I noted, from both general ICT4D citation data in ISI’s Web of Knowledge (WoK) and from specific citation data on my own ICT4D publications, that conference papers receive by far the fewest citations per paper of any publication type (journal article, book chapter, online working paper, etc.), and that some 90% are uncited.
The conclusion was that conference papers should be the lowest-priority form of publication for those working in the ICTs-for-development field if citation impact is key (recognising there are many other good reasons for presenting at a conference).
BUT . . . there were two limitations to this earlier data: the first set is restricted to the WoK, whereas Google Scholar is arguably a better reflection of ICT4D impact; the second set reflects my own publications in social science conferences only.
So the question arises: what about publishing in more technical or in multi-disciplinary conferences? This issue applies particularly to those working at the more technical end of the ICT4D field, because norms in science/technology fields are very different from those in social science.
Overall, 21% of science publication is in conference proceedings; only 8% in social science [1]. And the importance of conferences is particularly acute in computer science. One recent estimate [2] finds that “in Computer Science, proceedings volumes rather than scientific journals constitute the main channel of written communication”. Comparing conference papers and journal papers in CS, they have a roughly equal average number of citations per paper, and in the average conference paper two-thirds of citations are to other conference papers, only one-third to journal articles. (Non-citation rates are also similar – about half of all CS conference papers and journal articles are uncited.)
So armed with this, and focusing on Google Scholar (GS) rather than WoK, I will examine the citation impact of a variety of conferences, and compare them both with each other, and (below) with ICT4D journal publication. (Table is in reverse chronological order.)
| Conference | Type | Average GS Citations Per Paper | Impact Score | Citation Score |
| --- | --- | --- | --- | --- |
| IFIP WG9.4 2009 | ICT4D Soc. Sci. | 0.00 | 0.00 | 0.00 |
| IFIP WG9.4 2007 | ICT4D Soc. Sci. | 1.26 | 0.21 | 0.43 |
| DSA2005 (ICT4D papers only; no link to conference) | Devel. Studies | 1.00 | 0.06 | 0.22 |
| IFIP WG9.4 2005 | ICT4D Soc. Sci. | 1.07 | 0.07 | 0.22 |
What does this data show? (other than “not much” due to the very small sample size!)
First, that the average paper in a social science conference (whether in ICT4D or development studies or information systems) is hardly cited. This supports the data from analysis of my own ICT4D publications, suggesting very low citation impact from publishing in social science conferences.
Second, as noted above, that average citation rates in technical, computer science conferences do seem to be much, much higher.
Third, that average citation rates in multi-disciplinary conferences such as the ICTD conferences that span both the technical and the social, are somewhere in between.
Conclusions About Conferences
Before drawing some conclusions, just a reminder of what you can NOT conclude from this data. You cannot conclude that, if you present your paper at a particular conference type, you will achieve the average citation rate. The determinants of how many citations your specific paper gets are multi-factorial, including research quality, topic, timing, author identity and networks, and more. (See earlier blog entry on what constitutes good ICT4D research.)
BUT . . . one of those factors will be conference type.
So what you can use the data to help answer is this question: given that I already have my paper, which, citation-wise, is the best conference outlet I can choose?
The answer appears to be: the more technical, the better.
There is even a tiny data nugget on that. Let’s compare the social science ICT4D papers that were submitted to ICTD2007 and IFIP WG9.4 2007. Prima facie, there is no clear reason for suspecting any major difference in quality – both conferences use refereeing and review processes, and there are some similarities in topics too.
The social science (IFIP) conference had an average 1.26 citations per paper (1.76 counting only those available online). The social science papers at the multi-disciplinary (ICTD) conference had an average 2.86 citations per paper. That suggests at least the possibility of a “citation uplift” effect from presenting a social science ICT4D paper at a conference with some technical papers/culture. (Melissa Ho notes it would be interesting to do a citation map to see if this occurs due to citation across disciplines.)
Conferences vs. Journals
The table below compares the two leading ICT4D conferences with the three leading ICT4D journals (data mainly from earlier blog entry on ICT4D journal ranking). The results suggest that – all other things being equal – publication of a paper in certain ICT4D conferences can be on average more impactful than publication in the leading specialist journals. But you need to pick your conferences.
And, to repeat, there may be many factors beyond citation to consider in choosing conferences, and in choosing conference vs. journal, including audience, ability to network, location, quality thresholds, etc. And the specific impact on your individual paper is uncertain.
| Outlet | Type | Average GS Citations Per Paper | Impact Score | Citation Score |
| --- | --- | --- | --- | --- |
| Information Technology for Development 2008 | Journal | 2.85 | 1.35 | 1.58 |
| Information Technologies and International Development 2008 | Journal | 2.79 | 2.08 | 1.55 |
| Electronic Journal of Information Systems in Developing Countries 2008 | Journal | 1.45 | 1.00 | 0.81 |
| IFIP WG9.4 2007 | Conf. Soc. Sci. | 1.26 | 0.21 | 0.43 |
| ICTD2006 + ITID | Conf. Multi + Journal | 24.40 | 7.88 | 7.88 |
What about that final row? That looks at those papers (excluded from the earlier ICTD2006 calculations) that were presented at the conference and then subsequently published in a journal. These were identified as the “best” papers at the conference, which will inflate their citation level and limit the conclusions one can draw. But one likely and fairly obvious point is that combining conference and journal publication in this way increases citations.
The Small Print
As a reminder from the earlier blog entry on ICT4D journal ranking:
- Impact score = (average cites per paper × (1 − ((uncited − unlisted papers)/2) − unlisted papers) / average number of years since publication) × conference paper accessibility
- Citation score = average cites per paper / average number of years since publication
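For concreteness, here is a minimal Python sketch of my reading of those two formulas. It assumes the “uncited” and “unlisted” terms are proportions of the papers in a volume, and that accessibility is a simple multiplier (1.0 by default); the function names are mine, not from the blog entry:

```python
def citation_score(avg_cites_per_paper: float, years_since_pub: float) -> float:
    """Citation score: average Google Scholar cites per paper, per year."""
    return avg_cites_per_paper / years_since_pub

def impact_score(avg_cites_per_paper: float,
                 prop_uncited: float,
                 prop_unlisted: float,
                 years_since_pub: float,
                 accessibility: float = 1.0) -> float:
    """Impact score: discounts the citation average for uncited and unlisted
    papers, normalises by years since publication, then weights by outlet
    accessibility."""
    discount = 1 - ((prop_uncited - prop_unlisted) / 2) - prop_unlisted
    return (avg_cites_per_paper * discount / years_since_pub) * accessibility

# Illustrative numbers only (not taken from the tables above):
# 1.2 cites/paper, 50% uncited, 10% unlisted, 3 years old, full accessibility
print(round(citation_score(1.2, 3), 2))          # 0.4
print(round(impact_score(1.2, 0.5, 0.1, 3), 2))  # 0.28
```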
The raw data is here if you wish to footle around with your own calculations.
More authors means more citations [3], but the variation due to author numbers is nowhere near large enough to explain the variation in citation averages seen, especially as the correlation coefficient between citations and authors is perhaps only around one-third. (For 2007 conferences: CHI = 3.1 authors per paper on average; ICTD = 2.8; IFIP = 2.3.)
Supporting the idea of different research and citation cultures: for four of the papers presented at ICTD2006 and then published in ITID journal – two technical, two social science – one could tentatively identify how many citations came from the conference paper, and how many from the journal article. For the two technical papers, 87% of the citations were from the conference paper version. For the two social science papers, 89% of the citations were from the journal article version.
There are some general caveats. Conference papers appear in various guises more often than journal articles do: as working papers, in institutional repositories, as journal articles. I’ve done all I can to eliminate this, only selecting those conference papers that were listed in Google Scholar in their conference guise, and ignoring them if there was any uncertainty. But, nonetheless, I still regard the conference paper citation data as less robust than that for journal papers.
Thanks to Kentaro Toyama and Melissa Ho for sparking this blog entry.
[1] Bourke, P. & Butler, L. (1996) Publication types, citation rates and evaluation, Scientometrics, 37(3), 473-494.
[2] Moed, H.F. & Visser, M.S. (2007) Developing Bibliometric Indicators of Research Performance in Computer Science, Centre for Science and Technology Studies, Leiden University. http://www.cwts.nl/pdf/NWO_Inf_Final_Report_V_210207.pdf
[3] Sooryamoorthy, R. (2009) Do types of collaboration change citation?, Scientometrics, 81(1), 177-193.
In earlier blog postings on the best type of publication outlet for ICT4D research and on ICT4D research quality and impact, I surmised that there is value in publishing ICT4D research in specialist ICT4D journals. But I skirted round the issue of which ICT4D journal to publish in. Here, then, is an ICT4D journal “league table”:
ICT4D Journal Impact Table
The ICT4D journals (top part of the table) were selected on the basis of all journals with titles that combine some reference to informatics/ICTs, and some reference to development (as in “international development”) or developing countries or regions dominated by developing countries.
How are the impact scores calculated? Using the formula: (average cites per paper × (1 − ((uncited − unlisted papers)/2) − unlisted papers) / average number of years since publication) × journal accessibility
So this is a league table of impact, based mainly on the average number of citations per paper, and calculated by taking every paper published in a journal in the given year, seeing how many citations (if any) are shown for that paper in Google Scholar, then averaging across papers. The citation figure is amended to take account of the length of time since publication, and of both the proportion of papers that have no citations and the proportion of papers that aren’t even listed on Google Scholar. Because not all impact is accounted for by citation, open access journals – which will attract many more readers – are given an additional weighting of 1.5. These calculations were undertaken for both 2005 and 2008 for the journals that go back that far, and an average of the two scores taken.
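That procedure can be sketched starting from per-paper citation counts. The numbers and function below are hypothetical illustrations of my reading of the method (the real raw data is linked below), with open access treated as a 1.5 multiplier:

```python
def journal_impact(cites_per_paper, unlisted_count, years, open_access=False):
    """Impact score for one journal-year, from a list of Google Scholar
    citation counts for the listed papers plus a count of papers GS does
    not list at all."""
    n = len(cites_per_paper) + unlisted_count          # all papers that year
    avg = sum(cites_per_paper) / len(cites_per_paper)  # avg cites, listed papers
    uncited = sum(1 for c in cites_per_paper if c == 0) / n
    unlisted = unlisted_count / n
    weight = 1.5 if open_access else 1.0               # open-access uplift
    return (avg * (1 - (uncited - unlisted) / 2 - unlisted) / years) * weight

# Hypothetical 2008 volume: ten listed papers, one unlisted, two years old
cites = [0, 0, 1, 1, 2, 3, 3, 4, 5, 10]
print(round(journal_impact(cites, 1, 2, open_access=True), 2))  # 1.88
```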
Clearly, there are subjective elements in this and you are welcome to provide a critique. Here, you can find the raw data, allowing you to create your own ranking formula. One alternative would be a simple table based just on average citation rates per year, as shown below:
ICT4D Journal Citation Table
The general conclusion is the same for both league tables: that the three journals EJISDC, ITID and ITforD are head and shoulders above the rest if you wish to publish in an ICT4D specialist journal and if you are interested in citation-related impact when publishing your ICT4D research.
But would you be better advised to publish in a journal in one of ICT4D’s “parent” disciplines? To allow a comparison, both tables include the same calculations for:
- Development studies: the top journal (World Development) and a lower-ranked journal (Journal of International Development)
- Information systems: a top journal (Information Systems Journal) and a mid-ranked journal (The Information Society)
- Technical informatics/computer science: a top journal (Human-Computer Interaction)
This suggests that these disciplinary journals have on average two to four times the impact of the ICT4D specialist journals. That average suggests – but does not prove – that your specific article will have a greater impact in these journals. In addition, academic career kudos is greater for at least the higher-end disciplinary journals. Only one ICT4D specialist journal – Information Development – is currently listed in ISI’s Web of Knowledge. One journal (AJC) gets an “A” ranking and six journals (AJICT, EJISDC, ID, ITID, IJEDUICT, SAJIM) get a “C” ranking in an Australian list (see: http://lamp.infosys.deakin.edu.au/journals/index.php?page=alljournals – a fantastic resource on ICT and information systems journals maintained by John Lamp). But that is the exception: none of the ICT4D specialist journals is ranked in the UK Association of Business Schools journal ranking list (http://www.the-abs.org.uk/), the Association for Information Systems journal ranking list (http://home.aisnet.org/displaycommon.cfm?an=1&subarticlenbr=345), or Anne-Wil Harzing’s journal quality list (http://www.harzing.com/jql.htm).
On the other hand . . .
- you will only get accepted in the higher-end mainstream journals if your paper is above a certain quality threshold (also true for ICT4D specialist journals, but with the assumption that the threshold is somewhat lower for those journals or, at least, that their rejection rate is lower; of course the quality threshold also varies between ICT4D journals just as it does between other journals);
- it may take more time and effort to get your paper accepted;
- it might reach a somewhat different audience; and
- none of these disciplinary journals is open access (figures from open-access EJISDC suggest typical rates of around 500 downloads per paper per year; figures from subscription-based ITforD suggest typical rates of around 100 accesses per paper per year).
Particularly if you add in the open access argument, and link it to the fact that ICT4D audiences (e.g. practitioners, strategists in developing countries) generally can’t access the subscription-based disciplinary journals, then there remains a logic for publishing ICT4D research in ICT4D journals.