In two previous blog entries, I noted, from both general ICT4D citation data in ISI's Web of Knowledge (WoK) and from specific citation data on my own ICT4D publications, that conference papers receive by far the fewest citations per paper of any publication type (journal article, book chapter, online working paper, etc.), and that some 90% are uncited.
The conclusion was that conference papers should be the lowest priority as a form of publication for those working in the ICTs-for-development field if citation impact was key (recognising there are many other good reasons for presenting at a conference).
BUT . . . there were two limitations to this earlier data: the first set is restricted to the WoK, whereas Google Scholar is arguably a better reflection of ICT4D impact; the second set reflects only my own publications in social science conferences.
So the question arises: what about publishing in more technical or in multi-disciplinary conferences? This issue applies particularly to those working at the more technical end of the ICT4D field, because norms in science/technology fields are very different from those in social science.
Overall, 21% of science publication is in conference proceedings, against only 8% in social science [1]. And the importance of conferences is particularly acute in computer science. One recent estimate [2] finds that “in Computer Science, proceedings volumes rather than scientific journals constitute the main channel of written communication”. Comparing conference papers and journal papers in CS, they have a roughly equal average number of citations per paper, and in the average conference paper two-thirds of citations are to other conference papers and only one-third to journal articles. (Non-citation rates are also similar: about half of all CS conference papers and journal articles are uncited.)
So, armed with this and focusing on Google Scholar (GS) rather than WoK, I will examine the citation impact of a variety of conferences, comparing them both with each other and (below) with ICT4D journal publication. (The table is in reverse chronological order.)
| Conference | Type | Average GS Citations Per Paper | Impact Score | Citation Score |
|---|---|---|---|---|
| IFIP WG9.4 2009 | ICT4D Soc. Sci. | 0.00 | 0.00 | 0.00 |
| ICTD2009 | ICT4D Multi | 0.81 | 0.65 | 0.81 |
| ICTD2007 | ICT4D Multi | 6.27 | 3.56 | 2.73 |
| IFIP WG9.4 2007 | ICT4D Soc. Sci. | 1.26 | 0.21 | 0.43 |
| CHI2007 | Comp. Sci. | 20.39 | 7.03 | 7.03 |
| ICIS2006 | Info. Systems | 1.73 | 0.39 | 0.52 |
| ICTD2006 | ICT4D Multi | 13.40 | 3.43 | 3.43 |
| EADI2005 | Devel. Studies | 0.06 | 0.00 | 0.01 |
| DSA2005 (ICT4D papers only; no link to conference) | Devel. Studies | 1.00 | 0.06 | 0.22 |
| IFIP WG9.4 2005 | ICT4D Soc. Sci. | 1.07 | 0.07 | 0.22 |
What does this data show (other than “not much”, given the very small sample size!)?
First, that the average paper in a social science conference (whether in ICT4D or development studies or information systems) is hardly cited. This supports the data from analysis of my own ICT4D publications, suggesting very low citation impact from publishing in social science conferences.
Second, as noted above, that average citation rates in technical, computer science conferences do seem to be much, much higher.
Third, that average citation rates in multi-disciplinary conferences, such as the ICTD conferences that span both the technical and the social, are somewhere in between.
Conclusions About Conferences
Before drawing some conclusions, just a reminder of what you can NOT conclude from this data. You cannot conclude that, if you present your paper at a particular conference type, you will achieve the average citation rate. The determinants of how many citations your specific paper gets are multi-factorial, including research quality, topic, timing, author identity and networks, etc. (See the earlier blog entry on what constitutes good ICT4D research.)
BUT . . . one of those factors will be conference type.
So what you can use the data to help answer is this question: if I already have my paper, then, citation-wise, which is the best conference outlet I can choose?
The answer appears to be: the more technical, the better.
There is even a tiny data nugget on that. Let’s compare the social science ICT4D papers that were submitted to ICTD2007 and IFIP WG9.4 2007. Prima facie, there is no clear reason for suspecting any major difference in quality – both conferences use refereeing and review processes, and there are some similarities in topics too.
The social science (IFIP) conference had an average of 1.26 citations per paper (1.76 counting only those available online). The social science papers at the multi-disciplinary (ICTD) conference had an average of 2.86 citations per paper. That suggests at least the possibility of a “citation uplift” effect from presenting a social science ICT4D paper at a conference with some technical papers/culture. (Melissa Ho notes it would be interesting to do a citation map to see if this occurs due to citation across disciplines.)
Conferences vs. Journals
The table below compares the two leading ICT4D conferences with the three leading ICT4D journals (data mainly from the earlier blog entry on ICT4D journal ranking). The results suggest that – all other things being equal – publication of a paper in certain ICT4D conferences can be on average more impactful than publication in the leading specialist journals. But you need to pick your conferences.
And, to repeat, there may be many factors beyond citation to consider in choosing conferences, and in choosing conference vs. journal, including audience, ability to network, location, quality thresholds, etc. And the specific impact on your individual paper is uncertain.
| Outlet | Type | Average GS Citations Per Paper | Impact Score | Citation Score |
|---|---|---|---|---|
| ICTD2007 | Conf. Multi | 6.27 | 3.56 | 2.73 |
| Information Technology for Development 2008 | Journal | 2.85 | 1.35 | 1.58 |
| Information Technologies and International Development 2008 | Journal | 2.79 | 2.08 | 1.55 |
| Electronic Journal of Information Systems in Developing Countries 2008 | Journal | 1.45 | 1.00 | 0.81 |
| IFIP WG9.4 2007 | Conf. Soc. Sci. | 1.26 | 0.21 | 0.43 |
| ICTD2006 + ITID | Conf. Multi + Journal | 24.40 | 7.88 | 7.88 |
What about that final row? That looks at papers (excluded from the earlier ICTD2006 calculations) that were presented at the conference and then subsequently published in a journal. These were identified as the “best” papers at the conference, which will affect their citation level and limit the conclusions one can draw. But a likely, and fairly obvious, point is that combining conference and journal publication in this way increases citations.
The Small Print
As a reminder from the earlier blog entry on ICT4D journal ranking:
- Impact score = (average cites per paper * (1 - ((uncited papers - unlisted papers) / 2) - unlisted papers) / average no. of years since publication) * conference paper accessibility
- Citation score = average cites per paper / average no. of years since publication
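For anyone who wants to replicate the calculations, here is a minimal Python sketch of the two scores. It assumes (my reading of the formula, rather than something stated explicitly above) that “uncited” and “unlisted” are proportions of all papers, and that accessibility is a 0-1 weight; the numbers in the example calls are made up for illustration.

```python
# Minimal sketch of the two scores defined above.
# Assumption (mine, not stated in the post): "uncited" and "unlisted"
# are proportions (0-1) of all papers; "accessibility" is a 0-1 weight.

def citation_score(avg_cites_per_paper: float, avg_years_since_pub: float) -> float:
    """Average cites per paper per year since publication."""
    return avg_cites_per_paper / avg_years_since_pub

def impact_score(avg_cites_per_paper: float, uncited: float, unlisted: float,
                 avg_years_since_pub: float, accessibility: float = 1.0) -> float:
    """Impact score: cites discounted for uncited/unlisted papers,
    per year since publication, weighted by accessibility."""
    discount = 1 - ((uncited - unlisted) / 2) - unlisted
    return (avg_cites_per_paper * discount / avg_years_since_pub) * accessibility

# Illustrative numbers only (not taken from the tables above):
print(citation_score(6.27, 2.0))  # 3.135
print(impact_score(6.27, uncited=0.3, unlisted=0.1, avg_years_since_pub=2.0))
```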
The raw data is here if you wish to footle around with your own calculations.
More authors means more citations [3], but the variation due to author numbers is nowhere near large enough to explain the variation in citation averages seen, especially as the correlation coefficient between citations and author numbers is perhaps around one-third. (For 2007 conferences: CHI = 3.1 authors per paper on average; ICTD = 2.8 authors; IFIP = 2.3 authors.)
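If you want to check such a correlation on your own data, a minimal sketch (with made-up author/citation pairs, not the real conference data) would be:

```python
# Quick check of the citations-vs-authors relationship for any dataset of
# per-paper (author count, citations) pairs. Requires Python 3.10+.
# The numbers below are invented for illustration, not the conference data.
from statistics import correlation

authors   = [1, 2, 2, 3, 3, 4, 5]
citations = [0, 1, 4, 2, 9, 6, 12]

print(correlation(authors, citations))  # Pearson's r
```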
Supporting the idea of different research and citation cultures: for four of the papers presented at ICTD2006 and then published in ITID journal – two technical, two social science – one could tentatively identify how many citations came from the conference paper, and how many from the journal article. For the two technical papers, 87% of the citations were from the conference paper version. For the two social science papers, 89% of the citations were from the journal article version.
There are some general caveats. Conference papers appear in various guises, more so than journal articles: as working papers, in institutional repositories, and as journal articles. I have done all I can to eliminate this, only selecting those papers that were listed in Google Scholar in their conference guise, and ignoring them if there was any uncertainty. But, nonetheless, I still regard the conference paper citation data as less robust than that for journal papers.
Thanks to Kentaro Toyama and Melissa Ho for sparking this blog entry.
[1] Bourke, P. & Butler, L. (1996) Publication types, citation rates and evaluation, Scientometrics, 37(3), 473-494.
[2] Moed, H.F. & Visser, M.S. (2007) Developing Bibliometric Indicators of Research Performance in Computer Science, Centre for Science and Technology Studies, Leiden University. http://www.cwts.nl/pdf/NWO_Inf_Final_Report_V_210207.pdf
[3] Sooryamoorthy, R. (2009) Do types of collaboration change citation?, Scientometrics, 81(1), 177-193.
I recently became aware of a Google Scholar “bug”: it counts being listed in bibliographies as citations.
That is: search, for instance, for “Bertot: Public libraries and the Internet 2008-2009: Issues, implications, and challenges”.
The first result (at least the one I get) appears to be cited by four people:
http://scholar.google.com/scholar?hl=en&lr&cites=14223908505898313183
In fact, all four come from my own bibliography manager. Yes, Bertot et al. actually appear there, but I’ve never used their work in any of my writings: I’m just keeping it listed.
Given that there are plenty of such lists on the Internet, I wonder how much this biases Google Scholar as a measure of impact/citations…
i.
PS: yes, of course, I found that by looking at my server logs 😉
PS2: thanks for this series of posts!
I have been sampling some of the GS citations and had not come across that as an issue, so (a) I suspect the impact is relatively small; and (b) the findings in this and other posts relate to relative rather than absolute citation impact, so assuming any “bibliography effect” has no specific bias towards particular publication types, it would not affect the conclusions.
But maybe you should start charging authors for inclusion in your bibliographies because of this citation uplift – let us know where we should send the money!
One subsequent thought – I’ve only looked at GS “Cited by …” numbers. But one can also find in GS additional citations that are not incorporated into that number. Again, there is likely to be no systematic effect favouring particular publication types; and this may compensate for the bibliography effect.
Richard — great series of posts! This kind of data is very useful in understanding which publishing outlets are the most prestigious for ICT4D research, and therefore, which ones researchers should aim for.
The Jester has a quibble, however, with the underlying tone of these posts. They focus solely on the question of where authors should submit a paper (as if already written), with the strong implication that publication outlet is the primary factor that affects a paper’s citation scores. This confuses correlation with causation.
Undoubtedly, there is some effect of a publication outlet on a paper’s degree of citation. Stronger journals are, hopefully, read more. But, much more significant than publication outlet are factors such as quality of the paper, timeliness of the idea, degree to which the work is known through channels other than publication, etc. These things make much more of a difference on citations than where the paper ultimately is published (although, people who do these things well will also tend to submit to the higher-quality outlets, and so, again, it’s not one cause).
To see this, all it takes is a regression on paper citation scores using two independent variables, one for publication outlet and one for some hypothetical “paper quality” variable, to see which explains more of the variation. The Jester doesn’t have all the data that you do, but, just for example, it’s clear that the range of citations within a single ICTD conference (0 to 44 for the 2006 papers) is dramatically greater than any “uplift” between any two of the publications you mention (between 0 and 8). That is to say, something other than where the paper is published, likely having to do with the paper’s quality, has a much greater impact on the number of citations. Luckily, this is exactly what we’d hope to find: paper quality is more significant than publication outlet in determining a paper’s influence.
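A minimal sketch of such a regression, using entirely made-up data and a hypothetical 0-1 “quality” rating (the data is generated so that quality dominates, purely to illustrate the comparison, not as evidence):

```python
# Illustrative only: made-up citations, a hypothetical 0/1 "outlet" dummy
# and a hypothetical 0-1 "quality" rating, generated so that quality
# dominates. Compares how much variance each predictor explains alone.
import numpy as np

rng = np.random.default_rng(0)
n = 100
outlet = rng.integers(0, 2, n)         # e.g. conference A vs conference B
quality = rng.random(n)                # hypothetical paper-quality rating
citations = 2 * outlet + 20 * quality + rng.normal(0, 2, n)

def r_squared(x, y):
    """R^2 of an ordinary least-squares fit of y on x (with intercept)."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print("outlet alone: ", r_squared(outlet, citations))   # small R^2
print("quality alone:", r_squared(quality, citations))  # much larger R^2
```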
Another point that these posts don’t bring out is that a paper has varying chances of being accepted by different outlets. On the whole, the outlets with higher citation scores are probably pickier, and that’s much of why their citation scores are higher — they are selecting the papers more likely to be cited.
None of this changes recommendations for where your readers should *try* to publish. They should strive to publish in the outlets with the highest visibility, regardless. But this isn’t so much because by doing so their citation indices will go up (that is decided more by the quality of the paper itself, as well as “evangelism” of the work by the researcher); it’s more because when their papers are accepted by outlets with high visibility, it’s additional confirmation that they are doing good work. (On the down side, their papers are more likely to be rejected by the higher-visibility publications.) And this is exactly why academic tenure decisions in many university departments are dependent on where a researcher has published papers.
The Jester