Archive

Posts Tagged ‘Data-for-Development’

Big Data and Healthcare in the Global South

The global healthcare landscape is changing. Healthcare services are becoming ever more digitised with the adoption of new technologies and electronic health records. This development typically generates enormous amounts of data which, if utilised effectively, have the potential to improve healthcare services and reduce costs.

The potential of big data in healthcare

Decision making in medicine relies heavily on data from different sources, such as research and clinical data, rather than solely on individuals’ training and professional knowledge. Historically, healthcare organisations have often had to base decisions on an incomplete grasp of the reality on the ground, which could lead to poor health outcomes. This issue has recently become more manageable with the advent of big data technologies.

Big data comprises unstructured and structured data from clinical, financial and operational systems, plus data from public health records and social media that go beyond the health organisation’s walls. Big data, therefore, can support more insightful analysis and enable evidence-based medicine by making data transparent and usable at much broader varieties, much larger volumes and higher velocities than were ever available to healthcare organisations [1].

Using big data, healthcare providers can, for example, manage population health by identifying patients at high risk during disease outbreaks and then taking preventive action. In one case, Google used data from user search histories to track the spread of influenza around the world in near real time (see figure below).

Google Flu Trends correlated with influenza outbreak [2]
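As a rough illustration of the kind of analysis behind the figure, the sketch below computes the correlation between a weekly search-volume series and reported influenza cases. The numbers are invented purely for illustration; they are not data from the study.

```python
import numpy as np

# Hypothetical weekly values, invented purely for illustration.
search_volume  = np.array([120, 180, 260, 410, 530, 470, 300, 190])  # flu-related queries
reported_cases = np.array([ 80, 130, 210, 350, 480, 430, 260, 150])  # confirmed influenza

# Pearson correlation between the two weekly series.
r = np.corrcoef(search_volume, reported_cases)[0, 1]
print(f"Pearson r = {r:.2f}")  # a value near 1 means the search signal tracks the outbreak
```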

Big data can also be used to identify procedures and treatments that are costly or deliver insignificant benefits. For example, one healthcare centre in the USA has been using clinical data to bring to light costly procedures and other treatments. This helped it to identify and reduce unnecessary procedures and duplicate tests. In essence, big data helped not only to maintain high standards of patient care but also to reduce the costs of healthcare [3].

Medical big data in the global south

The potential healthcare benefits of big data are exciting, and they may be greatest for developing countries. While healthcare systems worldwide face challenges in improving health outcomes and reducing costs, these issues can be especially severe in developing countries.

Insufficient resources, poor use of existing funds, poverty, and a lack of managerial and related capabilities mark the main differences between developing and developed countries, meaning that health inequality is more pronounced in the global south. Equally, mortality and birth rates are relatively high in developing countries compared to developed countries, which have better-resourced facilities [4].

Improvements in the quality and quantity of clinical data can in turn improve the quality of care. In the global south in particular, where health is more a question of access to primary healthcare than a question of individual lifestyle, big data can play a prominent role in improving the use of scarce resources.

How is medical big data utilised in the global south?

To investigate this key question, I analysed the introduction of Electronic Health Records (EHR), known as SEPAS, in Iranian hospitals. SEPAS is a large-scale project which aims to build a nationally integrated system of EHR for Iranian citizens. Over the last decade, Iran has progressed from having no EHR to 82% EHR coverage for its citizens [5].

EHR systems are among the most widespread applications of medical big data. In effect, SEPAS is built with the aim of harnessing data, extracting value from it, and making real-time, patient-centred information available to authorised users.

However, the analysis of SEPAS revealed that medical big data is not utilised to its full potential in the Iranian healthcare industry. If the big data system is to be successful, the harnessed data should inform decision-making processes and drive actionable results.

Currently, data is gathered effectively in Iranian public hospitals: the raw and unstructured data is mined and classified to create a clean dataset ready for analysis. This data is also transformed into summarised and digestible information and reports, confirming that real potential value can be extracted from the data.
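A minimal sketch of such a clean-then-summarise pipeline, using pandas; the records and field names are hypothetical, not SEPAS’s actual schema.

```python
import pandas as pd

# Hypothetical raw hospital records; fields are illustrative, not SEPAS's real schema.
raw = pd.DataFrame({
    "hospital": ["A", "A", "B", "B", "B"],
    "diagnosis": ["flu", None, "flu", "asthma", "asthma"],
    "length_of_stay": [3, 2, -1, 5, 4],  # -1 represents an entry error
})

# Cleaning: drop incomplete rows and implausible values to get an analysis-ready set.
clean = raw.dropna(subset=["diagnosis"])
clean = clean[clean["length_of_stay"] > 0]

# Summarising: a digestible per-hospital, per-diagnosis report.
report = clean.groupby(["hospital", "diagnosis"])["length_of_stay"].agg(["count", "mean"])
print(report)
```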

In spite of this, the benefits of big data are not yet realised in guiding clinical decisions and actions in Iranian healthcare. SEPAS is used in hospitals only by IT staff and health information managers, who work with the data and see the reports from the system. The reports and insights are not often passed on to clinicians, and little effort is made by management to extract lessons from some potentially important streams of big data.

Limited utilisation of medical big data in developing countries has also been reported in other studies. For example, a recent study in Saudi Arabia [6] reported a low number of e-health initiatives. This suggests that the utilisation of big data faces greater challenges in these countries.

Although this study cannot claim to have given a complete picture of the utilisation of medical big data in the global south, some light has been shed on the topic. While there is no doubt that medical big data could have a significant impact on the improvement of healthcare in the global south, there is still much work to be done. Healthcare policymakers in developing countries, and in Iran in particular, need to reinforce the importance of medical big data in hospitals and ensure that it is embedded in practice. To do this, the barriers to effective datafication should first be investigated in this context.

References

[1] Kuo, M.H., Sahama, T., Kushniruk, A.W., Borycki, E.M. and Grunwell, D.K. (2014). Health big data analytics: current perspectives, challenges and potential solutions. International Journal of Big Data Intelligence, 1(1-2), 114-126.

[2] Dugas, A.F., Hsieh, Y.H., Levin, S.R., Pines, J.M., Mareiniss, D.P., Mohareb, A., Gaydos, C.A., Perl, T.M. and Rothman, R.E. (2012). Google Flu Trends: correlation with emergency department influenza rates and crowding metrics. Clinical Infectious Diseases, 54(4), 463-469.

[3] Allouche G. (2013). Can Big Data Save Health Care? Available at: https://www.techopedia.com/2/29792/trends/big-data/can-big-data-save-health-care (Accessed: August 2018).

[4] Shah A. (2011). Healthcare around the World. Global Issues. Available at: http://www.globalissues.org/article/774/health-care-around-the-world (Accessed: August 2018).

[5] Financial Tribune (2017). E-Health File for 66m Iranians. Available at: https://financialtribune.com/articles/people/64502/e-health-files-for-66m-iranians (Accessed: August 2018).

[6] Alsulame, K., Khalifa, M. and Househ, M. (2016). E-Health Status in Saudi Arabia: A Review of Current Literature. Health Policy and Technology, 5(2), 204-210.


Measuring the Big Data Knowledge Divide Using Wikipedia

Big data is of increasing importance; yet – like all digital technologies – it is affected by a digital divide of multiple dimensions. We set out to understand one dimension: the big data ‘knowledge divide’, meaning the way in which different groups have different levels of knowledge about big data [1,2].

To do this, we analysed Wikipedia – as a global repository of knowledge – and asked: how does people’s knowledge of big data differ by language?

Our exploratory analysis of Wikipedia looked at differences across ten languages in the production and consumption of the specific Wikipedia article entitled ‘Big Data’ in each language. The figure below shows initial results:

  • The Knowledge-Awareness Indicator (KAI) measures the total number of views of the ‘Big Data’ article divided by the total number of views of all articles for each language (multiplied by 100,000 to produce an easier-to-grasp number); a code sketch of this calculation follows the figure. This relates specifically to the time period 1 February – 30 April 2018.
  • ‘Total Articles’ measures the overall number of articles on all topics that were available for each language at the end of April 2018, to give a sense of the volume of language-specific material available on Wikipedia.

‘Big Data’ article knowledge-awareness, top-ten languages*

ko=Korean; zh=Chinese; fr=French; pt=Portuguese; es=Spanish; de=German; it=Italian; ru=Russian; en=English; ja=Japanese.
Note: Data analysed for 46 languages, 1 February to 30 April 2018.
* Figure shows the top-ten languages with the most views of the ‘Big Data’ article in this period.
Source: Author using data from the Wikimedia Toolforge team [3]
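The KAI itself is a simple ratio; a minimal sketch of the calculation follows, with placeholder view counts standing in for the Wikimedia pageview statistics [3] used in the study.

```python
def knowledge_awareness_indicator(big_data_views: int, all_views: int) -> float:
    """KAI = views of the 'Big Data' article / views of all articles, scaled by 100,000."""
    return big_data_views / all_views * 100_000

# Placeholder counts for one language edition, 1 February - 30 April 2018;
# real figures would come from Wikimedia pageview statistics [3].
print(knowledge_awareness_indicator(big_data_views=250_000, all_views=6_000_000_000))
# -> 4.166666666666667
```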

 

Production. Considering that Wikipedia is built as a collaborative project, the production of content and its evolution can be used as a proxy for knowledge. A divide relating to the creation of content for the ‘Big Data’ article can be measured using two indicators. First, article size in bytes: longer articles tend to represent the curation of more knowledge. Second, number of edits: seen as representing the pace at which knowledge is changing. A larger article size and a higher number of edits may allow readers to have greater and more current knowledge about big data. On this basis, we see English far ahead of other languages: its article is significantly longer and significantly more edited.
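Article size is directly retrievable from the public MediaWiki API; the sketch below uses the standard action=query / prop=info call, whose length field gives the article size in bytes. Counting edits would additionally require paging through prop=revisions, and localised article titles in non-English editions are not resolved here (error handling is also omitted).

```python
import requests

def article_size_bytes(lang: str, title: str) -> int:
    """Fetch the size in bytes of a Wikipedia article via the MediaWiki API."""
    resp = requests.get(
        f"https://{lang}.wikipedia.org/w/api.php",
        params={"action": "query", "prop": "info", "titles": title, "format": "json"},
    )
    page = next(iter(resp.json()["query"]["pages"].values()))
    return page["length"]  # page length in bytes

# English edition; other editions use localised titles (resolvable via langlinks).
print(article_size_bytes("en", "Big data"))
```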

Consumption. The KAI provides a measure of the level of relative interest in accessing the ‘Big Data’ article which will also relate to level of awareness of big data. Where English was the production outlier, Korean and to a lesser extent Chinese are the consumption outliers: there appears to be significantly more relative accessing of the article on ‘Big Data’ in those languages than in others. This suggests a greater interest in and awareness of big data among readers using those languages. Assuming that accessed articles are read and understood, the KAI might also be a proxy for the readers’ level of knowledge about big data.

We can draw two types of conclusion from this work.

First, and addressing the specific research question, we see important differences between language groups, reflecting an important knowledge divide around big data. On the production side, much more is being written and updated in English about big data than in other languages, potentially hampering non-English speakers from engaging with big data, at least in relative terms. This suggests value in encouraging not just more non-English Wikipedia writing on big data, but also non-English research (and/or translation of English research), given that research feeds Wikipedia writing. This value may be especially notable in relation to East Asian languages given that, on the consumption side, we found much greater relative interest in and awareness of big data among Wikipedia readers in those languages.

Second, and methodologically, we can see the value of using Wikipedia to analyse knowledge divide questions. It provides a reliable source of openly-accessible, large-scale data that can be used to generate indicators that are replicable and stable over time.

This research project will continue exploring the use of Wikipedia at the country level to measure and understand the digital divide in the production and consumption of knowledge, focusing specifically on materials in Spanish.

References

[1] Andrejevic, M. (2014). ‘Big Data, Big Questions | The Big Data Divide.’ International Journal of Communication, 8.

[2] Michael, M., & Lupton, D. (2015). ‘Toward a Manifesto for the “Public Understanding of Big Data”.’ Public Understanding of Science, 25(1), 104–116. doi: 10.1177/0963662515609005

[3] Wikimedia Toolforge (2018). Available at: https://tools.wmflabs.org/

Social Media Analytics for Better Understanding of the Digital Gig Economy

27 April 2018

Owing to the proliferation of digital platforms facilitating online freelance work, such as Upwork, Fiverr and Amazon Mechanical Turk, the number of digital gig workers has been increasing continuously worldwide. In 2015, there were as many as 48 million digital gig workers [1]; between 2016 and 2017, a 26% increase in the number of such workers was reported [2].

Digital gig work is indeed attractive to many, with a number of perceived benefits for such independent workers, e.g., flexible working hours, reduced transportation costs, and a wide range of projects to choose from. However, there are also potentially distressing issues, e.g., lack of job security, tough competition and substandard wages, which are especially pronounced in developing country settings [3]. Whereas traditional media such as news outlets were unable to pinpoint or bring attention to these concerns, social media analysis – done manually by Cision in 2017 – provided a window into the thoughts of independent workers, leading to fine-grained identification of the issues they face [4].

As part of the currently ongoing Social Media Analytics Research and Teaching @ Manchester (SMART@Manchester) project funded by the University of Manchester Research Institute (UMRI), we aim to automatically gain insight into people’s perceptions of digital gig work, based on their posts on social media platforms such as Twitter and Facebook, as well as on review sites such as Glassdoor.

Specifically, we wish to test the currently prevailing assumption that digital gig work is experienced differently in the Global South compared to the Global North. Workers tend to make comparisons with their local benchmarks (i.e., office-based work), and it is plausible that in the Global North digital gig work compares unfavourably with prevailing benchmarks, whereas in the Global South it compares favourably.

The following are some of the research questions that will be addressed as part of this case study.

  1. How do digital gig workers feel about their jobs?
  2. Which topics pertaining to decent work standards do they frequently talk about?
  3. Are there any differences—in terms of sentiments and topics—across different geographic locations, or across genders?

The first question can be answered by opinion mining, while the second is addressable by topic identification. To determine whether there are differences in opinions and topics between the Global North and South, or between genders, results from opinion mining and topic identification need to be combined with social media content metadata (e.g., geographic locations).
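As a minimal sketch of that combination, assuming posts have already been collected with location metadata, the code below uses NLTK’s VADER sentiment analyser as a stand-in for the project’s own opinion-mining tools and aggregates scores by region.

```python
import pandas as pd
from nltk.sentiment.vader import SentimentIntensityAnalyzer  # run nltk.download('vader_lexicon') once

# Hypothetical, pre-collected posts with geographic metadata.
posts = pd.DataFrame({
    "text": [
        "Love the flexibility of freelancing on this platform!",
        "Rates keep falling, impossible to earn a decent wage here.",
    ],
    "region": ["Global North", "Global South"],
})

sia = SentimentIntensityAnalyzer()
posts["sentiment"] = posts["text"].map(lambda t: sia.polarity_scores(t)["compound"])

# Combine opinion scores with metadata: mean sentiment per region.
print(posts.groupby("region")["sentiment"].mean())
```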

For opinion mining, we are currently investigating the use of Illuemotion, an automatic emotion identification tool developed by University of Manchester final-year Computer Science student Elitsa Dimova. The web-based tool, a screenshot of which is provided below, is underpinned by a neural network model that analyses tweets to determine the most dominant emotions expressed, which can be any of anger, fear, joy, love, sadness, surprise and thankfulness.

The image below shows one of the tweets fetched directly by the tool from Twitter (via their API) when supplied with “#upwork” as the input query. The tweet, which speaks of the hidden dangers of being a digital gig worker, was detected by Illuemotion as expressing sadness and fear. One of our next steps is to apply the tool to a collection of thousands of tweets, allowing us to analyse them across different geographic regions as well as genders.
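Illuemotion’s internals are not published here, but a comparable tweet-emotion classifier over the seven classes listed above might be sketched in Keras as follows; the architecture and hyperparameters are assumptions for illustration, not the tool’s actual design.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, LSTM

# The seven emotion classes handled by the tool, per the description above.
EMOTIONS = ["anger", "fear", "joy", "love", "sadness", "surprise", "thankfulness"]

# Assumed hyperparameters, chosen for illustration only.
model = Sequential([
    Embedding(input_dim=20_000, output_dim=128),   # 20k-token vocabulary
    LSTM(64),                                      # encodes the tweet as a vector
    Dense(len(EMOTIONS), activation="softmax"),    # one probability per emotion
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# Training would call model.fit() on tokenised, padded tweets labelled with emotions.
```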

As we are analysing data that pertains to human emotion, ethical considerations are being taken into account, especially since we do not wish to compromise any of the digital gig workers who are social media users. For example, many Twitter users are unaware that what they post publicly can be used to identify them or to look them up. They also have a right to be forgotten (i.e., they can delete their posts as well as their accounts). For us researchers who make use of their data, this means that scholarly publications should provide only aggregated results and must not include any identifiable information. These and other ethical considerations were discussed in detail in the symposium “Ethical and Methodological Considerations for Management Research in the Digital Economy” at the recently concluded Academy of Management Specialised Conference on Big Data, held at the University of Surrey from 18 to 20 April.

Alongside two other SMART@Manchester case studies, the research questions on perceptions of digital gig work described above, together with our proposed approaches, will be presented at the upcoming 4th International Workshop on Social Media World Sensors (Sideways 2018), co-located with the 15th European Semantic Web Conference in Heraklion, Crete, Greece from 3 to 7 June.

References

[1] Kuek, S.C. et al. (2015) The Global Opportunity in Online Outsourcing. World Bank, Washington, DC. Available at: http://documents.worldbank.org/curated/en/138371468000900555/The-global-opportunity-in-online-outsourcing

[2] Lehdonvirta, V. (2017) The online gig economy grew 26% over the past year, The iLabour Project, Oxford Internet Institute. Available at: http://ilabour.oii.ox.ac.uk/the-online-gig-economy-grew-26-over-the-past-year/

[3] Heeks, R. (2017) Decent Work and the Digital Gig Economy: A Developing Country Perspective on Employment Impacts and Standards in Online Outsourcing, Crowdwork, etc, Centre for Development Informatics, Global Development Institute, University of Manchester. Available at: http://hummedia.manchester.ac.uk/institutes/gdi/publications/workingpapers/di/di_wp71.pdf

[4] Rubec, J. (2017) Study: The Dark Side of the Gig Economy, Cision. Available at: https://www.cision.com/us/2016/12/the-dark-side-of-the-gig-economy/

Big Data and Urban Transportation in India

12 February 2018

What effect are big data systems having on urban transportation?

To investigate this, the Centre for Internet and Society was commissioned by the Universities of Manchester and Sheffield to conduct a study of the big data system recently implemented by the Bengaluru Metropolitan Transport Corporation (BMTC).  The “Intelligent Transport System” (ITS) took three years to reach initial operational status in 2016, and now covers the more than five million daily passenger journeys undertaken on BMTC’s 6,400 buses.

ITS (see figure below) processes many gigabytes of data per day via three main components: vehicle tracking units that continuously transmit bus locations using the mobile cell network; online electronic ticketing machines that capture details of all ticketing transactions; and a passenger information system with linked mobile app to provide details such as bus locations, routes and arrival times.

ITS Architecture (Mishra 2016) [1]
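The three ITS data streams can be pictured as simple record types. The sketch below uses hypothetical field names; the actual ITS message formats are not described in the study.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record shapes for the three ITS components; fields are illustrative.

@dataclass
class VehicleTrackingPing:     # transmitted continuously over the mobile cell network
    bus_id: str
    latitude: float
    longitude: float
    timestamp: datetime

@dataclass
class TicketTransaction:       # captured by online electronic ticketing machines
    machine_id: str
    route: str
    fare_paise: int            # fare in paise (1/100 rupee)
    timestamp: datetime

# The passenger information system joins these streams to serve bus locations,
# routes and arrival-time estimates to the linked mobile app.
```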

At the operational level the system is functioning moderately well: the data capture and transmission components mostly work, though with some malfunctions; and the passenger-facing components are present but have data and functionality challenges that still need to be fully worked through.  Higher-level use of big data for tactical and strategic decision-making – optimising routes, reducing staff numbers, increasing operational efficiency – is intended, but not yet evidenced.

Just over a year since full roll-out, this is not unexpected, but it is a reminder that big data systems take many years to implement: in this case, at least four years to get the operational functions working, and years more to integrate big data into managerial decision-making.

Nonetheless some broader impacts can already be seen.  Big data has changed the mental model – the “imaginary” – that managers and politicians have of bus transport in Bengaluru.  Where daily operations of the bus fleet and bus crews were largely opaque to management prior to ITS, now they are increasingly visible.  Big data is thus changing the landscape of what is seen to be possible within the organisation, and has already resulted in plans for driver-only buses, and a restructuring that is removing middle management from the organisation: a layer no longer required when big data puts central management in direct contact with the operational front line.

Big data is also leading to shifts in power.  Some of these are tentative: a greater transparency of operations to the general public and civil society that may receive a step change once ITS data is openly shared.  Others are more concrete: big data is shifting power upwards in the organisation – away from front-line labour, and away from middle managers towards those in central management who have the capabilities to control and use the new data streams.

For further details of this study, see Development Informatics working paper no.72: “Big Data and Urban Transportation in India: A Bengaluru Bus Corporation Case Study”.

[1] Mishra, B. (2016) Intelligent Transport System (ITS), presentation at workshop on Smart Mobility for Bengaluru, Bengaluru, 10 June. Available at: https://www.slideshare.net/EMBARQNetwork/bmtc-intelligent-transport-system

How Big Data Changes the Locus of Organisational Power

19 September 2017

Big data can lead centres of power in organisations to move.  Recent research on this was undertaken in an Indian state electricity corporation (“Stelcorp”), reported in the paper, “Exploring Big Data for Development: An Electricity Sector Case Study from India”.

This found three shifts to be occurring, as illustrated.

Power Shifts Associated with Big Data

1. From Public to Private. Previously, Stelcorp was responsible for its own data and data systems. As a result of sectoral reforms, private firm “Digicorp” was brought in.  While de jure control remains with Stelcorp, de facto control has shifted to Digicorp.  Digicorp controls knowledge of the design, construction, operation and maintenance of the data systems; it operates those systems; and Stelcorp staff have been reduced to a clerical service role.  In theory, Digicorp could be replaced.  But as seen in other public-private partnerships, in practice there is an asymmetry of power and dependency that has locked in the private partner.

2. From Workers to Managers. With the introduction of online meters for bulk and urban electricity consumers, the requirement for human meter-readers has fallen. As a result, during 2013-2016, 40% of meter-readers lost their jobs.  For those who remain, the writing is on the wall: online metering will spread throughout the rest of the electricity network, and their jobs will slowly but steadily be automated out of existence.  Meanwhile, the data they collect is less critical than previously, as it forms a declining proportion of all meter data; and they have less control, being reduced to merely capturing data on hand-held devices (they barely own or access this data, and neither use nor regulate it).  As a result, Stelcorp managers are decreasingly resource-dependent on the meter-readers, and power has shifted away from the latter towards the former.

3. From Local to Central Managers. The advent of big data led to the creation of a central Finance and Energy Management Unit (FEMU). Previously, managers at divisional and zonal levels were accountable to their immediate superiors, typically within that level: it was those superiors who collected performance data on the managers and negotiated its implications, and those superiors who held power over their junior managers.  Data was relatively “sticky”, tending to be restricted to localised enclaves within the organisation.  This is no longer the case.

Now, all forms of data flow readily to FEMU.  It sees all that goes on within Stelcorp (at least to the extent reflected by current online data) and is able to drill down through zonal and divisional data to individual assets.  It holds regular performance meetings with Stelcorp managers, and has introduced more of an audit and performance management culture.  As a result, managers now largely see themselves as accountable to FEMU.

For further details, including the models of resource dependency and data-related power that underpin this analysis, please refer to the working paper on this topic.

Big Data and Electoral Politics in India

What happens when big data and big politics collide?  One answer arises from a recent study of big data in the electricity distribution sector in an Indian state: “Exploring Big Data for Development: An Electricity Sector Case Study from India”.

Kolkata electricity meters [1]

The state electricity corporation has introduced millions of online digital meters that measure electricity flow along the distribution network and down to the level of consumers.  Producing a large stream of real-time data, these innovations should have addressed a critical problem in India: theft and non-payment by consumers, which creates losses of up to one-third of all supplied power.  But they did not.  Why should that be?

Big data does reduce some losses: technical losses from electrical resistance and faults are down; payment losses from urban consumers are down.  But the big data era has seen an unprecedented expansion of rural electrification, and in rural areas, payment losses have risen to 50% or more.  In other words, the corporation receives less than half the revenue it should given the electricity it is supplying to rural areas.

The expansion in rural electrification has been mandated by politicians.  The high level of rural payment losses has been condoned by politicians, given the significant positive association between levels of electricity non-payment and the likelihood of seat retention at an election.

Is this the silencing of big data in the face of big politics: the capability for accurate metering and billing of almost all consumers simply being overridden by electoral imperatives?  Not quite, because big data has been involved via an offsetting effect, and an epistemic effect.

  1. Offsetting Effect. Big data-driven technical and urban consumer loss reductions have allowed the State Government to “get away” with its political approach to rural electrification. The two effects of technical/urban loss reduction and political loss increase have roughly balanced one another out: a disappointing aggregate outcome, but one that just falls under the threshold that would trigger direct intervention by the regulators or by Central Government (a toy calculation after this list illustrates the mechanics).
  2. Epistemic Effect. Big data creates a separate virtual model of phenomena: a so-called “data double”. This in turn can alter the “imaginaries” of those involved – the mental models and worldviews they have about the phenomena – and the wider discourse about the phenomena.
    This has happened in India.  Big data has created a new imaginary for electricity, particularly within the minds of politicians.  Before big data, the policy paradigm was one that saw electricity in terms of constraint: geographic constraint such that not all areas could be connected, and supply constraint such that “load-shedding” – regular blackouts and brownouts – was regarded as integral.
    After big data, the new paradigm is one of continuous, high-quality, universal electricity.  Plans and promises are now based on the idea that all districts – and all voters – can have 24 x 7 power.
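The toy calculation below shows the offsetting mechanics with invented figures; the real loss rates and consumption shares differ.

```python
# Invented illustrative figures, purely to show the offsetting mechanics.
urban_share, rural_share = 0.7, 0.3   # shares of supplied power

losses_before = urban_share * 0.30 + rural_share * 0.30   # loss rates pre big data
losses_after  = urban_share * 0.20 + rural_share * 0.50   # urban falls, rural rises

print(f"aggregate losses before: {losses_before:.0%}")  # 30%
print(f"aggregate losses after:  {losses_after:.0%}")   # 29%
# Technical/urban gains roughly cancel the politically condoned rural increase.
```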

In sum, one thing we know of digital systems is that they have unanticipated consequences.  This has been true of big data in this Indian state.  Far from reducing losses, the data-enabled growth in electricity connectivity has helped fuel a politically-enabled growth in free appropriation of electricity.

For further details, please refer to the working paper on this topic.

[1] Credit: Jorge Royan (Own work) CC-BY-SA-3.0, via Wikimedia Commons https://commons.wikimedia.org/wiki/File:India_-_Kolkata_electricity_meters_-_3832.jpg

The Affordances and Impacts of Data-Intensive Development

What is special about “data-intensive development”: the growing presence and application of data in the processes of international development?

We can identify three levels of understanding: qualities, affordances, and development impacts.

A. Data Qualities

Overused they may be, but it still helps to recall the 3Vs.  Data-intensive development is based on a greater volume, velocity and variety of data than previously seen.  These are the core differentiating qualities of data from which affordances and impacts flow.

B. Data Affordances

These qualities are inherent functionalities of data.  From these qualities, combined with purposive use by individuals or organisations, the following affordances emerge[1]:

  • Datafication: an expansion of the phenomena about which data are held. A greater breadth: holding data about more things. A greater depth: holding more data about things.  And a greater granularity: holding more detailed data about things.  This is accelerated by the second affordance . . .
  • Digitisation: not just the conversion of analogue to digital data but the same conversion for all parts of the information value chain. Data processing and visualisation for development becomes digital; through growth of algorithms, development decision-making becomes digital; through growth of automation and smart technology, development action becomes digital.  Digitisation means dematerialisation of data (its separation from physical media) and liquification of data (its consequent fluidity of movement across media and networks), which underlie the third affordance . . .
  • Generativity: the use of data in ways not planned at the origination of the data. In particular, data’s reprogrammability (i.e. using data gathered for one purpose for a different purpose); and data’s recombinability (i.e. mashing up different sets of data to get additional, unplanned value from their intersection). A small code sketch of recombination follows this list.
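As a concrete sketch of recombinability: two datasets gathered for unrelated purposes are merged to yield unplanned value at their intersection. The datasets and fields are invented for illustration.

```python
import pandas as pd

# Two datasets gathered for unrelated purposes; names and values are invented.
clinic_visits = pd.DataFrame({"district": ["A", "B", "C"], "malaria_cases": [120, 45, 300]})
rainfall      = pd.DataFrame({"district": ["A", "B", "C"], "rainfall_mm":   [310, 90, 540]})

# Recombination: mashing the two up yields insight neither was collected to provide.
merged = clinic_visits.merge(rainfall, on="district")
print(merged["malaria_cases"].corr(merged["rainfall_mm"]))  # cases vs rainfall correlation
```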

C. Data-Intensive Development Impacts

In turn, these affordances give rise to development impacts.  There are many ways in which these could be described, with much written about the (claimed) positive impacts.  Here I use a more critical eye to select four that can be connected to the concept of data (in)justice for development[2]:

i. (In)Visibility. The affordances of data create a far greater visibility for those development entities – people, organisations, processes, things, etc. – about which data is captured. They can more readily be part of development activity and decision making.  And they can also suffer loss of privacy and growth in surveillance from the state and private sector[3].

Conversely, those entities not represented in digital data suffer greater invisibility, as they are thrown further into shadow and exclusion from development decision-making.

Dematerialisation and generativity also make the whole information value chain increasingly invisible.  Data is gathered without leaving a physical trace.  Data is processed and decisions are made by algorithms whose code is not subject to external scrutiny.  The values, assumptions and biases inscribed into data, code and algorithms are unseen.

ii. Abstraction. A shift from primacy of the physical representation of development entities to their abstract representation: what Taylor & Broeders (2015) call the “data doubles” of entities, and the “shadow maps” of physical geographies. This abstraction typically represents a shift from qualitative to quantitative representation (and a shift in visibility from the physical to the abstract; from the real thing to its data imaginary).

iii. Determinism.  Often thought of in terms of solutionism: the growing use of data- and technology-driven approaches to development.  Alongside this growth in technological determinism of development, there is an epistemic determinism that sidelines one type of knowledge (messy, local, subjective) in favour of a different type of knowledge (remote, calculable and claiming-to-be-but-resolutely-not objective).  We could also identify the algorithmic determinism that increasingly shapes development decisions.

iv. (Dis)Empowerment. As the affordances of data change the information value chain, they facilitate change in the bases of power. Those who own and control the data, information, knowledge, decisions and actions of the new data-intensive value chains – including its code, visualisations, abstractions, algorithms, terminologies, capabilities, etc – are gaining in power.  Those who do not are losing power in relative terms.

D. Review

The idea of functionalities leading to affordances leading to impacts is too data-deterministic.  These impacts are not written, and they will vary through the different structural inscriptions imprinted into data systems, and through the space for agency that new technologies always permit in international development.  Equally, though, we should avoid social determinism.  The technology of data systems is altering the landscape of international development.  Just as ICT4D research and practice must embrace the affordances of its digital technologies, so data-intensive development must do likewise.

[1] Developed from: Lycett, M. (2013) ‘Datafication’: making sense of (big) data in a complex world. European Journal of Information Systems, 22(4), 381-386; Nambisan, S. (2016) Digital entrepreneurship: toward a digital technology perspective of entrepreneurship, Entrepreneurship Theory and Practice, advance online publication

[2] Developed from: Johnson, J.A. (2014) From open data to information justice. Ethics And Information Technology, 16(4), 263-274; Taylor, L. & Broeders, D. (2015) In the name of development: power, profit and the datafication of the global South. Geoforum, 64, 229-237; Sengupta, R., Heeks, R., Chattapadhyay, S. & Foster, C. (2017) Exploring Big Data for Development: An Electricity Sector Case Study from India, GDI Development Informatics Working Paper no.66, University of Manchester, UK; Shaw, J. & Graham, M. (2017) An informational right to the city? Code, content, control, and the urbanization of information. Antipode, advance online publication http://onlinelibrary.wiley.com/doi/10.1111/anti.12312/full; Taylor, L. (2017) What Is Data Justice? The Case for Connecting Digital Rights and Freedoms on the Global Level, TILT, Tilburg University, Netherlands  http://dx.doi.org/10.2139/ssrn.2918779

[3] What Taylor & Broeders (2015) not entirely convincingly argue is a change from overt and consensual “legibility” to tacit and contentious “visibility” of citizens (who now morph into data subjects).

 
