Fairwork vis-à-vis ILO Decent Work Standards

How does the Fairwork framework of five decent work standards in the gig economy – fair pay, conditions, management, contracts, representation – compare to more conventional frameworks?

As explained in a recently published paper, Fairwork is a simplified, revised and measurable version of the 11 elements of the International Labour Organization’s decent work agenda.

Comparing the two, as shown in the table above, Fairwork is not as comprehensive.  Some ILO elements were excluded because they were seen as unrelated to Fairwork’s purpose.  For example, the contextual elements lie outside the control of platforms, and no evidence was found of child or forced labour.  Quantum-of-employment measures are not directly relevant to Fairwork’s aims, though it would be informative to know whether platforms are creating new work or merely substituting for existing work.

While outside the scope of Fairwork’s principles, work security and flexibility were investigated via open questions in worker interviews.  Workers did raise flexibility and autonomy as positive attributes of their gig economy work.  These supposed benefits are arguably more perceptual than real: hours of work are often determined by client demand and shaped by incentive payments that platforms offer for working at certain times or for certain shift lengths, and work is recorded and managed via the app and platform to a significant extent.

In sum, the Fairwork framework covers the decent work-related issues identified in the research literature on platforms, and covers the majority of decent work elements within the ILO framework.  Its ratings could nonetheless be contextualised in a number of ways by adding in broader findings about national socio-economic context, about any creation of work and autonomy by the gig economy, about dimensions of inequality within and between gig sectors, and about longer-term job (in)security and precarity.

You can find more detail about this and other foundations for the Fairwork project in the open-access paper, “Systematic Evaluation of Gig Work Against Decent Work Standards: The Development and Application of the Fairwork Framework”, published in the journal The Information Society.

The Rise of Digital Self-Exclusion

Why are marginalised groups self-excluding from digital systems?

The digital exclusion problem used to be one of people stuck outside the house, unable to get in: for example, the digital divide preventing groups from accessing the benefits of digital systems.

Recently, a new digital exclusion issue has arisen: people deciding they’d rather stay outside the house.  Some examples . . .

1. Informal Settlement Residents

In researching for our paper, “Datafication, Development and Marginalised Urban Communities: An Applied Data Justice Framework”, my co-author Satyarupa Shekhar identified this pattern among informal settlement residents:

“businesses such as schools and pharmacies in Kibera did not wish to be [digitally] mapped.  They feared visibility to the state might lead to closure if their location became known and their informal status or activities (e.g. sales of stolen drugs) were then discovered …

… Particular settlements in Chennai refused to participate in data-gathering.  They believed that drawing attention to their existence and informal status – being under the ‘gaze of the state’ – would increase likelihood of eviction”

2. Refugees

The recent Information Technology for Development paper “Identity at the Margins” finds self-exclusion among refugees in relation to registration on UNHCR digital ID systems:

“Some participants were so concerned about the potential consequences of data sharing that they avoided registering altogether. For example, a male Syrian refugee living with his family in a one-room apartment in Lebanon told us:

Everybody was registering with the UN, but we did not. We were suspicious and scared. We don’t know if the UN shares information with anyone, so that is why I did not share many things with them.”

3. Migrants

The chapter, “The Dilemma of Undocumented Migrants Invisible to Covid-19 Counting”, in the recent online book Covid-19 from the Margins, outlines the dilemma of undocumented migrants who are unwilling to register with health systems despite contracting Covid, for fear that this would alert other arms of government, which would then deport them.

4. LGBTQ People

The report, “Privacy, Anonymity, Visibility: Dilemmas in Tech Use by Marginalised Communities” explains how some LGBTQ people in Kenya have been unwilling to use digital systems designed to help them report discriminatory violence because of fears that their identities would become known.

Analysis

In one sense there is nothing new here.  Individuals have for centuries sought to avoid being included in government censuses and other records: to avoid tax, to avoid being conscripted for war, etc.

The difference with digital is the ease with which data can be transmitted, leading particularly to a fear that it will find its way to the agencies of state security. This fear applies not just to data collection by other state agencies but also to NGOs (who were undertaking the community mappings in the first examples) and to international organisations like UNHCR.

Whereas incorporation into historical data systems such as the census offered no individual benefit, this is not true of the digital systems cited above.  In all these cases, the marginalised are forgoing direct benefits of incorporation – better community decision-making, access to UN assistance, access to healthcare – because these benefits are outweighed by the fear of perceived harm arising from visibility to particular arms of the state.

All this in turn can be understood in terms of data justice models such as the one below from “Datafication, Development and Marginalised Urban Communities: An Applied Data Justice Framework”.  At a basic level, the perceived harms of inclusion in these digital systems outweigh the perceived benefits.  But these perceptions are themselves shaped by the structural and historical context:

– A lack of credible, known data rights for those in marginalised groups

– A structural relation of perceived powerlessness vis-à-vis the state

– A lack of institutions and resources with which that powerlessness could be counteracted

Unless those wider, deeper causes can be addressed, the marginalised will continue to self-exclude from digital systems.