Patterns
Volume 2, Issue 2, 12 February 2021, 100205

Perspective
Algorithmic injustice: a relational ethics approach

https://doi.org/10.1016/j.patter.2021.100205
Open access under a Creative Commons license

The bigger picture

Machine learning (ML) increasingly permeates every sphere of life. Complex, contextual, continually shifting social and political challenges are automated and packaged as mathematical and engineering problems. Simultaneously, research on algorithmic injustice shows how ML automates and perpetuates historical patterns that are often unjust and discriminatory. The negative consequences of algorithmic systems, especially for marginalized communities, have spurred work on algorithmic fairness. Still, most of this work is narrow in scope, focusing on fine-tuning specific models, making datasets more inclusive and representative, and "debiasing" datasets. Although such work can constitute part of the remedy, a fundamentally equitable path must examine the wider picture: unquestioned or intuitive assumptions embedded in datasets, current and historical injustices, and power asymmetries.

As such, this work does not offer a list of implementable solutions toward a "fair" system; rather, it is a call for scholars and practitioners to critically examine the field. It is taken for granted that ML and data science are fields that solve problems using data and algorithms, so challenges are routinely cast in problem/solution terms. One consequence of this framing is that challenges that resist a problem/solution formulation, that have no clear "solutions," or that are approached primarily through critical analysis are systematically discarded and perceived as outside the scope of these fields. This work hopes for a field-wide acceptance of critical work as an essential component of AI ethics, fairness, and justice.

Summary

It has become trivial to point out that algorithmic systems increasingly pervade the social sphere. Improved efficiency, the hallmark of these systems, drives their mass integration into day-to-day life. However, as a robust body of research on algorithmic injustice shows, algorithmic systems, especially when used to sort and predict social outcomes, are not only inadequate but actively perpetuate harm. In particular, a persistent and recurrent trend within the literature indicates that society's most vulnerable are disproportionately impacted. When algorithmic injustice and harm are brought to the fore, most of the solutions on offer (1) revolve around technical fixes and (2) do not center disproportionately impacted communities. This paper proposes a fundamental shift, from rational to relational, in thinking about personhood, data, justice, and everything in between, and positions ethics as something that goes above and beyond technical solutions. Outlining an ethics built on the foundations of relationality, this paper calls for a rethinking of justice and ethics as a set of broad, contingent, and fluid concepts and down-to-earth practices that are best viewed as a habit, not a mere methodology, for data science. As such, this paper mainly offers critical examination and reflection rather than "solutions."

Data Science Maturity

DSML 1: Concept: Basic principles of a new data science output observed and reported

Keywords

justice
ethics
Afro-feminism
relational epistemology
data science
complex systems
enaction
embodiment
artificial intelligence
machine learning

About the author

Abeba Birhane (she/her) is a cognitive science PhD candidate at the Complex Software Lab, University College Dublin, Ireland. Her interdisciplinary research aims to connect the dots between complex adaptive systems, machine learning, and critical race studies. More specifically, Birhane studies why machine prediction, especially of social outcomes, is dubious and potentially harmful to vulnerable and marginalized communities.