Patterns
Perspective
Algorithmic injustice: a relational ethics approach
The bigger picture
Machine learning (ML) increasingly permeates every sphere of life. Complex, contextual, continually moving social and political challenges are automated and packaged as mathematical and engineering problems. Simultaneously, research on algorithmic injustice shows how ML automates and perpetuates historical, often unjust and discriminatory, patterns. The negative consequences of algorithmic systems, especially on marginalized communities, have spurred work on algorithmic fairness. Still, most of this work is narrow in scope, focusing on fine-tuning specific models, making datasets more inclusive/representative, and “debiasing” datasets. Although such work can constitute part of the remedy, a fundamentally equitable path must examine the wider picture, such as unquestioned or intuitive assumptions in datasets, current and historical injustices, and power asymmetries.
As such, this work does not offer a list of implementable solutions towards a “fair” system, but rather is a call for scholars and practitioners to critically examine the field. It is taken for granted that ML and data science are fields that solve problems using data and algorithms. Thus, challenges are often formulated in problem/solution terms. One consequence of this discourse is that challenges that resist such a formulation, those with no clear “solutions”, and approaches that primarily offer critical analysis are systematically discarded and perceived as outside the scope of these fields. This work hopes for a system-wide acceptance of critical work as an essential component of AI ethics, fairness, and justice.
Abeba Birhane (she/her) is a cognitive science PhD candidate at the Complex Software Lab at University College Dublin, Ireland. Her interdisciplinary research aims to connect the dots between complex adaptive systems, machine learning, and critical race studies. More specifically, Birhane studies how machine prediction, especially of social outcomes, is dubious and potentially harmful to vulnerable and marginalized communities.