
Researchers Look at How ‘Algorithmic Coloniality’ May Hamper Artificial Intelligence

Jul 19th, 2020 6:00am

As artificial intelligence (AI) is increasingly transforming our world, a new paper suggests a way to re-examine the society we're already living in now to chart a better way forward. "Computer systems embody values," explained paper co-author Shakir Mohamed. "And to build a deeper understanding of values and power is why we turn to the critical theory and especially decolonial theories."

The paper defines "decolonisation" as "the intellectual, political, economic and societal work concerned with the restoration of land and life following the end of historical colonial periods." It seeks to root out the vestiges of colonial thinking that remain with us today, noting that "territorial appropriation, exploitation of the natural environment and of human labor, and direct control of social structures are the characteristics of historical colonialism."

Mohamed is a research scientist in statistical machine learning and AI at DeepMind, an AI research company. He teamed up with DeepMind senior research scientist William Isaac, and with Marie-Therese Png, a Ph.D. candidate studying algorithmic coloniality at the Oxford Internet Institute. Together they’ve produced a 28-page paper exploring a role for two kinds of theories — both post-colonial and decolonial — “in understanding and shaping the ongoing advances in artificial intelligence.”

The paper includes a warning that AI systems “pose significant risks, especially to already vulnerable peoples.” But in the end, it also attempts to provide some workable solutions.

Critical Perspectives

"Weapons of Math Destruction" book cover (via Wikipedia)

The researchers' paper cites Cathy O'Neil's 2016 book "Weapons of Math Destruction," which argues that big data increases inequality and threatens democracy in high-stakes areas including policing, lending, and insurance.

For algorithmic (or automated) oppression in action, the paper points to “predictive” surveillance systems that “risk entrenching historical injustice and amplify[ing] social biases in the data used to develop them,” as well as algorithmic “decision systems” used in the U.S. criminal justice system “despite significant evidence of shortcomings, such as the linking of criminal datasets to patterns of discriminatory policing.”

Commenting on the work, VentureBeat suggests the authors “incorporate a sentiment expressed in an open letter Black members of the AI and computing community released last month during Black Lives Matter protests, which asks AI practitioners to recognize the ways their creations may support racism and systemic oppression in areas like housing, education, health care, and employment.”

But though it's a very timely paper, that's mostly a coincidence, says co-author William Isaac. He told me the paper had its roots in a blog post Shakir Mohamed wrote almost two years ago outlining some of the initial ideas, influenced by work in related areas like data colonialism. Then last year, co-author Marie-Therese Png helped organize a panel during Oxford's Intercultural Digital Ethics Symposium, which led to the paper.

In the paper, the researchers provide a stunning example of a widely used algorithmic screening tool for a "high-risk care management" healthcare program which, a 2019 study found, "relied on the predictive utility of an individual's health expenses." The end result? Black patients were rejected for the healthcare program more often than white patients, "exacerbating structural inequities in the US healthcare system."

The paper also looks at how algorithm-using industries and institutional actors “take advantage of (often already marginalized) people by unfair or unethical means,” including the “ghost workers” who label training data, a phenomenon which involves populations along what one researcher called “the old fault lines of colonialism.” And the paper provides examples of what it calls “clearly exploitative situations, where organizations use countries outside of their own as testing grounds — specifically because they lack pre-existing safeguards and regulations around data and its use, or because the mode of testing would violate laws in their home countries.”

They cite the example of Cambridge Analytica, which according to Nanjala Nyabola’s “Digital Democracy, Analogue Politics” beta-tested algorithms for influencing voters during elections in Kenya and Nigeria in part because those countries had weak data protection laws.

Moving Forward

So what can we do to make things better? The paper offers three ways, starting with fostering a critical technical practice of AI. Using a critical eye, teams can transform diversity from a moral imperative (or an issue of building more effective teams) into an ongoing practice through which “issues of homogenization, power, values and cultural colonialism are directly confronted. Such diversity changes the way teams and organizations think at a fundamental level.”

Secondly, they suggest pursuing “the renewal of affective and political communities.” AI “is shaped by, and shapes, the evolution of contemporary political community,” the paper warns, arguing that decolonial principles can help shape new communities. And finally, the paper also notes that colonial powers ultimately learn from the people who have been colonized, a “reverse tutelage” that the researchers hope can be identified and used to change colonial views. “Deciding what counts as valid knowledge, what is included within a dataset and what is ignored and unquestioned is a form of power held by AI researchers that cannot be left unacknowledged.”

They suggest community-engaged research and "systems of meaningful intercultural dialogue." (In 2016, the IEEE even published guidelines on ethically aligned design.)

The paper argues that affected communities need a role in shaping AI systems, including ways to challenge its conclusions — as well as ownership of the resulting systems. There are already examples of systems showing “paternalistic thinking and imbalances in authority and choice… The decolonial imperative asks for a move from attitudes of technological benevolence and paternalism towards solidarity.”

One approach: turn to existing grass-roots groups for intercultural dialogue and examples of alternative communities, which are already active across the world. (The researchers’ examples include Data for Black Lives, the Deep Learning Indaba, Black in AI and Queer in AI.)

And AI-industry workers can make a big difference, Isaac tells me, pointing out that their paper offers “a series of tactics which researchers or teams could implement now to foster a more critical and self-reflexive approach to their work.”

“Specifically, teams could begin utilizing existing tools such as the diverse voices methodology or citizens juries to integrate feedback from impacted communities, model cards and datasheets to prompt internal reflection and accountability for data collection and model design decisions, or harms modeling to provide foresight on the potential impacts of a given project or application.

“While these tools are not comprehensive, they hopefully will encourage a shift toward a more critical and reflexive research and technology practice.”

The paper’s conclusion urges new ways to create “inclusive dialogue between stakeholders” in AI development where marginalized groups can influence decision-making while “avoiding the potential for predatory inclusion, and continued algorithmic oppression, exploitation, and dispossession.”

As Isaac reminded me, “We all have a shared responsibility within the field to ensure that present and historical biases are not exacerbated by new innovations and that there is sincere and substantive inclusion of the concerns and needs of impacted communities.”



Feature image by John Hain from Pixabay.
