
Artificial intelligence is creating a new colonial world order

An MIT Technology Review series investigates how AI is enriching a powerful few by dispossessing communities that have been dispossessed before.

Illustration: Edel Rodriguez
April 19, 2022

This story is the introduction to MIT Technology Review’s series on AI colonialism, which was supported by the MIT Knight Science Journalism Fellowship Program and the Pulitzer Center. Read the full series here.

My husband and I love to eat and to learn about history. So shortly after we married, we chose to honeymoon along the southern coast of Spain. The region, historically ruled by Greeks, Romans, Muslims, and Christians in turn, is famed for its stunning architecture and rich fusion of cuisines.

Little did I know how much this personal trip would intersect with my reporting. Over the last few years, an increasing number of scholars have argued that the impact of AI is repeating the patterns of colonial history. European colonialism, they say, was characterized by the violent capture of land, extraction of resources, and exploitation of people—for example, through slavery—for the economic enrichment of the conquering country. While it would diminish the depth of past traumas to say the AI industry is repeating this violence today, it is now using other, more insidious means to enrich the wealthy and powerful at the great expense of the poor.

I had already begun to investigate these claims when my husband and I began to journey through Seville, Córdoba, Granada, and Barcelona. As I simultaneously read The Costs of Connection, one of the foundational texts that first proposed the concept of “data colonialism,” I realized that these cities were the birthplaces of European colonialism—cities through which Christopher Columbus traveled as he voyaged back and forth to the Americas, and through which the Spanish crown transformed the world order.

In Barcelona especially, physical remnants of this past abound. The city is known for its Catalan modernism, an iconic aesthetic popularized by Antoni Gaudí, the mastermind behind the Sagrada Familia. The architectural movement was born in part from the investments of wealthy Spanish families who amassed riches from their colonial businesses and funneled the money into lavish mansions.

One of the most famous, known as the Casa Lleó Morera, was built early in the 20th century with profits made from the sugar trade in Puerto Rico. While tourists from around the world today visit the mansion for its beauty, Puerto Rico still suffers from food insecurity because for so long its fertile land produced cash crops for Spanish merchants instead of sustenance for the local people.

As we stood in front of the intricately carved façade, which features flora, mythical creatures, and four women holding the four greatest inventions of the time (a lightbulb, a telephone, a gramophone, and a camera), I could see the parallels between this embodiment of colonial extraction and global AI development.

The AI industry does not seek to capture land as the conquistadors of the Caribbean and Latin America did, but the same desire for profit drives it to expand its reach. The more users a company can acquire for its products, the more subjects it can have for its algorithms, and the more resources—data—it can harvest from their activities, their movements, and even their bodies.

Neither does the industry still exploit labor through mass-scale slavery, which necessitated the propagation of racist beliefs that dehumanized entire populations. But it has developed new ways of exploiting cheap and precarious labor, often in the Global South, shaped by implicit ideas that such populations don’t need—or are less deserving of—livable wages and economic stability.

MIT Technology Review's new AI Colonialism series digs into these and other parallels between AI development and the colonial past by examining communities that have been profoundly changed by the technology. In part one, we head to South Africa, where AI surveillance tools, built on the extraction of people’s behaviors and faces, are re-entrenching racial hierarchies and fueling a digital apartheid.

In part two, we head to Venezuela, where AI data-labeling firms found cheap and desperate workers amid a devastating economic crisis, creating a new model of labor exploitation. The series also looks at ways to move away from these dynamics. In part three, we visit ride-hailing drivers in Indonesia who, by building power through community, are learning to resist algorithmic control and fragmentation. In part four, we end in Aotearoa, the Māori name for New Zealand, where an Indigenous couple are wresting back control of their community’s data to revitalize its language.

Together, the stories reveal how AI is impoverishing the communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires. They also suggest how AI could be so much more—a way for the historically dispossessed to reassert their culture, their voice, and their right to determine their own future.

That is ultimately the aim of this series: to broaden the view of AI’s impact on society so as to begin to figure out how things could be different. It’s not possible to talk about “AI for everyone” (Google’s rhetoric), “responsible AI” (Facebook’s rhetoric), or “broadly distribut[ing]” its benefits (OpenAI’s rhetoric) without honestly acknowledging and confronting the obstacles in the way.

Now a new generation of scholars is championing a “decolonial AI” to return power from the Global North back to the Global South, from Silicon Valley back to the people. My hope is that this series can provide a prompt for what “decolonial AI” might look like—and an invitation, because there’s so much more to explore.

Read MIT Technology Review's series on AI Colonialism here.

