‘California recently passed the Silenced No More Act, making it illegal to silence workers from speaking out about racism, harassment and other forms of abuse in the workplace.’ Photograph: Marcio José Sánchez/AP

For truly ethical AI, its research must be independent from big tech

Timnit Gebru

We must curb the power of Silicon Valley and protect those who speak up about the harms of AI

A year ago I found out, from one of my direct reports, that I had apparently resigned. I had just been fired from Google in one of the most disrespectful ways I could imagine.

Thanks to organizing done by former and current Google employees and many others, Google did not succeed in smearing my work or reputation, although they tried. My firing made headlines because of the worker organizing that has been building up in the tech world, often due to the labor of people who are already marginalized, many of whose names we do not know. Since I was fired last December, there have been many developments in tech worker organizing and whistleblowing. The most publicized of these was Frances Haugen’s testimony in Congress; echoing what Sophie Zhang, a data scientist fired from Facebook, had previously said, Haugen argued that the company prioritizes growth over all else, even when it knows the deadly consequences of doing so.

I’ve seen this happen firsthand. On 3 November 2020, a war broke out in Ethiopia, the country I was born and raised in. The immediate effects of unchecked misinformation, hate speech and “alternative facts” on social media have been devastating. On 30 October of this year, I and many others reported a clear genocidal call in Amharic to Facebook. The company responded by saying that the post did not violate its policies. Only after many reporters asked the company why this clear call to genocide didn’t violate Facebook’s policies – and only after the post had already been shared, liked and commented on by many – did the company remove it.

Other platforms like YouTube have not received the scrutiny they warrant, despite studies and articles showing examples of how they are used by various groups, including regimes, to harass citizens. Twitter and especially TikTok, Telegram and Clubhouse have the same issues but are discussed much less. When I wrote a paper outlining the harms posed by models trained using data from these platforms, I was fired by Google.

When people ask what regulations need to be in place to safeguard us from the unsafe uses of AI we’ve been seeing, I always start with labor protections and antitrust measures. I can tell that some people find that answer disappointing – perhaps because they expect me to mention regulations specific to the technology itself. While those are important, the #1 thing that would safeguard us from unsafe uses of AI is curbing the power of the companies who develop it and increasing the power of those who speak up against the harms of AI and these companies’ practices. Thanks to the hard work of Ifeoma Ozoma and her collaborators, California recently passed the Silenced No More Act, making it illegal to silence workers from speaking out about racism, harassment and other forms of abuse in the workplace. This needs to be universal. In addition, we need much stronger punishment of companies that break already existing laws, such as Amazon with its aggressive union busting. When workers have power, it creates a layer of checks and balances on the tech billionaires whose whim-driven decisions increasingly affect the entire world.

I see this monopoly outside big tech as well. I recently launched an AI research institute that hopes to operate under incentives that are different from those of big tech companies and the elite academic institutions that feed them. During this endeavor, I noticed that the same big tech leaders who push out people like me are also the leaders who control big philanthropy and the government’s agenda for the future of AI research. If I speak up and antagonize a potential funder, it is not only my job on the line, but the jobs of others at the institute. And although there are some – albeit inadequate – laws that attempt to protect worker organizing, there is no such thing in the fundraising world.

So what is the way forward? In order to truly have checks and balances, we should not have the same people setting the agendas of big tech, research, government and the non-profit sector. We need alternatives. We need governments around the world to invest in communities building technology that genuinely benefits them, rather than pursuing an agenda that is set by big tech or the military. Contrary to big tech executives’ cold-war style rhetoric about an arms race, what truly stifles innovation is the current arrangement where a few people build harmful technology and others constantly work to prevent harm, unable to find the time, space or resources to implement their own vision of the future.

We need an independent source of government funding to nourish independent AI research institutes that can be alternatives to the hugely concentrated power of a few large tech companies and the elite universities closely intertwined with them. Only when we change the incentive structure will we see technology that prioritizes the wellbeing of citizens – rather than a continued race to figure out how to kill more people more efficiently, or make the most money for a handful of corporations around the world.

  • Timnit Gebru is the founder and executive director of the Distributed AI Research Institute (DAIR). She was formerly co-lead of Google’s Ethical AI team
