
AI Weekly: A deep learning pioneer’s teachable moment on AI bias

Facebook chief AI scientist Yann LeCun speaks with Cade Metz at a 2016 Wired event
Image Credit: Brian Ach/Getty Images

I’ve lost track of how many times I’ve heard someone say recently that Timnit Gebru is saving the world. Margaret Mitchell, her co-lead on Google’s AI ethics team, said it a few days ago, when Gebru led events around race at Google. Gebru’s work with Joy Buolamwini demonstrating race and gender bias in facial recognition is one of the reasons lawmakers in Congress want to prohibit federal government use of the technology. That landmark work also played a major role in Amazon, IBM, and Microsoft agreeing to halt or end facial recognition sales to police.

Earlier this week, organizers of the Computer Vision and Pattern Recognition (CVPR) conference, one of the biggest AI research events in the world, took the unusual step of calling Gebru’s CVPR tutorial illustrating how bias in AI goes far beyond data “required viewing for us all.”

That’s what made the situation with Facebook chief AI scientist Yann LeCun this week so perplexing.

The entire episode between two of the best-known AI researchers in the world started about a week ago with the release of PULSE, a computer vision model created by Duke University researchers, which they claim can generate realistic, high-resolution images of people from a pixelated photo.


The controversial system combines generative adversarial networks (GANs) with self-supervised learning. It builds on the StyleGAN model, which was trained on the Flickr-Faces-HQ (FFHQ) data set compiled last year by a team of Nvidia researchers. PULSE seemed to work fine on White people, but when one observer fed it a pixelated photo of President Obama, it produced a depixelated image of a White man. Other outputs gave Samuel L. Jackson blond hair, turned Muhammad Ali into a White man, and assigned White features to Asian women.
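That failure mode is easier to understand once you see how the system works. PULSE doesn’t sharpen the pixels it is given; it searches the latent space of a pretrained generator for a high-resolution face that, when shrunk back down, matches the low-res input. Below is a minimal sketch of that latent-search idea in PyTorch. It is illustrative only: the tiny untrained generator stands in for StyleGAN, and the latent size, optimizer, and step count are arbitrary choices, not PULSE’s actual settings.

```python
# Minimal sketch of the PULSE-style idea: search a generator's latent
# space for a face whose *downscaled* version matches the pixelated
# input. A toy untrained network stands in for StyleGAN here; the real
# system uses pretrained StyleGAN weights plus extra regularization.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGenerator(nn.Module):
    """Stand-in for StyleGAN: maps a latent vector to a 32x32 RGB image."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0),  # 1x1 -> 4x4
            nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),          # 4x4 -> 8x8
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1),           # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1),            # 16x16 -> 32x32
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

def upsample_by_latent_search(generator, low_res, scale=4, steps=200, lr=0.05):
    """Find a latent z such that downscaling G(z) reproduces low_res."""
    for p in generator.parameters():   # only the latent code is optimized
        p.requires_grad_(False)
    z = torch.randn(1, 64, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        high_res = generator(z)
        # The only supervision signal: the generated face, shrunk back
        # down, should match the pixelated input (the "downscaling loss").
        loss = F.mse_loss(F.avg_pool2d(high_res, scale), low_res)
        loss.backward()
        opt.step()
    return generator(z).detach()

g = ToyGenerator()
pixelated = torch.rand(1, 3, 8, 8) * 2 - 1  # stand-in for a real photo
restored = upsample_by_latent_search(g, pixelated)
print(restored.shape)  # torch.Size([1, 3, 32, 32])
```

The consequence is that every reconstruction is drawn from whatever distribution of faces the generator learned to produce, so a skewed training set skews the “restored” faces. That is where the argument started.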

In response to a colleague calling the Obama photo an example of the dangers of AI bias, LeCun asserted that “ML systems are biased when data is biased.” Analysis of a portion of the data set found far more White women and men than Black women, but people quickly took issue with the assertion that bias is about data alone. Gebru then suggested LeCun watch her tutorial — whose central message is that AI bias cannot be reduced to data alone — or explore the work of other experts who have said the same.
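For context on what that kind of analysis involves: at its simplest, auditing a face data set’s demographics is a matter of tallying labels and comparing proportions, as in the toy sketch below. The column names and values are purely illustrative; FFHQ ships without demographic annotations, so any real audit depends on labels researchers add themselves.

```python
# Toy demographic tally for a face data set. FFHQ has no official
# demographic labels, so this column name and these values are
# assumptions for illustration only.
import csv
import io
from collections import Counter

def audit(rows, column):
    """Tally the values of one demographic column and print proportions."""
    counts = Counter(row[column] for row in rows)
    total = sum(counts.values())
    for group, n in counts.most_common():
        print(f"{group:>12}: {n:5d} ({100 * n / total:.1f}%)")

# In-memory stand-in for a real annotations file.
toy_csv = io.StringIO(
    "image,perceived_gender,perceived_race\n"
    "0001.png,woman,White\n"
    "0002.png,man,White\n"
    "0003.png,woman,Black\n"
    "0004.png,man,Asian\n"
)
audit(list(csv.DictReader(toy_csv)), "perceived_race")
```

A count like this can surface a skew; it can’t tell you how a model, a task definition, or a deployment decision compounds it.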

In her tutorial, Gebru maintains that any evaluation of whether an AI model is fair must take into consideration more than just data, and she challenges the computer vision community to “understand just how pervasively our technology is being used to marginalize many groups of people.”

“I think my take-home message here is fairness is not just about data sets, and it’s not just about math. Fairness is about society as well, and as engineers, as scientists, we can’t really shy away from that fact,” Gebru said in the tutorial.

There’s no shortage of resources explaining why bias extends beyond data. As Gebru was quick to point out, LeCun is president of the ICLR conference, where earlier this year Princeton professor and sociologist Ruha Benjamin asserted in a keynote address that “computational depth without historic or sociological depth is superficial learning.”

Debate raged on Twitter until Monday, when LeCun shared a 17-tweet thread about bias in which he said he hadn’t intended to claim ML systems are biased due to data alone, only that in the case of PULSE the bias comes from the data. LeCun finished the thread by suggesting Gebru avoid getting emotional in her response, a comment many female AI researchers interpreted as sexist.

Many Black researchers and women of color in the Twitter conversation expressed disappointment and frustration at LeCun’s position. UC Berkeley Ph.D. student Devin Guillory, who published a paper this week about how AI researchers can combat anti-Blackness in the AI community, accused LeCun of “gaslighting Black women and dismissing tons of scholarly work.” Other prominent AI researchers made similar accusations.

Gaslighting is psychological manipulation intended to make someone question their own perception and sanity. Gaslighting Black female researchers is especially cruel, given how many describe colleagues’ failure to cite their work as part of a broader pattern of erasure.

Gebru wasn’t the only Google AI leader to confront LeCun this week. Google AI researcher and CIFAR AI chair Nicolas Le Roux suggested LeCun listen to criticism, especially when it comes from a person representing a marginalized community. He also urged LeCun not to engage in tone policing and other tactics used to maintain existing power structures. Google AI chief Jeff Dean also urged people to recognize that bias goes beyond data.

Rather than taking Le Roux’s advice, LeCun responded to his criticism on Thursday with a Facebook post championing the opinions of an anonymous Twitter user who says social justice movements will take away people’s ability to engage in constructive discourse.

Later in the day, LeCun tweeted that he admires Gebru’s work and hopes they can work together to fight bias. Facebook VP of AI Jerome Pesenti also apologized for how the conversation had escalated and said it’s important to listen to the experiences of people who have experienced racial injustice. At no time in the series of posts did LeCun appear to engage with Gebru’s research.

All of this comes as Facebook is days away from facing an economic boycott over its willingness to profit from hate. The boycott’s growing list of supporters ranges from the NAACP to Patagonia. On Thursday, Verizon agreed to pull advertising from Facebook, and on Friday Unilever halted ad spending on Facebook, Instagram, and Twitter. Shortly thereafter, CEO Mark Zuckerberg announced Facebook will no longer run political ads that assert people from a specific race, gender, or other group are a threat to people’s safety or survival.

Former Black Facebook employees have complained about mistreatment at the company, and Facebook drew widespread criticism for keeping up a Trump post that Twitter labeled as glorifying violence and observers called a racist dog whistle. A Wall Street Journal report last month claimed Facebook executives had been notified that the platform’s recommendation algorithms are divisive and stoke hatred but chose not to address the issue, in part out of fear of a conservative backlash. Even employees at the Chan Zuckerberg Initiative cited diversity issues and said the nonprofit needs to decide which side of history it wants to be on and change how it deals with race.

What’s noticeably missing from LeCun’s assessment of AI bias and Pesenti’s apology Thursday is the critical role of hiring and building diverse teams. LeCun’s comments come a little over a week after Facebook CTO Mike Schroepfer told VentureBeat that AI bias is generally the result of biased data. He went on to champion diversity as a way to mitigate bias but could not offer evidence of diverse hiring practices at Facebook AI Research (FAIR), which LeCun founded. Facebook collects and publicly reports some diversity statistics but does not measure diversity at FAIR. A Facebook AI spokesperson told VentureBeat all employees are required to participate in training to identify personal bias.

It’s unsettling to see someone with as much privilege as LeCun argue technical matters while ignoring the work of a Black colleague, at a time when issues of racial inequality have sparked ongoing protests of historic size around the world.

Maybe Yann LeCun needs better friends. Maybe he should step away from the keyboard, or maybe, as LeCun argued, that first tweet omitted bias beyond data due to the sort of brevity common on Twitter. But it’s worth remembering that LeCun built FAIR in 2013, and one analysis last year found it has no Black employees.

This story isn’t over. Analysis and opinions about the exchange between Gebru and LeCun may percolate within the wider AI community for a while, and Pesenti promises Facebook AI will change. But the series of events and related news suggests a systemic problem. If FAIR valued diversity, if Facebook had a more diverse workforce, or if the company made listening to marginalized communities a priority, maybe none of this would have happened. Or maybe it wouldn’t have taken nearly a week for Facebook executives to intervene and apologize.

In an article published last month, days before the death of George Floyd, I wrote that there’s a battle happening now for the soul of machine learning and that part of this work involves building pluralistic teams.

Yann LeCun is one of the most powerful figures in the AI community today. He wouldn’t be a Turing Award winner or neural network pioneer if he couldn’t grasp complicated subjects, but this prolonged debate against a backdrop of people in the streets demanding equal rights comes off as sort of juvenile. You can describe the Gebru-LeCun episode as sad and unfortunate and a range of other adjectives, but two things stick with me: 1) AI researchers — many of them Black or women — shouldn’t have to dedicate time to convincing LeCun of established facts, and 2) this was a missed opportunity for a leader to demonstrate leadership.

In his apology to Gebru on Thursday, Pesenti said Facebook will embrace change and education. No specifics were offered, but let’s hope this goes beyond words to include meaningful action.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer

Updated at 12:25 p.m. to include changes to Facebook’s political advertising policy.

Updated at 11:45 a.m. to include Facebook AI’s response to a question about bias training at Facebook.
