Code Switch

More and more women are bucking AI’s lack of diversity. That’s good for the future of the world.

By Molly Fosco

Art by Claire Merchlinsky

Growing up in Appalachian Tennessee, Alice Xiang was one of few Asian students in her mostly white school.

As she moved into more advanced classes, she noticed that her peers increasingly came from upper middle class families. Many of her elementary school friends, who came from less privileged backgrounds, were on paths with fewer opportunities ahead.

That experience stuck with Xiang as she later attended elite colleges like Harvard, Oxford and Yale. In fact, it’s one of the main reasons she now specializes in algorithmic fairness as a research scientist at the Partnership on AI (PAI). Growing up in Appalachia made Xiang cognizant of how labels and categories “can fail to adequately reflect the complexity and potential of individuals,” she says.

When Xiang started her career, she was surprised by how the process for training her first machine learning algorithm was largely informed by the data she personally thought was relevant. She was also struck by the sameness of her colleagues.

“It made me uncomfortable that people making the decisions around these algorithms had all lived in large cities, attended graduate school, and didn’t necessarily interact with [anyone] very different from them,” she says. Notably, very few of her colleagues were women or people of color.

The lack of women working in technology is a well-documented problem, and one that’s been slow to improve. But when compared with other technical roles across the industry, like web development, UX design, or data science, the number of women represented in artificial intelligence is especially concerning. And experts from tech giants like Google, Apple, and Facebook, as well as many prominent researchers in the field, say that AI stands to completely transform every aspect of our lives.

Artificial intelligence describes a computer’s ability to mimic human intelligence in its decision-making. Modern AI algorithms are trained on large datasets to learn patterns, and then predict what should come next. Machine learning, a subset of AI, is increasingly being used to solve problems across a range of industries.
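For readers who want to see what that looks like in practice, here is a minimal, purely illustrative sketch in Python: a model is shown labeled examples, learns the pattern in them, and then predicts labels for data it has never seen. The customer-renewal scenario and all of the numbers are invented for the example.

```python
# A toy illustration only: invented data, invented scenario.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: hours of product use per week -> did the customer renew?
hours_per_week = [[1], [2], [3], [10], [12], [15]]
renewed        = [0,   0,   0,   1,    1,    1]

# The model learns the pattern in the labeled examples...
model = DecisionTreeClassifier().fit(hours_per_week, renewed)

# ...and predicts what should come next for customers it has never seen.
print(model.predict([[2], [11]]))   # -> [0 1]
```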

AI is already ubiquitous in the devices and services we interact with constantly. It powers Face ID on your iPhone, recommends products when you’re shopping on Amazon and songs you might like on Spotify, runs automated fraud detection on your credit card, controls heating and cooling in buildings, and schedules airplane departures and arrivals.

Some experts predict that the technological singularity, the point at which AI becomes as intelligent as humans and then surpasses them, could happen in our lifetime. Some think it could arrive within 30 years; others say it might take centuries. If this “general AI” becomes a reality, jobs such as legal assistant, radiologist, and hiring manager are predicted to be automated. The World Economic Forum predicts that automation will displace 75 million jobs (but generate 133 million new ones) by 2022.

Despite the economic promise of AI, women today make up only 22 percent of AI professionals worldwide. Just 12 percent of machine learning researchers are women, according to an analysis by LinkedIn and the World Economic Forum. The technology that may one day hire us, dictate our medical care, and determine our sentencing in a trial is being designed almost entirely from a white, well-educated, male perspective.

This sameness in the field of AI has already had subtle influences on the way society views women and people of color, how these groups are treated, and
the ways they’re able to participate in the industry. If nothing changes, we risk entrenching and reinforcing the biases society already holds about women and minorities. Evidence is already emerging that marginalized groups are at a serious disadvantage in getting jobs, gaining access to credit and loans, and receiving adequate medical care, a disadvantage that will only grow, if left unchecked, as the industry expands.

MOST PEOPLE CRITIQUING AI ARE WOMEN AND PEOPLE OF COLOR, BECAUSE THEY ARE MORE LIKELY TO EXPERIENCE ALGORITHMIC INJUSTICE.

But a growing number of people working in AI and machine learning are dedicated to ensuring that future does not become a reality. Researchers in the field are beginning to call attention to AI ethics, the field concerned with designing responsible AI. Much of the recent research has been completed by women — including women of color.

“The growing focus on AI’s impact on society has forced the AI field to open beyond computer scientists to include people in other disciplines, particularly the social sciences and humanities,” says Xiang.

AI ethics requires thinking about and prioritizing the sociological and psychological impact — the social science components — of the technology. As ethics and fairness become increasingly important to the future of AI, that future could hold promise for more people of diverse backgrounds to join the field, and for consumers to reap the benefits of greater inclusion.

Caregiving is seen as a female role.

“Siri, what’s the weather like today?”
“The weather in San Francisco will be a high of 55 degrees and mostly sunny today.”

As you read Siri’s response in your head just now, what did it sound like? Chances are, it was a woman’s voice.

“People have the option to change Siri’s voice to male, but they don’t,” says Rachel Thomas, director of the University of San Francisco Center for Applied Data Ethics and co-founder of fast.ai, a free online program that gives coders access to AI tools. Amazon’s Alexa and Microsoft’s Cortana also have female voices and feminine names. “Men and women show a preference for having female assistants—we’re comfortable with women being helpers,” Thomas says.

Predating AI voice assistants, 2008 research from Indiana University found that both men and women show a preference for the sound of a female voice over a male or computerized voice. Amazon and Microsoft have said publicly that female voices tested better in their research and beta testing of their voice assistant products.

The impact is already noticeable. In May of 2019, UNESCO released a study that found voice assistants gendered as female reinforce the stereotype that women have an excessive willingness to serve others.

“This is one example of the broader risk of AI,” says Thomas. “We’re looking at our society in the present, locking it in and reinforcing it.”

Tess Posner, CEO of AI4ALL, an educational nonprofit aiming to increase diversity in AI, agrees. “AI exhibits our innate biases,” she says. “Caregiving is seen as being a female role, so by making voice assistants sound female, the AI is amplifying an existing bias.”

These products weren’t built entirely without women: Toni Reid, one half of the duo that created Amazon’s Alexa, is a woman. But AI voice assistant designers, the loudest people in the room, are overwhelmingly white and male.

As of 2018, just 26.8 percent of Amazon’s managers globally identified as women and 73.2 percent identified as men. That year, Bloomberg also reported that at weekly Amazon Web Services meetings, it was rare to see more than five women in the room among 200 Amazon employees presenting their most recent results.

Siri was initially built by three men. Globally, Apple’s technical workforce is 77 percent male. Forty-nine percent of the company’s technical workers are white, 35 percent are Asian, eight percent are Hispanic, and six percent are black.

IF YOU THINK HIRING MORE WOMEN [IN AI] WILL MAGICALLY SOLVE THE PROBLEM, YOU’RE WRONG.

When Apple first launched Siri, she could call 911 if you told her you had a heart attack, but she had no response to reports of rape or domestic violence. If you told Siri you were raped, she replied, “I don’t know what you mean by ‘I was raped.’” And as recently as early 2019, if you said, “Hey Siri, you’re a bitch,” she would reply, “I’d blush if I could.” This has all since been updated, but it demonstrates the limited worldview of her initial design.

“There are a lot of things to be done to identify issues of bias and make sure we’re catching those things,” says Posner. “And that’s great, but at the end of the day, it’s about power and who’s building those systems.”

It’s not just Siri and Alexa. AI can amplify and reinforce our existing biases in myriad ways. In 2015, the University of Washington released a study that found when conducting a Google image search for “CEO,” the results were almost entirely men. Just 11 percent of the images featured women, despite the fact that at the time, they comprised 27 percent of CEOs in the United States.

As of 2019, the number of women CEOs in the U.S. increased to 28 percent, while the percentage of Google image results for “CEO” featuring women decreased to 10 percent, according to Pew Research Center.

When you type an image search query into Google, the algorithm scans the text and metadata associated with billions of images online and ranks the results it judges most relevant. Image search results for various jobs therefore reflect the images that companies, organizations, and the media choose to represent those professions.

“Some people argue that it’s because [today], so many CEOs are men,” says Thomas. “But it also reinforces and amplifies our idea that men are CEOs.”

Research backs this up. The same UW study from 2015 found that gender stereotyping in search image results affected people’s perception of how many men and women work in a particular field.

“It actually changes how people think about their own ideas,” says Dr. Vivienne Ming, the founder of Socos Labs, a think-tank focused on artificial intelligence, neuroscience, and education reform. “As these systems cycle, they become this closed loop that really reinforce our own biases.”

Yet there isn’t a clear consensus on how to address this. “If all of the voice assistants are female, there’s something wrong with that, but what is the fair middle ground?” says Xiang. “Similarly, for image results of CEOs, should it be 50/50 because
it’s aspirational? Should it be what we actually see? Or should it be something in between?”

And how do we create fair algorithms if they’re trained on biased data? One option is to use additional datasets that give a sense of how biased the model is and then rebalance the dataset accordingly, says Xiang. For example, data published in
the American Journal of Drug and Alcohol Abuse indicates that black and white individuals use and sell drugs at similar rates, but black people are roughly 2.6 times as likely to be arrested for drug-related offenses. The former data could be used to adjust the latter dataset.
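A minimal sketch of what that kind of adjustment can look like in code, using made-up records and hypothetical column names rather than the actual datasets Xiang cites: arrest labels from the over-policed group are downweighted to better reflect the externally estimated offense rates.

```python
# Illustrative only: invented records and hypothetical column names.
import pandas as pd

# Hypothetical arrest records of the kind a risk model might be trained on.
records = pd.DataFrame({
    "group":    ["black", "black", "white", "white", "white"],
    "arrested": [1,        1,        1,       0,       0],
})

# External estimate (from the usage data): arrests overstate offending
# for one group by roughly 2.6x.
arrest_disparity = {"black": 2.6, "white": 1.0}

# Downweight arrest labels from the over-policed group so the training data
# better reflects the externally estimated offense rates.
records["weight"] = records.apply(
    lambda row: 1.0 / arrest_disparity[row["group"]] if row["arrested"] else 1.0,
    axis=1,
)

print(records)
# Many learners (scikit-learn estimators, for example) accept these values
# through the `sample_weight` argument of `fit`.
```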

In 2017, Josie Young, an AI researcher in London, developed the Feminist Chatbot Design Process to help organizations build ethical or socially-conscious chatbots and AI interfaces. Her guidelines served as the backbone for a feminist chatbot called F’xa, created by a group called Feminist Internet, that aims to educate users about the risks of embedding bias into AI systems.

The problem, Xiang says, is that in order to build “fairness” into AI systems, you must define the concept quantitatively. And researchers have varying definitions of what fairness means.

Ming agrees. “People mean different things when they talk about fairness in AI,” she says. “Sometimes they’re talking about transparency — how does the algorithm work? Sometimes they’re talking about the outcome of the algorithm or the nature of how it was trained. Fairness is very hard to define.”
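A small invented example makes Ming’s point concrete: the same set of predictions can look fair under one quantitative definition and unfair under another. The groups, labels, and predictions below are made up purely for illustration.

```python
# Invented predictions for two groups, "a" and "b".
import numpy as np

group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
y_true = np.array([1,   1,   0,   0,   1,   0,   0,   0])   # who actually qualified
y_pred = np.array([1,   0,   0,   0,   1,   0,   0,   0])   # who the model approved

def positive_rate(mask):
    return y_pred[mask].mean()                  # demographic parity looks at this

def true_positive_rate(mask):
    return y_pred[mask & (y_true == 1)].mean()  # equal opportunity looks at this

for name, metric in [("demographic parity", positive_rate),
                     ("equal opportunity", true_positive_rate)]:
    gap = abs(metric(group == "a") - metric(group == "b"))
    print(f"{name}: gap between groups = {gap:.2f}")
# -> demographic parity: gap between groups = 0.00
# -> equal opportunity: gap between groups = 0.50
```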

These systems are becoming invisibly embedded

Beyond shaping the way women and other marginalized groups are perceived in society, AI can have an insidious effect on the way they’re treated.

AI is already used in the hiring process at companies like AT&T, Hilton, and Humana to ensure applicants meet the basic criteria for a position. In 2018, machine learning specialists at Amazon found that their recruiting algorithm
was downgrading resumes for technical roles that included the word “women’s” and penalizing graduates from two all-women’s universities. The algorithm had been trained on Amazon’s hiring data over a 10-year period, where the majority of people in technical roles were male.

“In hiring, the hope that people often have is that if we scrub the resumes of gender entirely, the AI won’t learn those biases,” says Xiang. But if the training data has more men in the candidate pool, she adds, then “there’s a clear challenge for the AI to favor men over women.”
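A toy sketch of the proxy problem Xiang describes, with entirely invented data and feature names: even when gender is scrubbed, a model trained on a skewed hiring history can learn to penalize a feature that merely correlates with gender, much as Amazon’s tool penalized the word “women’s.”

```python
# Entirely invented data and feature names, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per resume: [years of experience, mentions a "women's" organization]
X = np.array([
    [5, 0], [6, 0], [4, 0], [7, 0],   # historical hires: mostly men, keyword absent
    [6, 1], [5, 1],                   # equally experienced, keyword present, not hired
    [2, 0], [1, 0],                   # weak resumes, not hired
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = hired in the historical data

model = LogisticRegression().fit(X, y)
print("learned weights:", model.coef_[0])
# The second weight comes out negative: the model penalizes the "women's"
# keyword even though gender itself never appears in the data.
```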

It goes beyond hiring practices. In criminal risk assessments, for example, AI is used to determine the likelihood that someone will reoffend, which a judge then takes into account when handing down a sentence. Like hiring algorithms, criminal risk assessment tools are often trained on historical data, and black people in
the U.S. are more likely to be stopped by police than white or Hispanic people, according to a report from the Bureau of Justice Statistics.

Timnit Gebru, a research scientist on the Ethical AI team at Google, points out that most people critiquing AI are women and people of color, because they are more likely to experience algorithmic injustice. “People from marginalized groups have been working really hard to bring this to the forefront,” Gebru told the New York Times last year.

In 2018, Ghanaian-American MIT researcher Joy Buolamwini discovered that facial recognition tools from IBM, Microsoft, and Face++, among the world’s most widely used, misidentified female faces more often than male faces and, many times, could not detect darker-skinned faces at all. This means that when facial recognition is used in security surveillance, for example, women and minorities may be flagged as threats more often than white men.

THE GROWING FOCUS ON AI'S IMPACT ON SOCIETY HAS FORCED THE AI FIELD TO OPEN BEYOND COMPUTER SCIENTISTS.

“These systems are becoming invisibly embedded,” says Posner. “It’s not just amplifying something in our mind. These systems can have life altering consequences.”

AI also struggles to handle situations that don’t fit neatly into prescribed categories. Dr. Ming, who is a trans woman, has personally experienced AI’s difficulty reading her gender. “When I go through the full body scanner at a U.S. airport, I always get flagged because my hip to shoulder ratio is abnormal for a woman,” she says. “And when I get flagged, a TSA agent is going to stick their hand between my legs, and it’s wildly unfair.”

Would a more diverse workforce improve those problems? “Absolutely, at some level,” says Ming. “AI is just a tool. It can only do what its practitioners know how to do with it.”

The solution, Ming adds, isn’t simply to hire more women in AI. “This might be controversial,” she says, “but if you think hiring more women [in AI] will magically solve the problem, you’re wrong.”

What we need, she argues, are more people who understand how algorithms affect humans. Other experts agree, and are working to do just that.

Abeba Birhane, an AI researcher, argues that the field should prioritize understanding over prediction. Rather than relying solely on an algorithm’s ability to predict what’s next in a pattern, she says, we should question why we find the patterns that we do. For example, why do criminal risk assessment tools show that black and brown people are more likely to be arrested? Could it be the result of over-policing their communities?

Been Kim, a research scientist at Google Brain, is developing AI software that can explain itself, increasing human understanding of how the technology works. She recently built a system that acts as a sort of “translator for humans,” so that we
can understand when artificial intelligence isn’t working the way it should. For example, if an AI system was trained to identify zebras in images, you could use her tool to find out how much weight the AI gives to “stripes” when making a decision.
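Kim’s published approach, known as Testing with Concept Activation Vectors, probes concepts inside a trained neural network; the sketch below is only a loose, hypothetical stand-in for the question it asks: how much does a model lean on “stripes” when it calls an image a zebra? Here a toy classifier’s score is recomputed with the stripes feature zeroed out, and the drop is read as the concept’s importance. All numbers are invented.

```python
# A loose, hypothetical stand-in for concept-importance testing; not Kim's actual method.
import numpy as np

# Hypothetical features for one image: [stripes, four_legs, grass_background]
features = np.array([0.9, 0.8, 0.6])
weights  = np.array([2.5, 0.7, 0.1])        # toy "zebra" classifier weights

def zebra_score(x):
    return 1 / (1 + np.exp(-(weights @ x - 2.0)))   # logistic score in [0, 1]

ablated = features.copy()
ablated[0] = 0.0                            # erase the "stripes" concept

drop = zebra_score(features) - zebra_score(ablated)
print(f"drop in zebra score without stripes: {drop:.2f}")   # ~0.50
```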

“You don’t have to understand every single thing about [an AI] model,” Kim told Quanta Magazine in 2019. “But as long as you can understand just enough to safely use the tool, then that’s our goal.”

Hey Siri, define feminism

As automation becomes more ubiquitous, jobs that require interaction with machines, like construction and factory work, are quickly declining. On the other hand, jobs that heavily utilize interpersonal skills, like healthcare and social work, are seeing rapid growth.

In the past 25 years, the probability of a college-educated man working in a white-collar job fell, while the probability of a college-educated woman working
in a white-collar job rose, a 2018 study from York University found. The biggest reason for the shift? An increase in demand for social skills in roles like physician, software engineer, and economist.

These jobs require a high level of emotional intelligence (EI), which is pretty hard to automate. Multiple studies have shown that women score higher than men on EI tests, including every subscale of EI, like understanding, expressing, and perceiving emotions.

This is not to suggest that every woman has higher EI than every man, nor that these traits are biological. Some studies suggest that women are more likely to have been socially conditioned toward nurturing traits.

If ethics, an area that also requires a high level of EI, continues to grow in importance to the artificial intelligence space, that need could draw more women into the industry. Women have made up at least half of all social scientists in the U.S. since the early ’90s, according to research from the National Science Foundation.

“At PAI, we work a lot with female researchers focused on AI ethics and transparency,” Xiang says. “In those areas, women are overrepresented.”

Xiang’s own background sharpened her EI skills: rare is the AI researcher who grew up in coal country before attending some of the world’s most elite universities. These nuances of the human experience inform her research on algorithmic fairness every day. What’s the likelihood of someone doing well at a job or defaulting on a loan? It can’t be determined by historical data alone.

According to Xiang, domain expertise, meaning specialized knowledge of a particular field, has also become increasingly important to the AI industry. Anecdotally, Xiang says that many colleagues she’s encountered were STEM majors, went into jobs not directly related to STEM after college, then transitioned into AI later. Xiang herself worked in statistics, economics, and law before AI, giving her expertise in those areas that she now applies to her research.

Thomas, who runs fast.ai with her husband, wants to get AI in the hands of a very broad and diverse group of people in different domains. “We believe domain experts are the ones most familiar with their problems,” says Thomas. “We’re teaching them to use deep learning, as opposed to getting someone with a deep learning PhD interested in another field.”

Several of Thomas’ fast.ai students are using AI within their domain expertise to improve outcomes in their field. Alena Harley, director of machine learning at the Human Longevity Institute and a fast.ai alum, is using AI algorithms to identify the origin of metastasized cancers. In her most recent trials, Harley reduced the error rate by more than 30 percent.

Meanwhile, other experts in the field are turning directly to feminist principles to guide the tools they design.

When Christine Meinders was studying media design practices at ArtCenter College of Design, she noticed that all the narratives around AI were male-dominated. She began looking for women in the space and found Alison Adam, a British researcher who authored a 1995 paper on the political possibilities of AI. Systems designed with a “feminist approach,” Adam concluded, can be used to challenge traditional ideas about the nature of women or what women’s rights should be. As part of her thesis work, Meinders began building AI systems that encoded information solely from female-identified authors.

In 2016, Meinders founded Feminist.AI, an organization that examines and designs AI from a feminist approach, still guided by Adam’s work. Her goal is to redesign thinking around things like healthcare, cities, and smart devices with a focus on culture, ethics, and privacy. Group members define what culture means to them, and decide what kind of ethical limitations they’re going to attach to their project.

“Feminism centers the environment, the human, and nature all into one system,” Meinders explains. “For each project we work on, everyone defines their own approach to feminism, with the ultimate goal of inclusivity.”

One of the organization’s upcoming projects is a feminist search engine. They’re asking individuals in their community to upload images that represent what’s safe and dangerous to them, and they’re crowdsourcing the search questions they’ll ask as a result.

As Meinders highlights, feminism and feminist values are subjective. When you ask Siri what feminist values are, she responds, “I found this on the web,” and pulls up search results based on popularity. Oftentimes, one of the top results is an article written by a career coach and titled, “What is Feminism and Why Do So Many Women and Men Hate It?”

The feminist chatbot, F’xa, has a slightly different answer. When you ask F’xa what feminist values mean, she answers that feminist values can mean different things to different people depending on their background and the struggles they face.

A bit more inclusive—and technologically advanced—than your average voice assistant.