Is AI Really Biased Against Women?

This article is more than 4 years old.

Photo by Gerd Altmann for Pixabay

“Decades and generations of fighting for equality and against discrimination can be almost erased by one line of code.” Miriam Vogel on Green Connections Radio

Have you ever noticed that Siri, Alexa and other digital assistants have female voices? And that sophisticated problem-solving artificial intelligence systems like IBM’s Watson and Salesforce’s Einstein are named after men? That’s what a recent United Nations study found.

Artificial intelligence – or AI – has become so ubiquitous that it’s almost taken for granted as a magic tool. It can seem magical when it synthesizes reams of medical data to identify treatments for one disease that had only been considered for very different ones.

Yet AI is only as good as the data it’s “taught” and programmed with, and that data comes from humans. All humans have biases, so the data and the programming can be biased too, depending on who produces them.

And those doing the programming are overwhelmingly white and male. According to the National Center for Women & Information Technology, only 13.5% of machine learning jobs are held by women, only 18% of software developers are women, and only 21% of computer programmers are women.

Screen shot, bbc.com

Large companies cannot manually screen all the resumes they receive for the hundreds or even thousands of jobs they post, so they use automated screening systems. Some of those systems have been programmed to screen for the types of people who have succeeded in those jobs before.

That may make sense on the surface, but if a company like Apple, Amazon or Google has historically filled those jobs almost entirely with white male Millennials, then the recruiting software will screen out women, people it perceives to be of color, and non-Millennials, regardless of their qualifications for the job.
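The mechanism described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any company’s actual system: the feature names and the historical data are invented, and real screeners are far more complex, but the failure mode is the same.

```python
# Hypothetical illustration: a naive screener that scores resumes by
# similarity to past successful hires. Data and features are invented.

def train_scorer(past_hires):
    """Weight each feature by how often it appears among past hires."""
    counts = {}
    for resume in past_hires:
        for feature in resume:
            counts[feature] = counts.get(feature, 0) + 1
    total = len(past_hires)
    return {f: c / total for f, c in counts.items()}

def score(resume, weights):
    """Average the learned weight of each feature on the resume."""
    return sum(weights.get(f, 0.0) for f in resume) / len(resume)

# Historical hires: overwhelmingly one demographic profile.
past_hires = [
    {"cs_degree", "male", "under_35"},
    {"cs_degree", "male", "under_35"},
    {"cs_degree", "male", "over_35"},
]
weights = train_scorer(past_hires)

# Two equally qualified candidates; only demographics differ.
candidate_a = {"cs_degree", "male", "under_35"}
candidate_b = {"cs_degree", "female", "under_35"}

print(score(candidate_a, weights) > score(candidate_b, weights))  # True
```

The model never saw the word “qualified”: it simply learned that past hires were male, so it ranks the identically qualified female candidate lower.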

What to do?

Some companies are realizing that their recruiting software is doing this and are reprogramming it. Some organizations that rely on public data are realizing that, as Vogel put it, “public data sets are largely male and Caucasian,” and therefore inherently going to give biased outcomes, so they “can’t rely on it.” Yet many of these public data sets have an outsized impact on our lives; some are the data sets our doctors use to diagnose our health challenges by plugging in our symptoms. If that public data does not account for the nuances of gender, race, age and fitness level, it will miss a lot.
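To see why an unrepresentative data set misses things, consider a toy diagnostic rule fit to a mostly male cohort. Everything here is invented for demonstration: the condition, the numbers, and the assumption that it presents with a lower heart rate in women.

```python
# Hypothetical illustration: a diagnostic threshold fit to pooled data
# that is mostly male can miss a differing female presentation.
# All numbers are invented for demonstration.

# (condition_present, heart_rate) records for each invented cohort.
male_cases = [(True, 110), (True, 112), (True, 108), (False, 80), (False, 82)]
female_cases = [(True, 90), (False, 80)]

# Fit one threshold on the pooled (male-dominated) data: flag the
# condition when heart rate exceeds the midpoint of the class means.
pooled = male_cases + female_cases
pos = [hr for present, hr in pooled if present]
neg = [hr for present, hr in pooled if not present]
threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def flag(heart_rate):
    return heart_rate > threshold

print(flag(110))  # True: the typical male presentation is caught
print(flag(90))   # False: the female presentation of the same condition is missed
```

The male records dominate the pooled average, so the fitted threshold sits above the female presentation, which is exactly the kind of nuance Vogel warns these data sets fail to capture.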

A new nonprofit called EqualAI, funded in part by Arianna Huffington and Wikipedia founder Jimmy Wales, is tackling this problem. I sat down with EqualAI’s inaugural Executive Director, Miriam Vogel, to talk about the challenges of bias in AI and what can be done to minimize it.

“There’s no easy way to solve this problem…There are a variety of ways to address the problem and we probably need to do all of them, because it’s so pervasive and because it’s this black box that we don’t know why it’s prioritizing the answers that it is,” Vogel explained.

She continued, “It’s incumbent upon us to make sure the data it’s getting are as complex and diverse as possible, and then testing it in all different formats to make sure we haven’t missed something along the way.”

One solution Vogel highlighted is having more women and people of color in the artificial intelligence industry, and more of them programming these systems in the first place, something she says EqualAI was formed to tackle.

“The broader your base of people developing the product,” Vogel said, “the more you ensure that it’s a workable product and that it’s more globally applicable.”
