
How to Root Out Hidden Biases in AI

Algorithms are making life-changing decisions like denying parole or granting loans. Cynthia Dwork, a computer scientist at Harvard, is developing ways of making sure the machines are operating fairly.
October 24, 2017
Illustration by Miguel Porlan

Why is it hard for algorithm designers or data scientists to account for bias and unfairness?

Take a work environment that’s hostile to women. Suppose you define success as holding a job for two to three years and getting a promotion. Then your predictor, trained on the historical data, will accurately predict that it’s not a good idea to hire women. What’s interesting here is that we’re not talking about biased historical hiring decisions. Even if the hiring decisions were totally unbiased, the reality persists: the real discrimination in the hostile environment. It’s deeper, more structural, more ingrained, and harder to overcome.
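Her point can be made concrete with a toy model. The sketch below is not from the interview; the data, numbers, and library choice are illustrative assumptions. It generates synthetic outcomes in which women succeed less often only because the environment is hostile, then fits a standard classifier. The model learns a strongly negative weight on the protected attribute even though the historical hiring itself was unbiased.

```python
# A minimal sketch (assumed example, not from the interview): a predictor
# trained on outcomes shaped by a hostile environment learns to penalize
# the protected attribute, even though past hiring was unbiased.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Historical hires: the hiring itself is unbiased with respect to gender.
is_woman = rng.integers(0, 2, size=n)   # protected attribute
skill = rng.normal(size=n)              # genuinely job-relevant signal

# "Success" = stays two to three years and gets promoted. In a hostile
# environment, women are pushed out at higher rates regardless of skill.
hostility_penalty = 1.5 * is_woman
p_success = 1 / (1 + np.exp(-(skill - hostility_penalty)))
success = rng.random(n) < p_success

X = np.column_stack([skill, is_woman])
model = LogisticRegression().fit(X, success)

print("coefficient on skill:    %.2f" % model.coef_[0][0])
print("coefficient on is_woman: %.2f" % model.coef_[0][1])  # strongly negative
```

Nothing in the code is "wrong" in a statistical sense: the predictor is accurately summarizing a discriminatory reality, which is exactly the structural problem she describes.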

I believe the great use for machine learning and AI will be in conjunction with really knowledgeable people who know history and sociology and psychology to figure out who should be treated similarly to whom.

I’m not saying computers will never be able to do it, but I don’t see it now. How do you know when you have the right model, and when it’s capturing what really happened in society? You need to have an understanding of what you’re talking about. There’s the famous saying, “All models are wrong, but some are useful.”
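The interview doesn’t define “treated similarly” formally, but one way to make it operational, close in spirit to Dwork’s earlier work on individual fairness, is a pairwise check: any two people judged similar by a task-specific metric should receive similar scores. The sketch below is a hedged illustration; the candidates, the scores, and the toy metric `d` are made up, and in practice the metric is exactly the thing she says requires historians, sociologists, and psychologists to construct.

```python
# A hedged sketch of "treat similar people similarly" as a checkable condition:
# for every pair of candidates, the gap in model scores should not exceed the
# task-specific distance d(x, y). The metric d is assumed to be supplied by
# domain experts; nothing here derives it.
import itertools
import numpy as np

def violations(scores, candidates, d, tol=1e-9):
    """Return pairs whose score gap exceeds their expert-defined distance."""
    bad = []
    for i, j in itertools.combinations(range(len(candidates)), 2):
        score_gap = abs(scores[i] - scores[j])
        if score_gap > d(candidates[i], candidates[j]) + tol:
            bad.append((i, j, score_gap))
    return bad

# Toy usage with a made-up metric: candidates described by (skill, years_exp).
candidates = [(0.9, 5), (0.88, 5), (0.3, 1)]
scores = np.array([0.85, 0.40, 0.20])   # hypothetical model outputs
d = lambda a, b: abs(a[0] - b[0]) + 0.05 * abs(a[1] - b[1])

print(violations(scores, candidates, d))  # flags the first two candidates
```

The first two candidates are nearly identical under the metric but receive very different scores, so the check flags them; whether that flag is meaningful depends entirely on whether the metric captures who really should be treated alike.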

as told to Will Knight
