The Journey To Fairness In AI -- Q&A With New York Times Bestselling Author Abigail Hing Wen


Abigail Hing Wen has a knack for trying and succeeding at new things. A lawyer by trade, Abigail worked in Washington, D.C. for the Senate and as a law clerk for a federal judge before moving to Silicon Valley and charting a new course in venture capital and artificial intelligence. She currently serves on the Transparency in ML committee for the Partnership on AI and previously served as co-chair of its Expert Working Group for Fairness, Transparency, and Accountability.

A gifted storyteller, Abigail is also the New York Times bestselling author of Loveboat, Taipei, which is being adapted for film in Hollywood, and the former host of the Intel on AI podcast. She served as senior director of emerging AI tech at Intel and speaks on AI, venture capital, diversity, and leadership in international venues. 

We talked to Abigail about her journey from modest beginnings in northern Ohio to becoming a bestselling author and one of the most respected voices on fairness in AI.

Q: Tell us about your current role at the Partnership on AI.

Abigail: The mission of Partnership on AI is to bring diverse voices together across global sectors, disciplines, and demographics so developments in AI advance positive outcomes for people and society. It’s a non-profit which counts large corporations, research institutions, and civil society organizations among its Partners. The Partnership serves an essential independent role in several areas, including AI Fairness, Transparency, and Accountability, where I served as co-chair of the expert working group and as a steering committee member. I currently serve on the ABOUT ML (a documentation-in-ML project) steering committee.
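
To give a flavor of what documentation-in-ML work looks like in practice, here is a minimal, hypothetical sketch of the kind of structured record such efforts encourage teams to keep alongside a model. The field names and example values are assumptions for illustration only, not ABOUT ML’s actual template.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a structured documentation record in the spirit of
# documentation-in-ML efforts such as ABOUT ML. Field names are illustrative
# assumptions, not PAI's actual template.
@dataclass
class ModelDocumentation:
    model_name: str
    intended_use: str                      # the use cases the model was built for
    out_of_scope_uses: List[str]           # uses the developers advise against
    training_data_summary: str             # provenance and known gaps in the data
    evaluation_groups: List[str]           # demographic groups evaluated separately
    known_limitations: List[str] = field(default_factory=list)

# Invented example values, for illustration only.
doc = ModelDocumentation(
    model_name="resume-screening-v2",
    intended_use="Rank applications for recruiter review, not automated rejection",
    out_of_scope_uses=["fully automated hiring decisions"],
    training_data_summary="Historical applications, 2015-2020; underrepresents career changers",
    evaluation_groups=["gender", "age band", "disability status"],
    known_limitations=["performance unverified on non-English resumes"],
)
print(doc)
```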

Q: What are some of the things the AI Fairness, Transparency, and Accountability working group is focused on?

Abigail: The committee encompasses a large body of research and programming around algorithmic fairness, explainability, criminal justice, and diversity and inclusion. Equity and social justice are at the core of the research questions we work on. The team of researchers is highly interdisciplinary, with expertise in statistics, computer science, social sciences, and law.

For example, PAI issued a criminal justice report in 2019 and an issue brief last year on using the PATTERN risk assessment tool in the federal COVID response. In the research stage, many of the staff and Partners became concerned about the bias and accuracy issues associated with using risk assessment tools in the criminal justice system. As a result, the consensus view in the report was that these tools should not be used in the pretrial context to automate release decisions.

Q: It seems that one of the issues with ML and AI is that, because it’s so technical, there’s not much awareness at the top of the organizations implementing it about how it could potentially be harmful. Is this accurate?

Abigail: Yes and no. It depends on the organization that’s using it and the product they are implementing. In our research, we find that most companies want to figure it out. 

But it’s difficult, and there’s no single right way to do it. My personal belief is that it has to be addressed at all levels and include all stakeholders. This means not only the engineering organization but also an ethics board with executive participation. In addition, guidelines should be established for teams that build on top of the models, for teams that sell the product, and for impacted people. And the organization should engage with the general public if the product is in the public domain.

Q: So the Partnership on AI brings together some of the largest and most consequential companies regarding AI and fairness. Do you feel like there’s been progress with third-party organizations helping to address these issues industry-wide? 

Abigail: PAI is in a unique position to convene a very diverse set of stakeholders. The industry is taking steps and embracing the need to answer and address the hard questions. There have been some high-profile cases on AI and ML technology harming people. As a result, we’re seeing more and more members of the AI community on the cutting edge of thought leadership and pushing the industry towards responsible AI. 

Anecdotally, if you look at the biggest AI conferences in the world, the number of keynote speakers and sessions devoted to ethical issues has increased to almost one-third of the tracks in recent years.

They are speaking to large audiences of AI programmers and leaders, which is important. Everyone working on a product or in research has to take responsibility for AI ethics, because those considerations are shaped from the ground up when a product is designed.

Q: Who are the companies that are doing AI fairness well? 

Abigail: As you note, the Partnership on AI includes a number of the largest tech companies using AI, including founding Partners such as Amazon, Facebook, Google, and Apple. For the most part, I think they are bringing on dedicated people to look at the problem, which is encouraging. There are always outliers, though; some companies will have people who don’t care as much, and that’s a struggle those companies are going to face.

The thing about AI is that it’s a multi-use technology, and it can be both beneficial and harmful. There are many companies where engineers won’t work on specific projects because they are concerned about ethical issues. The fact that people are advocating for products to be used correctly is encouraging to me. 

Q: When you look at data teams in collaboration with other organizations in the company, are there emerging best practices on how they should work together to ensure that ethics, fairness, and transparency are addressed, and the models are doing what they’re supposed to?

Abigail: It depends on the structure of the organization and the audience for the product or service, but this touches on the need for transparency. When different groups across the company can see when and where the outcomes of a model are unfair, you can make systemic improvements. On the other hand, you run into problems when no one can see the consequences, which is usually the case with systemic bias. So, there has to be a multidisciplinary approach in which ethicists and other folks help to identify and address issues of bias and fairness. 
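
As a concrete illustration of the kind of visibility she describes, the sketch below compares a model’s rate of positive decisions across groups, a demographic-parity-style check. This is a minimal sketch with invented records and an invented review threshold; it is not a method the Partnership on AI prescribes.

```python
from collections import defaultdict

# Minimal sketch: surface where a model's outcomes differ by group
# (a demographic-parity-style check). The records and the 0.2 review
# threshold below are invented for illustration only.
def positive_rate_by_group(records):
    """records: iterable of (group_label, decision) pairs with decision in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {group: positives[group] / totals[group] for group in totals}

records = [("group_a", 1), ("group_a", 1), ("group_a", 0),
           ("group_b", 1), ("group_b", 0), ("group_b", 0)]
rates = positive_rate_by_group(records)
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.2f}")
if gap > 0.2:  # illustrative threshold, not an industry standard
    print("Flag this model for multidisciplinary review")
```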

The question also raises another important point that I have spoken about at great length: fairness itself is complicated.

For example, there are entire professions and academic disciplines devoted to figuring out how to create a fair society, and there’s no perfect answer. There are gaps in the law, and we don’t have a single definition of fairness that we’re all working towards. Hence, corporations need to agree on a framework of goals for themselves and then get the engineering teams on board.

This would benefit the movement towards more fair and transparent AI. If teams are seeing issues, they can surface them and run them by their legal and policy departments. At the same time, leadership has the responsibility to set a clearer direction. That said, at the end of the day, no one company will solve issues related to fairness and transparency; it’s going to take collaboration between teams and across the industry. So we’re still in a very early stage.

Q: Right now, the prevailing approach is that models are built and deployed, and fairness issues are discovered and resolved once they are in production. What’s your take on this process?

Abigail: I think the danger is that we are building systems that may be inherently unfair to different groups of people. We may not even know about it, so it’s an ongoing process. It requires a lot of coordination and communication, and making information available so people can understand the implications.

One significant contributor to reducing systemic and implicit bias is more diversity on the teams that are building and implementing models and defining policies. Diverse teams are more likely to identify fairness issues.

Q: Is there one area you are more concerned about with AI than others? 

Abigail: Safety, especially when human life is endangered, is very important to me. The most obvious example is when AI is used to distinguish humans from inanimate objects on roadways and sidewalks, and the AI gets it wrong, for any reason, but especially because it doesn’t work as well on underrepresented groups. If that happens at a moment when the machine needs to make a key decision, you have potential loss of life.

Other issues are less pressing in the short term but create problems in the long run, such as recommendation engines. If people are descending into bubbles of information over time, what does that do to our society? We saw that with the elections. Over time, these situations can become very dangerous, and we should be sure to look at the long-term ramifications.

Q: Changing gears a bit: when you were at Intel, you were on the venture side and had a chance to work with several startups that built their businesses around AI. This raises the question: when should companies that are building products using AI/ML start thinking about fairness and ethics?

Abigail: Some frameworks are being created to test AI models for deployment, and I wrote a piece about whether we should create a system for AI similar to the FDA approval process for drug releases, to govern deploying models. I don’t think it’s ever too early in a product’s development to think about ethical considerations. Still, sometimes the startup or the engineering team is simply focused on trying to build a viable product and making sure it works for the demographic they are targeting. But to the extent that we can encourage ethics by design from the ground up, that is important, because it’s hard to change a system once it’s been built a certain way.

There’s sometimes a perception that it’s a zero-sum game, that you’re either devoting resources to the product or to ethics, but we see that that’s not the case. Designs created for one particular group of people, such as handicap ramps, also benefit society as a whole. So if we can build that mentality as early on as possible, society is better off.

Even where situations are zero-sum, we must still choose to prioritize ethical design and guardrails for deployment when the model’s application has potential for severe impact on humans. We don’t believe it’s okay to test-and-learn on food products, for example, yet that is how the internet has been built. The EU AI risk model has some of these concepts built into it.

Q: Transparency is something you spend a lot of time on. What do you view as transparency’s role in encouraging organizations to be more proactive about AI ethics, given the risk of issues becoming public?

Abigail: First, from the government’s point of view, we need to build safe harbors so companies can raise significant issues without fear of being severely penalized. Creators of AI products should have incentives and space to address the problems. And employees should be incentivized to raise the issues and not hide them for fear of retaliation or retribution. 

I hope we’re moving in that direction. Europe may not lead in AI technology creation, but they do aim to lead in AI ethics because they see their strength in the business of governance. They are pushing the edges of AI policy innovation, and as a result, companies have to comply with the lowest common denominator like GDPR, which sets a bar for the rest of the world. 

AI regulations have to be done through close partnership between governments and technologists because this is very complicated, and if you don’t have the two working together, it will lead to poor outcomes. 

Q: Thanks so much for your insights. What can we expect to see next on your journey?

Abigail: I’m still involved with the Partnership on AI’s steering committee for transparency, as well as other advisory roles. I left my corporate role in May to focus on content projects in books and filmmaking, in large part to encourage more diverse cross-cultural pipelines in STEM, AI, and leadership, and to explore big questions around AI and ethics.

My second novel, Loveboat Reunion, comes out with HarperCollins in January 2022 and follows a girl in AI and machine learning who tries to bring together her interest in fashion with these hardcore technologies and navigate both worlds. As far as I know, there aren’t any novels about girls and AI, and I’m excited about telling the story of how my girl brings her full self to the table.

I’m also working on Silicon Valley-based screenplays and film-producing projects. I’m in the early stages of developing a Girls in Tech animated series with the University of California, Berkeley, based on a comic book series produced by Deloitte called Ella the Engineer. The series is designed for young people and aims to make engineering more accessible to girls.

Finally, I just finished writing a short story called The Idiom Algorithm for the MacMillan anthology SERENDIPITY, which comes out in January 2022. My third novel, on cognitive differences, is slated for 2023 with HarperCollins. It’s an exciting time to be diving into these issues and telling stories from a different angle.

We’d like to thank Abigail Hing Wen for taking the time to talk to us about her ever-evolving career and her work to bring ethics, fairness, and transparency in AI to the forefront; we wish her the best of luck with her future endeavors. You can learn more about Abigail at www.abigailhingwen.com.

Follow on social media (Twitter, LinkedIn, Instagram, Facebook, Clubhouse): @abigailhingwen

Stay tuned for our next post in our AI Ethics Series. 

 

 

 

 
