
Writing the Code of Artificial Intelligence (AI) Ethics

Explore a conversation between two tech leaders on how to ensure that the AI revolution transforms business for the benefit of humanity.

Imagine it’s 2030 and AI technologies that seemed the stuff of science fiction have become part of the fabric of life. 

Researchers have combined machine learning with genomics to crack the source code for cancer. Autonomous vehicles course down highways and even above our heads. U.S. Supreme Court clerks include robots that interact with the cream of America’s law schools. 

It’s a future of tantalizing possibility with potential to transform quality of life and enable humans to achieve their highest potential. Yet with such breakthroughs closer to reality, the question of how to design ethical, trustworthy and explainable AI leaves the realm of the theoretical to take on a poignant urgency. 

How do we minimize unfair bias in key areas such as healthcare, recruitment and education? How do we design protocols that overcome the “black-box conundrum” of decision-making buried within opaque algorithms? What can we do to ensure AI does not radically widen global inequalities?

These questions go to the heart of research on AI that — according to the Future of Humanity Institute at the University of Oxford — must be “scalably safe or aligned with human values.”

Nidhi Srivastava and Madeleine Elish grapple with these problems every day. As Tata Consultancy Services (TCS) vice president & global head, Google Cloud Business, and head of Responsible AI, Google Cloud, respectively, they have a front-row seat on the robotic dilemmas we face — and the latest thinking on strategies to overcome them.

Here are highlights of a conversation between these leaders at global companies that are together building the future of AI-powered cloud computing.  

What excites and worries you most about humanity’s AI future?

Madeleine Elish: One of the things that excites me is that we are today very seriously considering the social ramifications of the AI revolution. This is the crucial first step. It’s heartening to see a vibrant public conversation, scholarly conversation, industry conversation about how we do and do not move forward. One thing that worries me is overreliance on technology. AI is great at many things. It also needs humans for other things to make a difference in the world. For example, AI isn’t going to cure cancer. That’s going to be a combination of human intelligence, technology, clinicians and caretakers. So, we need to think about that larger socio-technological picture. I worry when people begin to think that AI is a solution in and of itself. Any technology can be used for good or harm. It’s important to think — what is the technology doing? How is it interacting with the people who use it?

Nidhi Srivastava: Whether the space is mobility, life sciences or elderly care, it’s an exciting time for the future of AI. The question then becomes, how do you manage technology so that it works towards societal good rather than just capital gains? Like Madeleine, I’m inspired by today’s conversation around ethical AI, because it begins to peel the onion. It means we can discover and confront the problems before they become problems. Bias has always existed in society, and probably always will. Therefore, the question first becomes one of awareness. It’s only with consciousness of how bias can sneak into AI, and be amplified by AI, that we can take action to minimize it. If we don’t get this right, it can create complications ranging from widening inequalities to entrenching stereotypes. It’s deeply positive that we are grappling today, ahead of the transformations to come, with building a responsible AI framework.

How can we create a practical roadmap for a future of responsible AI?

NS: At some level, the CEO of the company must be responsible for ensuring that the AI is responsible. Alongside the ethical imperative, one practical reason is that it otherwise creates a huge reputational and legal risk. As with any new technology, we need to see a change in organizational structures to enable the transition. You need a C-Suite level officer who is directly responsible for achieving responsible outcomes. Whether it’s a Chief Digital Officer or Chief AI Officer, it needs to be someone empowered with a mission to make sure that technology doesn’t go haywire in terms of risks. Another key factor is the need for education and training across the organization, and also across society. There has to be a continuous feedback loop, so that the consumer of tech feels empowered to point out biases, whether the field is e-commerce or healthcare, whenever they encounter them.

ME: I completely agree that you need a top-down imperative. And you also need a bottom-up educational strategy. Changes must come from both directions. On the question of practical steps, I’d point out that while we’ve focused a lot on bias, it’s important to raise other dimensions of responsible AI that are just as important. Some of them relate to scientific excellence. Is this AI really doing what it says? Is there an evidence-based backstop when technologies promise something, then don’t deliver? Investment must be made in rigorous evaluation and testing systems for safety and performance. Another key is ensuring AI serves not just to concentrate power but empowers all kinds of people. We need some mechanism for accountability when the product isn’t working, when it’s being used unfairly or when the performance is biased. That’s a central challenge.

How can we foster explainable AI to protect society from a “black-box conundrum”?

ME: What explainable AI means in practical terms depends on the context in which the AI is being used. For example, explainable AI for a lab technician developing vaccines will mean something different from explainable AI for a frontline clinician. Both are healthcare use cases, but in each case explainable AI needs to achieve something different. We therefore must first think about, ‘who is using this technology?’ What do they need to understand? Just conveying a dashboard of information won’t achieve the results we want with explainable AI. Ultimately, explainable AI is not necessarily about what the development team originally thought, but rather what the end user needs to know. 

NS: One of the positives I’m seeing is more cloud-native development of AI/ML applications, which bakes better explainability into the algorithms. That means less of the hand-coding that can obscure how a solution was built, or how decisions are made. There are many AI/ML tools available on different public clouds. And they’re taking AI out of the realm of labs, where it required a high level of specialization, and making it something that people with a bit of training can use to create useful applications and put on the marketplace. Democratization of AI through the availability of these cloud tools is moving the needle on explainability.

How can we ensure that AI is spread equitably around the world?

NS: It’s very important that everybody in the world has access to a smartphone. At a key level, this democratizes access to ideas, to innovation, to applications, to healthcare and banking. A top priority is to deliver smartphone technology at a very affordable price point around the world. The second key is democratizing access to developer platforms. We’re entering an era of low-code, no-code application systems where you don’t have to do much coding. As these platforms pick up over the next couple of years, they will combine with smartphone penetration to bridge today’s divide between the digital haves and have-nots. Once we have that innovation at global scale, we will begin to see real human problem-solving across the spectrum of society. All these opportunities are underpinned by cloud. At TCS, we help businesses imagine, plan, navigate and realize what’s possible with cloud. We believe your cloud journey should be as unique as you are.

ME: One fundamental principle is to think about who’s in the room, and whose expertise counts as expertise. Consider how we think about the knowledge a factory worker brings, alongside a data scientist, when designing a manufacturing solution. There are different ways to think about expertise. And there are different types of lived experience, which must count as expertise in the development cycle. When building a healthcare solution, for example, consider bringing a patient advocate into the process. This will lead to more diversity not just in demographic characteristics but, critically, in how you think about problems and solutions.

Custom Content from WSJ is a unit of The Wall Street Journal Advertising Department. The Wall Street Journal news organization was not involved in the creation of this content.
