Technology & Innovation

Don’t be evil?

August 02, 2019

Global

Michael Gold

Managing editor

Michael is a managing editor at Economist Impact. Although Michael has roots in Montreal, he grew up in Palo Alto, California and attended Yale University, where he majored in anthropology. Prior to joining the Economist Group, Michael was a correspondent for Reuters in Taipei, where he covered the technology sector. He has also worked in Beijing and is fluent in Mandarin. 

The list of ethical guidelines for AI is growing rapidly

Artificial intelligence (AI) has, of late, been the subject of so many announcements, proclamations, predictions and premonitions that it could occupy its own 24-hour cable news channel. In technology circles, it has become a kind of holy grail, akin to fire, the wheel or the steam engine in terms of world-changing potential. Whether these forecasts come to pass is still an open question. What is less in doubt are the vast ethical ramifications of AI development and use, and the need to address them before AI becomes a part of everyday life.

AI and its subset, machine learning, encompass a variety of techniques meant to enhance computing power to the point where software can act in ways that mimic or even eclipse human reasoning and deductive power. AI in one form or another has been used in a wide array of applications, from predicting how to allocate power sources in order to maximise the energy generated by wind, to training factory machines on the best way to assemble gadgets, to understanding a person’s voice when they talk to their digital assistant.

At first glance, few of these applications appear freighted with ethical considerations, but AI deserves scrutiny beyond simply how it will be used for three main reasons. The first is that many of the algorithms used for AI applications operate in a “black box”, meaning that they cannot explain the chain of logic used to arrive at a decision. Take, for example, an AI system being used to assess whether someone should receive a mortgage loan (a practice already being rolled out). If it recommends denying the loan based on the data analysed, it will not be able to explain why—it will simply issue its decision and move on.
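
The black-box problem can be made concrete in a few lines of code. The sketch below is purely illustrative: it assumes a hypothetical lender, invented feature names and synthetic data, and uses an off-the-shelf gradient-boosted model from scikit-learn. The point is that the model returns a verdict, but nothing resembling a chain of reasoning that could be relayed to the applicant.

```python
# Illustrative sketch only: hypothetical loan-approval model on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Toy training data: columns stand in for [income, debt ratio, years employed];
# the approval label is generated from an invented rule plus noise.
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 0] - X_train[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X_train, y_train)

# A new applicant: the model issues a decision, but the ensemble of hundreds of
# decision trees offers no human-readable explanation of why.
applicant = np.array([[0.2, 1.5, -0.3]])
decision = model.predict(applicant)[0]
print("approved" if decision == 1 else "denied")
```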

Second, the data used to teach algorithms how to make decisions can be flawed or incomplete. An algorithm that is meant to predict an individual’s likelihood of developing a certain kind of cancer, for example, may have been trained on data only from a specific demographic group, so the results would not apply to a wide swath of the population, including those for whom it might be most useful.
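
A toy sketch of this second problem, again on invented data: a simple risk model fitted only to one group performs markedly worse on another group in which the relationship between the measured feature and the outcome differs. The “biomarker”, thresholds and group labels here are hypothetical, not drawn from any real study.

```python
# Illustrative sketch only: a model trained on one demographic group
# generalises poorly to a group with a different underlying risk profile.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_group(n, threshold):
    # One synthetic biomarker; the disease label depends on a group-specific threshold.
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] + rng.normal(scale=0.3, size=n) > threshold).astype(int)
    return x, y

X_a, y_a = make_group(1000, threshold=0.0)   # the only group in the training data
X_b, y_b = make_group(1000, threshold=1.0)   # an under-represented group

model = LogisticRegression().fit(X_a, y_a)

print("accuracy on the group it was trained on:", accuracy_score(y_a, model.predict(X_a)))
print("accuracy on the unseen group:           ", accuracy_score(y_b, model.predict(X_b)))
```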

Third, AI may simply reinforce pre-existing human biases, further exacerbating the problems it is ostensibly meant to solve. Google, for example, which uses advanced AI algorithms to fine-tune its search results, came under fire when a search for “hands” returned images predominantly of white people’s hands, while a search for “black hands” yielded a variety of stereotypical images, such as hands working in the earth.

Bad robots

These are the main reasons why so much focus is now being placed on creating a widely applicable code of ethical conduct for AI development. Dragutin Petkovic, a professor at San Francisco State University, which this coming academic year will launch one of the world’s first certificate programmes in AI ethics combining computer science, business and philosophy, believes that a greater “culture of awareness” needs to be cultivated. This culture would encompass the programmers of the AI algorithms themselves, the data scientists gathering, handling and manipulating the inputs to those algorithms, and the ultimate deployers of AI in the real world—often governments or businesses looking to achieve a specific aim through the technology.

This fledgling community of AI ethics pioneers now includes everyone from academia—whose leading technology journals, in a perfectly ethical world, could refuse to publish papers on AI that was not developed within an ethical framework—to think-tanks, big technology companies competing for AI dominance and multilateral bodies sitting at the apex of global governance.

For example, the OECD, a club of mostly rich countries, issued guidance in May stating that AI should be “designed in a way that respects the rule of law, human rights, democratic values and diversity”—an ostensibly Western viewpoint that was nevertheless adopted in June by the G20, which includes a number of non-democratic countries such as China and Saudi Arabia. Upon its founding, the Partnership on AI, a consortium of firms and non-profits that includes Amazon, Facebook and Google, issued eight “tenets”, among them a commitment that AI should benefit humanity, a push for greater engagement between AI developers and the general public, and a call to lessen the technology’s black-box nature. “We genuinely believe that the market will reward those actors that show accountability up and down the entire chain of AI creation and deployment,” says Samir Goswami, the Partnership’s chief operating officer.

Yet given the pace of technological change and sheer number of players in this space, any enforceable, one-size-fits-all set of AI ethics seems unlikely, at least in the near future. “Many ostensible codes of conduct are out there, but few have any real teeth,” says Mr Petkovic, who notes that implementation is also difficult technically due to the opaque nature of AI algorithms. Despite the G20 proclamation, China, for one, has received international criticism for its use of AI-powered facial recognition technology in the restive province of Xinjiang. Google recently disbanded its internal AI ethics board after employees objected to anti-LGBT comments made by one of its members and the potential military application of drones made by a firm founded by another member.

These examples demonstrate the difficulty of universalising “ethics”—a highly subjective concept—for a realm as complex and far-reaching as AI. Yet this will hardly deter people from trying; Mr Goswami, for one, believes that the lessons learned from previous technological waves could serve as a kind of ethical guide for a future AI age. “We are committed to learning from the past,” he says. “We shouldn’t have to reinvent our value systems every time a new technology comes along, even one as transformative as AI.” 
