Ensuring that artificial intelligence is ethical? That’s everyone’s responsibility

Opinion: The history of the AI that will be foundational to our lives is being written right now—and we need to ensure it’s a human, humane story

Foteini Agrafioti

Foteini Agrafioti is the chief science officer of the Royal Bank of Canada and the head of Borealis AI.

At the Artificial Intelligence, Ethics, and Society conference in New Orleans last month, machine learning researchers presented a program that uses neural networks to classify potential gang activity from within a database of criminal behaviour. The algorithm was designed in the hopes of helping authorities automate the process of identifying gang-related crimes, giving police a sense of whether there may be any retaliatory activity after a crime. This advance knowledge would ideally provide them with a window of opportunity to curb further violence.

Predictive measures like these are designed with a particular outcome in mind. But the tool’s creators surely didn’t intend the outcome that did emerge at the conference: swift backlash. Critics suggested the algorithm’s potential to erroneously label individuals as gang members could ruin lives and deepen a growing sense of mistrust of police. When grilled over these concerns, the Harvard University computer scientist who presented the work waved the question off, saying: “I’m just an engineer.”

Statements like this from within the machine learning community reveal a divide over ethical responsibility. On one side are researchers who see their work as a function of pure scientific inquiry that should be allowed to advance without interference; on the other side are those who loudly demand that the scientists and companies building today’s AI technologies have an obligation to consider the broad and long-term impact of their work. Most fall somewhere along this spectrum.

Out in the real world, however, there’s only one side that matters. Machine learning, at its best, sets out to improve quality of life by eradicating the errors that can lead to human rights abuses and systemic failures. AI technology already shows great promise in areas like healthcare, where machine learning algorithms are making headway toward improving patient diagnostics and curing genetic diseases.

But it just takes a few spins around the reality block to recognize that humans are imperfect, subjective and—in far too many cases—prone to corruption. In other words, if someone builds a system, someone else will find a way to fleece that system. So if dubious actors have already found their way around spaces with established laws and protocols, the gaps in AI understandability present a rare window of opportunity for those who would abuse the technology for malicious or self-serving purposes.

And the stakes are getting higher; now that AI-based algorithms are getting smarter and being used to drive decision-making in areas like recidivism prediction, hiring, and public policy, the unintended consequences of flawed datasets and algorithms are inevitably piling up. Dr. Latanya Sweeney, a Harvard professor-in-residence and the former chief technologist of the U.S. Federal Trade Commission, once had to pay a company to show a potential employer she didn’t, in fact, have a criminal record after a Google search of her name turned up targeted ads suggesting she’d been arrested. When she dug into the matter, she discovered that searches for 80 per cent of names considered “black”-sounding in the U.S. returned a similarly targeted arrest ad, while “neutral” (read: white) sounding names turned up completely different results.

Research into computer programs that predict the probability of criminal recidivism has shown that, instead of being more accurate, the algorithms whose risk scores are now admissible in courtrooms are perpetuating the human prejudices that have upheld systemic inequalities within the U.S. penal system. And a recent study found that social media-driven job postings for high-paying executive roles were deliberately targeted toward men while significantly limiting the number of women who saw the ads.

In response, organizations that study ethical AI are cropping up at top universities, and governments have promised to take the subject seriously. Conferences like Neural Information Processing Systems (NIPS)—seen as the Super Bowl of AI conferences—are tackling the issue with new research; at last year’s NIPS in Long Beach, California, Dr. Kate Crawford, a leading researcher in data bias, fairness, and algorithmic accountability, delivered a scorching keynote on “The Trouble With Bias” that received some of the highest praise and attention of the event. Building on that momentum, a group called Fairness, Accountability, and Transparency in Machine Learning (FAT/ML)—whose mandate is to apply computationally rigorous methods to problems like bias, discrimination, and understandability in AI—held its inaugural conference at New York University’s Vanderbilt Hall in late February. And Montreal is even pledging to be the first city to set the template for ethical AI.

While this is a great start, the multiple ways in which AI technologies are already embedded in our systems require us to act now. Fairness can’t be an afterthought; its impacts must be understood and respected by industry, the place where AI meets the market and touches human lives. Safeguards should be built right into the core of our systems to ensure that machine learning does not introduce bias and that it results in explainable and justifiable actions.
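To make the idea of a built-in safeguard concrete, here is a minimal sketch, in Python, of the kind of subgroup error-rate check that could gate a model before deployment. The field names, the five-per-cent gap threshold, and the toy data are illustrative assumptions, not a description of any particular company’s tooling.

```python
# A minimal sketch of a pre-deployment fairness check.
# Field names ("group", "label", "prediction") and the 5% threshold are hypothetical.

from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of dicts with 'group', 'label', and 'prediction' keys."""
    errors, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["prediction"] != r["label"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

def passes_disparity_check(records, max_gap=0.05):
    """Block deployment if the best and worst subgroup error rates
    differ by more than max_gap (an illustrative threshold)."""
    rates = error_rate_by_group(records)
    return max(rates.values()) - min(rates.values()) <= max_gap

# Toy data: one subgroup is misclassified far more often than the other.
sample = (
    [{"group": "a", "label": 1, "prediction": 1}] * 99
    + [{"group": "a", "label": 1, "prediction": 0}] * 1
    + [{"group": "b", "label": 1, "prediction": 1}] * 65
    + [{"group": "b", "label": 1, "prediction": 0}] * 35
)
print(error_rate_by_group(sample))     # {'a': 0.01, 'b': 0.35}
print(passes_disparity_check(sample))  # False: the gap blocks deployment
```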

There is a lot of appetite for innovation using AI, and for good reason. Predictive modelling has reached unprecedented accuracy in several domains, creating enormous opportunities. But one cannot be blind to the risks of deploying models that perform well yet cannot be fully explained.

It is important to decouple the pursuit of fairness in AI from immediate commercial interests and to create room for sound and universal scientific solutions to be developed. To that end, fundamental research aims to provide rigorous mathematical foundations that can help us better understand how decision-making works in AI. This is a big undertaking for the research community. And if definitive solutions are not yet scientifically accessible, then one needs to consider trading accuracy for explainability. Failure to address this gap is simply not an option.
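As one illustration of that trade-off, the sketch below (assuming the scikit-learn library, with a synthetic dataset and arbitrary model settings) contrasts a shallow decision tree, whose full decision logic can be printed and audited, with a gradient-boosted ensemble that is typically more accurate but much harder to explain.

```python
# A minimal sketch, assuming scikit-learn, of the accuracy-vs-explainability trade-off.
# The dataset is synthetic and the model settings are illustrative only.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# An interpretable model: every rule it learned can be read and challenged.
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

# A more complex model: usually more accurate, but its reasoning is opaque.
black_box = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

print("shallow tree accuracy:    ", interpretable.score(X_te, y_te))
print("boosted ensemble accuracy:", black_box.score(X_te, y_te))

# The shallow tree's entire decision logic fits on a screen; the ensemble's does not.
print(export_text(interpretable))
```

On a given problem, the gap between those two accuracy figures is the price of an explanation a decision-maker can actually defend.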

Ethics needs to remain top of mind in all product and design decisions, even if that means integrating ethical training and awareness into the core of our technical education. There also needs to be a far deeper understanding at the policy level of how AI technologies work, how they interact with human societies, and how to protect the vulnerable from the impacts that will inevitably follow.

Technologists and computer scientists must consider context when interpreting results, and, as Princeton’s Arvind Narayanan has noted, they need to avoid the too-common view that the algorithm is not to blame when the data is biased. Data collected in the real world will inevitably carry bias; the challenge, instead, is how to make algorithmic systems support human values anyway. Finally, we need to support and amplify the voices of the advocates, regulators, and journalists currently shouting for rigour, transparency, and accountability.

Things are already changing. Joy Buolamwini of the MIT Media Lab and Timnit Gebru of Microsoft Research recently exposed how the automated facial analysis algorithms used by major tech companies had been trained on datasets made up of disproportionately large percentages of lighter-skinned faces. As a result, the models’ error rate topped out at 0.8 per cent for lighter-skinned male faces, while for darker-skinned female faces it shot up to 34.7 per cent; in some cases the systems failed to detect those faces at all. At the time, these algorithms had been lauded for their technical progress. Yet studies like this show they achieved their results by effectively erasing the majority of the world’s phenotypic subgroups.

Buolamwini said that within a day of being shown the results, IBM turned around a fix. It took the fear of public exposure for the disparity even to be acknowledged as a problem, but the study moved a mountain within hours. We have more power than we realize; the only real barrier is public passivity, and the speed at which this technology is advancing means passivity is a luxury we can ill afford.

The work done right now in AI will be the foundation upon which future generations interact with technology. We can’t realistically account for every misuse, but we do have the opportunity to set these technologies on a course that minimizes harm. Let’s ensure this is the origin story future generations tell.
