How to Build Artificial Intelligence We Can Trust

Computer systems need to understand time, space and causality. Right now they don’t.

Credit: Claire Merchlinsky

By Gary Marcus and Ernest Davis

Dr. Marcus is a cognitive psychologist and robotics entrepreneur. Dr. Davis is a computer scientist.

Artificial intelligence has a trust problem. We are relying on A.I. more and more, but it hasn’t yet earned our confidence.

Tesla cars driving in Autopilot mode, for example, have a troubling history of crashing into stopped vehicles. Amazon’s facial recognition system works great much of the time, but when asked to compare the faces of all 535 members of Congress with 25,000 public arrest photos, it found 28 matches, when in reality there were none. A computer program designed to vet job applicants for Amazon was discovered to systematically discriminate against women. Every month new weaknesses in A.I. are uncovered.

The problem is not that today’s A.I. needs to get better at what it does. The problem is that today’s A.I. needs to try to do something completely different.

In particular, we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets — often using an approach known as deep learning — and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space and causality.

Today’s A.I. systems know surprisingly little about any of these concepts. Take the idea of time. We recently searched on Google for “Did George Washington own a computer?” — a query whose answer requires relating two basic facts (when Washington lived, when the computer was invented) in a single temporal framework. None of Google’s first 10 search results gave the correct answer. The results didn’t even really address the question. The highest-ranked link was to a news story in The Guardian about a computerized portrait of Martha Washington as she might have looked as a young woman.

Google’s Talk to Books, an A.I. venture that aims to answer your questions by providing relevant passages from a huge database of texts, did no better. It served up 20 passages with a wide array of facts, some about George Washington, others about the invention of computers, but with no meaningful connection between the two.
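The inference the question requires is trivial to state once the facts are represented explicitly. Here is a minimal sketch in Python, using a toy interval representation of our own devising; the class, the names and the cutoff years are illustrative assumptions, not any search engine's actual machinery:

```python
# A toy sketch (ours, not Google's) of the temporal reasoning the question needs.

class Interval:
    def __init__(self, start, end):
        self.start = start  # first year
        self.end = end      # last year

def overlaps(a, b):
    """Two intervals overlap only if each begins before the other ends."""
    return a.start <= b.end and b.start <= a.end

washington_lifetime = Interval(1732, 1799)
computer_era = Interval(1945, 2025)  # electronic computers onward; illustrative cutoff

# Owning a computer requires that the owner's lifetime overlap the era
# in which computers existed.
print(overlaps(washington_lifetime, computer_era))  # False -> the answer is no
```

The point is not that this check is hard to write. It is that today's systems represent neither the intervals nor the rule, so they cannot perform even this one-line inference.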

The situation is even worse when it comes to A.I. and the concepts of space and causality. Even a young child, encountering a cheese grater for the first time, can figure out why it has holes with sharp edges, which parts allow cheese to drop through, which parts you grasp with your fingers and so on. But no existing A.I. can properly understand how the shape of an object is related to its function. Machines can identify what things are, but not how something’s physical features correspond to its potential causal effects.

For certain A.I. tasks, the dominant data-correlation approach works fine. You can easily train a deep-learning machine to, say, identify pictures of Siamese cats and pictures of Derek Jeter, and to discriminate between the two. This is why such programs are good for automatic photo tagging. But they don’t have the conceptual depth to realize, for instance, that there are lots of different Siamese cats but only one Derek Jeter and that therefore a picture that shows two Siamese cats is unremarkable, whereas a picture that shows two Derek Jeters has been doctored.
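To make "conceptual depth" concrete, consider this toy sketch, which assumes a hypothetical detector output and a hand-written uniqueness rule; both are our illustrative inventions, and no current photo tagger supplies either:

```python
# A named individual is one of a kind; categories like "Siamese cat"
# admit arbitrarily many instances. This background fact is hand-coded here.
UNIQUE_INDIVIDUALS = {"Derek Jeter"}

def photo_is_plausible(detected_labels):
    """Flag a photo if any unique individual appears in it more than once."""
    for label in set(detected_labels):
        if label in UNIQUE_INDIVIDUALS and detected_labels.count(label) > 1:
            return False
    return True

print(photo_is_plausible(["Siamese cat", "Siamese cat"]))  # True: unremarkable
print(photo_is_plausible(["Derek Jeter", "Derek Jeter"]))  # False: likely doctored
```

A deep-learning tagger can produce the labels; what it lacks is the background knowledge that individuals, unlike categories, occur at most once per scene.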

In no small part, this failure of comprehension is why general-purpose robots like the housekeeper Rosie in “The Jetsons” remain a fantasy. If Rosie can’t understand the basics of how the world works, we can’t trust her in our home.

Without the concepts of time, space and causality, much of common sense is impossible. We all know, for example, that any given animal’s life begins with its birth and ends with its death; that at every moment during its life it occupies some particular region in space; that two animals can’t ordinarily be in the same space at the same time; that two animals can be in the same space at different times; and so on.
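To see what "building in" such a framework might mean, the facts above can be written as explicit axioms. The notation below is a toy formalization of our own choosing, with illustrative predicate names; any formal encoding would do:

```latex
% An animal exists only between its birth and its death.
\forall a\,\forall t:\ \mathrm{Alive}(a,t) \rightarrow \mathrm{birth}(a) \le t \le \mathrm{death}(a)

% At every moment of its life, an animal occupies exactly one region.
\forall a\,\forall t:\ \mathrm{Alive}(a,t) \rightarrow \exists!\,r\ \mathrm{Occupies}(a,r,t)

% Two animals can't ordinarily share a region at the same time.
\forall a,b,r,t:\ a \ne b \rightarrow \lnot\big(\mathrm{Occupies}(a,r,t) \wedge \mathrm{Occupies}(b,r,t)\big)

% Nothing above forbids Occupies(a,r,t1) and Occupies(b,r,t2) for t1 != t2:
% the same space at different times is permitted by omission.
```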

We don’t have to be taught this kind of knowledge explicitly. It is the set of background assumptions, the conceptual framework, that makes possible all our other thinking about the world.

Yet few people working in A.I. are even trying to build such background assumptions into their machines. We’re not saying that doing so is easy — on the contrary, it’s a significant theoretical and practical challenge — but we’re not going to get sophisticated computer intelligence without it.

If we build machines equipped with rich conceptual understanding, some other worries will go away. The philosopher Nick Bostrom, for example, has imagined a scenario in which a powerful A.I. machine instructed to make paper clips doesn’t know when to stop and eventually turns the whole world — people included — into paper clips.

In our view, this kind of dystopian speculation arises in large part from thinking about today’s mindless A.I. systems and extrapolating from them. If all you can calculate is statistical correlation, you can’t conceptualize harm. But A.I. systems that know about time, space and causality are the kinds of things that can be programmed to follow more general instructions, such as “A robot may not injure a human being or, through inaction, allow a human being to come to harm” (the first of Isaac Asimov’s three laws of robotics).

We face a choice. We can stick with today’s approach to A.I. and greatly restrict what the machines are allowed to do (lest we end up with autonomous-vehicle crashes and machines that perpetuate bias rather than reduce it). Or we can shift our approach to A.I. in the hope of developing machines that have a rich enough conceptual understanding of the world that we need not fear their operation. Anything else would be too risky.

Gary Marcus (@GaryMarcus), the founder and chief executive of Robust AI, and Ernest Davis, a professor of computer science at New York University, are the authors of the forthcoming book “Rebooting AI: Building Artificial Intelligence We Can Trust,” from which this essay is adapted.
