AI’s desire

It’s easy to imagine an AI winning a game of Go, but can you imagine an AI wanting to play a game of Go?

By Mike Loukides
June 26, 2018
Robot love (source: Pixabay)

At the Artificial Intelligence Conference in New York, Kathryn Hume pointed me to Ellen Ullman’s excellent book, Life in Code: A Personal History of Technology. There’s a lot worth reading here, particularly Ullman’s early essays about her days as a woman working in the male-dominated field of programming.

In Part 3 of her book, “Life, Artificial,” Ullman talks about artificial intelligence, robotics, and the desire to create artificial life. On our attempts to build artificial life, she writes:


What these views of human sentience have in common, and why they fail to describe us, is a certain disdain for the body: the utter lack of a body in early AI and in later formulations like Kurzweil’s (the lonely cortex, scanned and downloaded, a brain-in-a-jar); and the disregard for this body, this mammalian flesh, in robotics and ALife [Artificial Life].

By connecting the poverty of AI to its denial of the body, Ullman follows an important thread in feminist theory: our thinking needs to be connected to bodies, to physical human processes, to blood and meat. The male-dominated Western tradition is all about abstraction, for which Plato is the poster child. And abstraction is also one of the most important themes in programming. As the saying goes, “there is no problem in computing that can’t be solved by adding a layer of abstraction.” It’s ironic, and I think intentionally so, that the most important voice of abstraction in Ullman’s discussion of robotics is a woman, Cynthia Breazeal.

So, there’s a horror of meat, of meatspace, baked into our thoughts about robotics. Ullman writes: “This suspicion of the flesh, this quest for a disembodied intelligence, persists today.” Her frustration, in the book’s first chapters, with human conversation that had devolved into vocal email messages certainly feeds her critique of abstraction. So much of what it means to be human is bound up in the reality of meat, of fluid, of our bodies.

I’ve always wondered what you would find if you cut open one of the robots from Asimov’s Foundation books. Meat? Metallic parts? Asimov’s robots are indistinguishable from humans. Philip K. Dick has a similar problem in Do Androids Dream of Electric Sheep?, though he’s much more attuned to the ironies and contradictions of robotics. Dick’s robots are clearly sentient, and they can have sex with humans (illegal, but you know…), but they also clearly have “parts” inside, some sort of electro-mechanical nervous system. If you shoot them, they go “sproing” (or something like that). You don’t really know if they’re robots until they’re dead. But once they’re dead, there’s clearly something in them that makes it obvious.

Shortly before Ullman got to bodies, I started thinking about desire, a theme Ullman picks up a chapter or two later. Desire, of course, is an important theme in critical thought. But we don’t have to go that far. What would an artificial intelligence want? And that’s connected with the body problem: what would an artificial intelligence feel?

That’s where things become difficult. We know we can create machines that play Go better than humans. I’m willing to grant that a computer will eventually be able to do just about anything better than I can. There are already programs that can write articles based on a data feed and programs that can play the piano; add a robotic body, and we have programs that can walk to the mailbox; add some cameras, and we have programs that can look at birds, flowers, and other objects.

I’ve criticized the hype about modern AI (not the AI itself) because all these tasks are currently separate. The program that plays Go can’t pick up the mail, and so on. A “general intelligence” will have to be able to solve all these problems and more. But let’s start exercising our imagination (still a uniquely human capability) and see where we can go. Can I imagine software that does everything I listed above, plus a lot more? Yes. I don’t think we’ll have it for a while, but it’s fundamentally an integration problem. Once we solve the individual pieces, assembling them into a whole should be possible.

Here’s where it gets difficult. Can I imagine that software wanting to play a game of Go? Can I imagine walking by and hearing a computer say “Hey, Mike, want a game?” Well, sure: you can take AlphaGo and connect it to an Amazon Echo, and have it ask passers-by whether they want to play a game. With a camera and computer vision, it could even identify potential opponents by name.

But does that mean the computer wants to play a game? That’s where my imagination runs into trouble. I don’t even know what that question means. How would a computer decide whether to play Go or look at flowers or listen to music? I can imagine a programmable sense of aesthetics; but I still can’t imagine the desire to do this rather than that. And can I imagine a program that says “I’d like to play piano better, so I’ll spend some time practicing”? After all, machine learning isn’t about devices that outperform humans ex nihilo; AI systems get good at what they do through training, which looks an awful lot like practice (and is even more laborious). A program could certainly detect an unacceptable failure rate, and put itself back into training mode. But would a robot want to practice just for the sake of practice?
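To be fair, the mechanical half of that sentence is easy to sketch. Here’s a minimal, hypothetical example (the Model class, the success logic, and the threshold are all stand-ins I’ve invented for illustration, not any real system’s API) of a program that monitors its own failure rate and drops back into training when it slips. It also illustrates the problem: the “decision” to practice is a threshold somebody else chose, not anything that looks like wanting to improve.

```python
# Hypothetical sketch: a program that "decides" to practice by retraining
# itself when its failure rate becomes unacceptable. Model, attempt(), and
# the thresholds are invented stand-ins, not a real library's API.

import random

MAX_FAILURE_RATE = 0.2   # chosen by a human, not by the machine
MIN_ATTEMPTS = 50        # don't judge the failure rate on too few tries


class Model:
    """Stand-in for a trained model whose skill improves with practice."""

    def __init__(self) -> None:
        self.skill = 0.7  # probability of handling a task correctly

    def attempt(self, task) -> bool:
        # The task itself is ignored; success is just a coin flip at `skill`.
        return random.random() < self.skill

    def train(self, rounds: int = 100) -> None:
        # "Practice": each round nudges skill upward with diminishing returns.
        for _ in range(rounds):
            self.skill += (1.0 - self.skill) * 0.01


def run(model: Model, tasks) -> None:
    attempts = failures = 0
    for task in tasks:
        attempts += 1
        if not model.attempt(task):
            failures += 1
        if attempts >= MIN_ATTEMPTS and failures / attempts > MAX_FAILURE_RATE:
            model.train()            # the "desire" to practice is this line
            attempts = failures = 0  # start measuring again from scratch


if __name__ == "__main__":
    run(Model(), tasks=range(1_000))
```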

I have the same problem with sentience. Yes, we can give an AI a body, making it a robot. We can build sensors into that body. And we can make the robot groan or cry out if it is injured. But I still can’t imagine the robot feeling pain in the way a human or an animal feels pain.

The reason I can’t imagine a robot’s desire, or its pain, has nothing to do with the capabilities, real or imagined, of our hardware or software. It has everything to do with the substrate: I can’t imagine putting that desire into silicon (or DNA, or whatever our future computers are made from). This is the point where my imagination fails. I can’t imagine making something that wants.

It’s not just that I don’t know how; I don’t know how to write a program that plays Go either, but I know that people can. There’s a point at which my ability to imagine just stops, and that’s it. The “made-ness” of the thing, the fact that I have seen the interior, makes it impossible for me to imagine desire, volition, sentience, any kind of interiority. The paradox of interiority is that it doesn’t exist if you can see the interior.

And do we care? If so, why? Consider William Carlos Williams’ short poem, “This Is Just to Say,” which briefly became a Twitter meme. I can’t reproduce it because of copyright, but in it the speaker apologizes for eating some delicious cold plums that someone was apparently saving to eat later.

Is that poem important because of the human consciousness behind it? Would it be the same if it were just a clever arrangement of words? The poem is purposely neither clever nor ornate. If a machine at DeepMind spat out two sentences like these, attached a name, and published them, would I care? And why? Does it matter that there’s an observer (and an eater) of the plums, and that this message is addressed to someone? A computer could certainly use the words “delicious” and “cold,” but do we care that it might not experience the sensuality of cold, delicious plums?

This poem is a statement by someone who needs to be forgiven to someone who forgives; it’s a statement about breakfast, about desires. Could an artificial intelligence desire plums, breakfast, or forgiveness? If computing is ultimately about abstraction, whether that’s an abstraction from flesh or the abstraction of reducing conversation to email, the central fact of desire, and of Williams’ poem, is that it defies abstraction.

According to Ullman, Breazeal thinks that computers will have desires that aren’t similar to ours: desires for repair, fuel, whatever. Ullman has trouble imagining a robot’s “interior life” and doesn’t find what she can imagine at all compelling: better to be a human and want human things than to wax ecstatic over machines that have become connoisseurs of electric current. Writers like Asimov damaged the discourse on AI before it even got started, by imagining machines that were indistinguishable from humans, without thinking enough about how machines would be different.

But I wonder: is my inability to imagine an AI with desire simply a failure of my imagination? Or is it a fundamental limitation to what is buildable?
