Empathy in the Age of AI

Anthropomorphism was once considered a danger to the animal world. It's not so simple.

If you think your dog loves you, you’re a fool. If you feel a kinship with a tree, you’re a hippie. And if you over-empathize with a wild animal, you must be wearing cheetah prints and a flower crown, because you are Carole Baskin. The imperative to be on guard against anthropomorphism infuses almost every aspect of modern life. Yet many people would struggle to articulate why, exactly, attributing human qualities to nonhuman entities—from gorillas to large language models—is so woefully naive. 

Anti-anthropomorphism has deep roots. In the 20th century, scientists sallied forth on a quixotic quest to see animals objectively. To do it, they tried to strip away human assumptions about biology, social structure, animal behavior, and more. Eventually, this ideal hardened into a dominant ideology, says ecologist Carl Safina. At one point, anthropomorphism was called the “worst of ethological sins” and a danger to the animal world. But the next generation of field ecologists, including Jane Goodall and Frans de Waal, pushed back, infusing their observations with empathy. “I don’t know people anymore who study animals and insist that anthropomorphism is out of bounds,” Safina says.

Still, play-acting a vigilant anti-anthropomorphism comes off as enlightened in certain circles—in conversations about animals and, increasingly, about artificial intelligence. As machines get better and better at mimicking humans, from the artistry of DALL-E to the lifelike interlocutor ChatGPT, we appear more inclined to see our ghost in every machine. Do existing technologies really “think” or “see”? Did the Amazon Echo really need a human name? According to some scholars, projecting our humanity onto AI could have real consequences, from further obscuring the way these systems actually function to reinforcing a dubious notion of the human mind as the sole, or superior, model of intelligence.

But anthropomorphism is a tool like any other—used to better and worse ends in humanity’s endless pursuit of understanding a complicated world. Figuring out when and how to apply such a tool is more urgent than ever, as a mass extinction snuffs out nonhuman intelligences and new artificial systems come online every day. How we interact with these entities, both animal and artificial, is fast becoming one of the defining challenges of this century.

At its most basic, anthropomorphism is a form of metaphorical thinking that enables us to draw comparisons between ourselves and the world around us. It can also be understood as one of countless byproducts of what neuroscientists call theory of mind—the ability to distinguish one’s mind from the minds of others, and then infer what those others are thinking or feeling.

Theory of mind is an essential ingredient in all kinds of human social interaction, from empathy to deception. Even so, it remains an imperfect instrument. “The easiest access we have is to ourselves,” says Heather Roff, a researcher focused on the ethics of emerging technology. “I have a theory of mind because I know me, and you are sufficiently like me.” But an n of 1 is a fragile thing, and anyone can find themselves stumped by an individual they deem “unreadable” or by the “shock” of a culture very different from their own.

Despite these challenges, humans appear to be driven to see others as minded (or, put another way, to perceive persons). We seem to reflexively believe that other entities have their own thoughts and emotions. At the same time, many people internalize beliefs that cut against this capacity, routinely denying the mindedness of children, women, people of color, people with mental illness or developmental disability, and nonhuman animals.

In the face of such erasure, anthropomorphism can seem almost virtuous. We should see ourselves in all manner of others! Sy Montgomery, Sabrina Imbler, and Ed Yong are just a few of the contemporary voices advocating for a radical interspecies empathy. In Braiding Sweetgrass, Robin Wall Kimmerer, a botanist and member of the Citizen Potawatomi Nation, writes about the divide between Western scientific and Indigenous views of nature: as “object” vs. “subject,” as noun vs. verb, as inert substance vs. a being with agency—or, in Kimmerer’s words, animacy.

Machine intelligence complicates this call to see personhood in the world around us. Despite claims that Google’s LaMDA is not just sentient but has a soul, most theorists believe that these and other hallmarks of consciousness (or something like it) are still decades away, at best. As it stands, existing AI is actually pretty stupid, and entirely dependent on humans for further development. It may excel in a specific domain, but we have nothing near generalized, let alone super, intelligence. Even within a single domain, the limitations are profound; ChatGPT may spit out convincing text, but it doesn’t understand a word it has said.

Most of AI’s shortcomings—and strengths—are poorly understood by the general public (and sometimes even by the supposed experts). At times, AI’s capacities appear to be intentionally dramatized. And many projects are explicitly modeled on human cognition and designed to mimic human behaviors, making it hard to dismiss the like-mindedness one might sense in a social media algorithm or a Google search recommendation, even if it’s ultimately undeserved. The end result is that many people are eager to ascribe mindedness to pieces of machinery and bits of code.

There are real reasons to resist this impulse. AI’s ethical problems currently reside in how humans use these technologies against other humans—not in the legal or moral “rights” of the AI itself. We don’t need to worry about “AI killer robots” nearly as much as we need to worry about humans using robots to kill. And while AI might effectively imitate aspects of human intelligence, it operates in meaningfully different ways. DALL-E has no hands to grasp a paintbrush, let alone an artistic vision of its own to execute; it’s a statistical model trained to emulate human artists. That’s a fundamentally different way of “creating,” with ramifications all its own. 

We probably won’t want to build AI that copies us for much longer, either. “If I’m optimizing for something, I want it to be better than my own senses,” Roff says. AI of the future should be like the dolphins she trained to detect underwater mines for the US military using their echolocation: “They don’t perceive like us,” Roff says, and that’s the point.

The cultural fixation on anthropomorphism has allowed people to overlook an altogether more threatening bias: anthropofabulation. The clunky term, coined by philosopher Cameron Buckner, describes the tendency to use an inflated sense of human potential as the ruler by which we measure all other forms of intelligence. In this framework, humans underestimate dolphin minds and overstate artificial intelligence for the same reason: When we see ourselves as the best, we think whatever is more like us is better.

Ironically, anthropomorphism, or tactics like it, might be one way of reducing the harms of such rank elitism. By understanding how our own theory of mind makes sense of the “other” (or fails to do so) and appreciating the variety of intelligences already on Earth, we can begin to relate to other entities more responsibly. When it comes to the animal world, there are a dozen ways to anthropomorphize with caution. There is a spiritual path, evident in Kimmerer’s work. Imbler recently argued for intimacy with sea blobs, which are both confounding and, as with all life on Earth, kin. And Yong’s recent work draws on studies of bats’ echolocation and dogs’ olfaction to help readers see animals as they see themselves.

These approaches are all rooted in empathy, and also a kind of objectivity that flows from a commitment to witness both similarity and difference. “If you observe other animals, and you conclude that they have thoughts and emotions, then that’s not projecting,” Safina says, “that’s observing.” 

AI will require a more subtle application of these principles. By and large, anthropomorphism and anthropofabulation distract us from seeing AI as it actually is. As AI grows more intelligent, and our understanding of it deepens, our relationship to it will necessarily change. By 2050, the world may need a Jane Goodall for robots. But for now, projecting humanity onto technology obscures more than it reveals.