
Hitting the Books: The case against tomorrow's robots looking like people

The world doesn't need more Sonnys, but a few extra Johnny 5s wouldn't hurt.


Who wouldn't want an AI-driven robot sidekick: a little mechanical pal, trustworthy and supportive — the perfect teammate? But should such an automaton be invented, would it really be your teammate, an equal partner in your adventurous endeavors? Or would it simply be a tool, albeit a wildly advanced one by today's standards? In the excerpt below from Human-Centered AI, author and University of Maryland professor emeritus Ben Shneiderman examines the pitfalls of our innate desire to humanize the mechanical constructs we build, and how we shortchange their continued development by doing so.


Excerpted from Human-Centered AI by Ben Shneiderman. Published by Oxford University Press. Copyright © 2021 by Ben Shneiderman. All rights reserved.


Teammates and Tele-bots

A common theme in designs for robots and advanced technologies is that human–human interaction is a good model for human–robot interaction, and that emotional attachment to embodied robots is an asset. Many designers never consider alternatives, believing that the way people communicate with each other, coordinate activities, and form teams is the only model for design. The repeated missteps stemming from this assumption do not deter others who believe that this time will be different, that the technology is now more advanced, and that their approach is novel.

Numerous psychological studies by Clifford Nass and his team at Stanford University showed that when computers are designed to be like humans, users respond and engage in socially appropriate ways. Nass’s fallacy might be described as this: since many people are willing to respond socially to robots, it is appropriate and desirable to design robots to be social or human-like.

However, what Nass and colleagues did not consider was whether other designs, which were not social or human-like, might lead to superior performance. Getting beyond the human teammate idea may increase the likelihood that designers will take advantage of unique computer features, including sophisticated algorithms, huge databases, superhuman sensors, information abundant displays, and powerful effectors. I was pleased to find that in later work with grad student Victoria Groom, Nass wrote: “Simply put, robots fail as teammates.” They elaborated: “Characterizing robots as teammates indicates that robots are capable of fulfilling a human role and encourages humans to treat robots as human teammates. When expectations go unmet, a negative response is unavoidable.”

Lionel Robert of the University of Michigan cautions that human-like robots can lead to three problems: mistaken usage based on emotional attachment to the systems, false expectations of robot responsibility, and incorrect beliefs about appropriate use of robots. Still, a majority of researchers believe that robot teammates and social robots are inevitable. That belief pervades the human–robot interaction research community which “rarely conceptualized robots as tools or infrastructure and has instead theorized robots predominantly as peers, communication partners or teammates.”

Psychologist Gary Klein and his colleagues clarify ten realistic challenges to making machines behave as effectively as human teammates. The challenges include making machines that are predictable, controllable, and able to negotiate with people about goals. The authors suggest that their challenges are meant to stimulate research and also “as cautionary tales about the ways that technology can disrupt rather than support coordination.” A perfect teammate, buddy, assistant, or sidekick sounds appealing, but can designers deliver on this image or will users be misled, deceived, and disappointed? Can users have the control inherent in a tele-bot while benefiting from the helpfulness suggested by the teammate metaphor?

My objection is that human teammates, partners, and collaborators are very different from computers. Instead of these terms, I prefer to use tele-bots to suggest human controlled devices. I believe that it is helpful to remember that “computers are not people and people are not computers.”

Figure 14.1 (Oxford University Press)

Margaret Boden, a long-term researcher on creativity and AI at the University of Sussex, makes an alternate but equally strong statement: “Robots are simply not people.” I think the differences between people and computers include the following:

Responsibility: Computers are not responsible participants, neither legally nor morally. They are never liable or accountable. They are a different category from humans. This continues to be true in all legal systems and I think it will remain so. Margaret Boden continues with a straightforward principle: “Humans, not robots, are responsible agents.” This principle is especially true in the military, where chain of command and responsibility are taken seriously. Pilots of advanced fighter jets with ample automation still think of themselves as in control of the plane and responsible for their successful missions, even though they must adhere to their commander’s orders and the rules of engagement. Astronauts rejected designs of early Mercury capsules which had no window to eyeball the re-entry if they had to do it manually — they wanted to be in control when necessary, yet responsive to Mission Control’s rules. Neil Armstrong landed the Lunar Module on the Moon—he was in charge, even though there was ample automation. The Lunar Module was not his partner. The Mars Rovers are not teammates; they are advanced automation with an excellent integration of human tele-operation with high levels of automatic operation.

It is instructive that the US Air Force shifted from using the term unmanned autonomous/aerial vehicles (UAVs) to remotely piloted vehicles (RPVs) so as to clarify responsibility. Many of these pilots work from a US Air Force base in Nevada to operate drones flying in distant locations on military missions that often have deadly outcomes. They are responsible for what they do and suffer psychological trauma akin to what happens to pilots flying aircraft in war zones. The Canadian Government has a rich set of knowledge requirements that candidates must have to be granted a license to operate a remotely piloted aircraft system (RPAS). Designers and marketers of commercial products and services recognize that they and their organizations are the responsible parties; they are morally accountable and legally liable. Commercial activity is further shaped by independent oversight mechanisms, such as government regulation, industry voluntary standards, and insurance requirements.

Distinctive capabilities: Computers have distinctive capabilities of sophisticated algorithms, huge databases, superhuman sensors, information-abundant displays, and powerful effectors. To buy into the metaphor of “teammate” seems to encourage designers to emulate human abilities rather than take advantage of the distinctive capabilities of computers. One robot rescue design team described their project to interpret the robot’s video images through natural language text messages to the operators. The messages described what the robot was “seeing” when a video or photo could deliver much more detailed information more rapidly. Why settle for human-like designs when designs that make full use of distinctive computer capabilities would be more effective?

Designers who pursue advanced technologies can find creative ways to empower people so that they are astonishingly more effective—that’s what familiar supertools have done: microscopes, telescopes, bulldozers, ships, and planes. Empowering people is what digital technologies have also done, through cameras, Google Maps, web search, and other widely used applications. Cameras, copy machines, cars, dishwashers, pacemakers, and heating, ventilation, and air conditioning (HVAC) systems are not usually described as teammates—they are supertools or active appliances that amplify, augment, empower, and enhance people.

Human creativity: The human operators are the creative force — for discovery, innovation, art, music, etc. Scientific papers are always authored by people, even when powerful computers, telescopes, and the Large Hadron Collider are used. Artworks and music compositions are credited to humans, even if rich composition technologies are heavily used. Human qualities such as passion, empathy, humility, and intuition, which are often described in studies of creativity, are not readily matched by computers. Another aspect of creativity is to give human users of computer systems the ability to fix, personalize, and extend the design for themselves, or to provide feedback to developers so they can make improvements for all users. The continuous improvement of supertools, tele-bots, and other technologies depends on human input about problems and suggestions for new features. Those who promote the teammate metaphor are often led down the path of making human-like designs, which have a long history of producing appealing robots but succeed only as entertainment, crash test dummies, and medical mannequins. I don’t think this will change. There are better designs than human-like rescue robots, bomb disposal devices, or pipe inspectors; four-wheeled or treaded vehicles, usually tele-operated by a human controller, are typical.

The da Vinci surgical robot is not a teammate. It is a well-designed tele-bot that enables surgeons to perform precise actions in difficult-to-reach small body cavities (Figure 14.1, above). As Lewis Mumford reminds designers, successful technologies diverge from human forms. Intuitive Surgical, the developer of the da Vinci systems for cardiac, colorectal, urological, and other surgeries, makes clear that “Robots don’t perform surgery. Your surgeon performs surgery with Da Vinci by using instruments that he or she guides via a console.”

Many robotic devices are heavily tele-operated, with an operator controlling activities even though there is a high degree of automation. For example, drones are tele-bots, even though they have the capacity to automatically hover or orbit at a fixed altitude, return to their take-off point, or follow a series of operator-chosen GPS waypoints. The NASA Mars Rover vehicles also have a rich mixture of tele-operated features and independent movement capabilities, guided by sensors that detect obstacles or precipices and plans to avoid them. The control centers at NASA’s Jet Propulsion Laboratory have dozens of operators who control various systems on the Rovers, even when they are hundreds of millions of miles away. It is another excellent example of combining high levels of human control and high levels of automation.

Terms like tele-bots and telepresence suggest alternative design possibilities. These instruments enable remote operation and more careful control of devices, such as when tele-pathologists control a remote microscope to study tissue samples. Combined designs take limited, yet mature and proven features of teammate models and embed them in devices that augment humans by direct or tele-operated controls.

Another way that computers can be seen as teammates is by providing information from huge databases and superhuman sensors. When the results of sophisticated algorithms are displayed on information-abundant displays, such as in three-dimensional medical echocardiograms with false color to indicate blood flow volume, clinicians can be more confident in making cardiac treatment decisions. Similarly, users of Bloomberg Terminals for financial data see their computers as enabling them to make bolder choices in buying stocks or rebalancing mutual fund retirement portfolios (Figure 14.2, below). The Bloomberg Terminal uses a specialized keyboard and one or more large displays, with multiple windows typically arranged by users to be spatially stable so they know where to find what they need. With tiled, rather than overlapped, windows users can quickly find what they want without rearranging windows or scrolling. The voluminous data needed for a decision is easily visible, and clicking in one window produces relevant information in other windows. More than 300,000 users pay $20,000 per year to have this supertool on their desks.

Figure 14.2 (Oxford University Press)

In summary, the persistence of the teammate metaphor shows that it appeals to many designers and users. While users should feel fine about describing their computers as teammates, designers who harness the distinctive features of computers, such as sophisticated algorithms, huge databases, superhuman sensors, information-abundant displays, and powerful effectors, may produce more effective tele-bots that are appreciated by users as supertools.