The New Laws of Robotics
by Professor Frank Pasquale
After decades of technological breakthroughs, artificial intelligence is now a part of our daily lives. Guiding principles for AI law and policy are essential if we are to capitalize on its immense potential while continuing to center human expertise in the economy and staving off the rise of the machines.
The stakes of technological advance rise daily. Combine facial recognition databases with ever-cheaper micro-drones, and you have an anonymous global assassination force of unprecedented precision and lethality. But what can kill can also cure; robots could vastly expand access to medicine if we invested more in researching and developing them. Already, businesses are taking thousands of small steps toward automating hiring, customer service, and even management.

All these developments change the balance between machines and humans in the ordering of our daily lives. Right now, artificial intelligence and robotics most often complement, rather than replace, human labor. In many areas, we should use our existing institutions of governance to maintain this status quo. Avoiding the worst outcomes in the AI revolution while capitalizing on its potential will depend on our ability to cultivate wisdom about this balance.

However, attaining this result will not be easy. A narrative of mass unemployment now grips policymakers, who are envisioning a future where human workers are rendered superfluous by ever-more-powerful software, robots, and predictive analytics that perform jobs just as well at a fraction of present wages. This vision offers stark alternatives: make robots, or be replaced by them.

Another story is possible and, indeed, more plausible. In virtually every walk of life, robotic systems can make labor more valuable, not less. Even now, doctors, nurses, teachers, home health aides, journalists, and others are working with roboticists and computer scientists to develop tools for the future of their professions, rather than meekly serving as data sources for their future replacements. Their cooperative relationships prefigure the kind of technological advance that could bring better healthcare, education, and more to all of us, while maintaining meaningful work.

They also show how law and public policy can help us achieve peace and inclusive prosperity, rather than a “race against the machines.” We can do so only if we update the laws of robotics that guide our vision of technological progress.

The Old Laws
In the 1942 short story “Runaround,” science fiction writer Isaac Asimov delineated three laws for his mechanical characters:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov’s laws of robotics have been enormously influential for science fiction writers and the technologists inspired by them. They seem clear-cut, but they are not easy to apply. Consider, for instance, whether Asimov’s laws allow robotic cars. Self-driving vehicles promise to eliminate many thousands of traffic fatalities each year, but may also put hundreds of thousands of paid drivers out of work. Does that harm entitle governments to ban or slow down the adoption of self-driving cars? These ambiguities, and many more, are why the statutes, regulations, and court cases affecting robotics and AI in our world are more fine-grained than Asimov’s laws.

I propose four new laws of robotics to guide us on the road ahead. They are directed toward the people building robots, not the robots themselves, and better reflect how actual lawmaking is accomplished.

Rule One: Robotic systems and AI should complement professionals, not replace them
For policymakers, it is still an open question which barriers to robotization make sense, and which deserve scrutiny and removal. Robotic meatcutters make sense; robotic day care gives us pause. Is this caution a mere Luddite reaction, or does it reflect a deeper wisdom about the nature of childhood?

Numerous factors matter in the rush to automation, many specific to jobs and jurisdictions. But one organizing principle is the importance of meaningful work to the self-worth of persons and the governance of communities. A humane agenda for automation would prioritize innovations that complement workers in jobs that are, or ought to be, fulfilling vocations. It would substitute machines for humans in dangerous or degrading work, while ensuring that those presently doing that work are fairly compensated for their labor and offered a transition to other social roles.

Rule Two: Robotic systems and AI should not counterfeit humanity
From Asimov’s time to the vertiginous mimicry of Ex Machina and Westworld, the prospect of humanoid robots has been both fascinating and frightening. Machine learning programs have already mastered the art of creating pictures of “fake people,” and convincing synthetic voices may soon become common. As engineers scramble to fine-tune these algorithms, a larger question goes unasked: Do we want to live in a world where human beings do not know whether they are dealing with a fellow human or a machine?

Despite a growing ethical consensus that the use of algorithms and smart machines should be disclosed to the people who interact with them, there are subfields of AI devoted to making it ever more difficult for us to distinguish between humans and machines. These research projects might culminate in a creation like the advanced androids of our science fiction films, indistinguishable from a human being. Yet in hospitals, schools, police stations, and even manufacturing facilities, there is little to gain by embodying software in humanoid bodies, and plenty to lose.
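What such disclosure could look like in software is simple enough. The sketch below is a minimal, hypothetical illustration (the function names and message format are assumptions of mine, not any existing standard): a conversational agent whose every reply carries an unambiguous statement that the speaker is a machine.

```python
# A hypothetical sketch, not a standard: a conversational agent that attaches
# an unambiguous machine-identity disclosure to every reply it produces.
from dataclasses import dataclass

DISCLOSURE = "[Automated system] You are communicating with a machine, not a person."

@dataclass
class DisclosedReply:
    disclosure: str  # always present; displayed before the content
    content: str     # the generated answer itself

def generate_answer(user_message: str) -> str:
    # Stand-in for a real language-model call.
    return f"(generated response to {user_message!r})"

def reply(user_message: str) -> DisclosedReply:
    """Wrap every answer so the disclosure cannot be silently dropped."""
    return DisclosedReply(disclosure=DISCLOSURE, content=generate_answer(user_message))

print(reply("When is the clinic open?"))
```

The point of the design is that the disclosure is part of the reply type itself, not an afterthought a deployer can quietly remove.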

Rule Three: Robotic systems and AI should not intensify zero-sum arms races
Debates over “killer robots” are a central theater for ethics in international law. There are many scenarios where a future arms race could begin. As AI and robotics enter the picture, the stakes of falling behind one’s rivals rise, since emerging technologies promise to be much more targeted, ubiquitous, and rapidly deployed. These technologies and tactics, including new weapons systems, automated cyberattacks, and disinformation campaigns, threaten to disrupt long-settled expectations about the purpose and limits of international conflict. We must find new ways of limiting their development and impact.

Deadly and invasive technologies pioneered by armies could be used beyond the battlefield. Today, a growing number of law enforcement agencies aim to use facial recognition to scan crowds for criminals. In China, the government utilizes “social credit scores” created from surveillance data to determine which trains or planes a citizen can board, which hotels a person can stay in, and which schools a family’s children can attend.

Some applications of these systems may be quite valuable, such as public health surveillance that accelerates contact tracing to stop the spread of infectious disease. However, when the same powerful capacities are ranking and rating everyone at all times, they become oppressive.

Rule Four: Robotic systems and AI must always indicate the identity of their creator(s), controller(s), and owner(s)
There is a nebulous notion of “out of control” robots that escape their creators. Perhaps such accidents are unavoidable. Nevertheless, some person or entity should be responsible for them. A requirement that any AI or robotics system have a designated party responsible for its actions would help squelch projects to build unaccountable machines, which could be just as dangerous as the unregulated bioengineering of viruses.

Of course, some robots and algorithms will evolve away from the ideals programmed into them by their owners, as a result of interactions with other persons and machines. Whatever affects the evolution of such machines, the original creator should be obliged to build in certain constraints on the code’s evolution to both record influences and prevent bad outcomes. Once another person or entity hacks into or disables those constraints, the hacker is responsible for the robot’s wrongdoing.
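What might such constraints look like in practice? One minimal sketch, under assumptions of my own rather than any mandated scheme, is a provenance record bound to the system: it names the creator, controller, and owner, and keeps a hash-chained, tamper-evident log of every influence on the machine’s behavior, so that responsibility can be traced even as the system evolves.

```python
# Hypothetical sketch of Rule Four: a tamper-evident provenance record.
# Class and field names are illustrative assumptions, not an established standard.
import hashlib
import json
from datetime import datetime, timezone

class ProvenanceRecord:
    """Names responsible parties and keeps a hash-chained log of influences."""

    def __init__(self, creator: str, controller: str, owner: str):
        self.parties = {"creator": creator, "controller": controller, "owner": owner}
        self.log: list[dict] = []
        self._last_hash = "0" * 64  # genesis value for the hash chain

    def record_influence(self, source: str, description: str) -> None:
        """Append an event; chaining hashes makes silent edits detectable."""
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "source": source,
            "description": description,
            "prev_hash": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.log.append(entry)
        self._last_hash = entry["hash"]

# Usage: every update that shapes the machine's behavior leaves a traceable entry.
# "Acme Robotics" and "City Hospital" are invented names for illustration only.
record = ProvenanceRecord(creator="Acme Robotics", controller="City Hospital",
                          owner="City Hospital")
record.record_influence("vendor-update-1.2", "retrained grasping model on new data")
```

Disabling or falsifying such a record is precisely the kind of act that would shift responsibility from the original creator to the hacker.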

A Future of Opportunity
Conversations about robots tend toward the utopian (“machines will do all the dirty, dangerous, or difficult work”) or the dystopian (“…and all the rest, creating mass unemployment”). But the future of automation in the workplace—and well beyond—will hinge on millions of small decisions about how to develop AI. How far should machines be entrusted to take over tasks previously performed by humans? What is gained and lost when they do so? What is the optimal mix of robotic and human interaction? And how do various rules—whether codes of professional ethics, insurance policies, or statutes—influence the scope and pace of robotization in our daily life? Answers to these questions can substantially determine whether automation promises a robot revolution or a slow, careful improvement in how work is done.

Too many technologists aspire to rapidly replace human beings in areas where we lack the data and algorithms to do the job well. Meanwhile, politicians have tended toward fatalism, routinely lamenting that regulators and courts cannot keep up with technological advance.

Both triumphalism in the tech community and minimalism among policymakers are premature. As robots enter the workforce, we have a golden opportunity to shape their development with thoughtful legal standards for privacy and consumer protection. We can channel technology through law. We can uphold a culture of maintenance over disruption, of complementing human beings rather than replacing them. We can attain and afford a world ruled by persons, not machines. The future of robotics can be inclusive and democratic, reflecting the efforts and hopes of all citizens. And new laws of robotics can guide us on this journey.

Excerpted from New Laws of Robotics: Defending Human Expertise in the Age of AI by Frank Pasquale, published by the Belknap Press of Harvard University Press. Copyright © 2020 by the President and Fellows of Harvard College. Used by permission. All rights reserved.
Frank Pasquale, professor of law, is a noted authority and scholar on the law of artificial intelligence, algorithms, and machine learning, focusing on how information is used across areas including health law, commerce, and tech. His previous book, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015), has been recognized internationally as a landmark study on how “big data” affects our lives. He is also coeditor of The Oxford Handbook of Ethics of AI (Oxford University Press, 2020). Pasquale chairs the Subcommittee on Privacy, Confidentiality, and Security, part of the National Committee on Vital and Health Statistics, where he is serving a four-year term.
