
Why Regulating AI Is A Mistake

POST WRITTEN BY
Amitai Etzioni and Oren Etzioni

President-elect Trump has met with technology leaders in an effort to open lines of communication and discuss business after months of two-way criticism. It's no secret that Silicon Valley largely supported Hillary Clinton, who had aligned herself with the technology community, while over the last few years Mr. Trump has criticized Apple's iPhones, accused Facebook, Google, and Twitter of burying negative news about Democrats, and picked a fight with Jeff Bezos on Twitter, insinuating that Bezos bought the Washington Post to gain political influence for Amazon's business. But while this meeting may smooth over relations with the technology community, a longer conversation is needed with President-elect Trump, whose presidency sits at a tipping point in technology. In the next four years we will see an explosion of AI technology that further delivers on the promise of driverless cars, intelligent robots, and other advancements that will reshape society and jobs. The conversation needed is about how to, or more precisely, how not to, regulate AI.

The recent death of a Tesla driver, whose car was speeding while in Autopilot mode, has lit a fire under those who call for the regulation of AI. They include Elon Musk, renowned scholars such as Stephen Hawking, and the UN. A leading attorney, writing in the Harvard Journal of Law & Technology, even calls for setting up a new FDA to "certify" the safety of various AI programs. All these arguments deserve to be critically examined, even if the incoming Congress and the new President are hardly favorable to new regulations.

AI use will need some kind of oversight, but hardly a regulatory regime. Oversight is called for because, over the coming decades, AI-equipped ("smart") machines will increasingly acquire two unique attributes: some degree of autonomous decision making and the ability to learn from experience. As a result, over time, smart machines could stray further from their programmers' instructions than happens at present. For instance, some speculate that in the future it will not suffice to instruct a driverless car to abide by the speed limit, because it will observe that other cars disregard the limit and follow suit.

Instead, we argue for an ambitious research program on automated AI oversight systems. As in the non-AI world, first-line operators are subject to oversight by a second layer. Workers have supervisors; businesses have accountants; school teachers have principals. We suggest that the time has come to develop AI oversight systems ("AI Guardians") that will seek to ensure that the various smart machines do not stray from the guidelines their programmers have provided. For the driverless car, this means adding to the AI program that drives the car a second program that will prevent it from speeding, tailgating, and so on. Such a program will also enable authorities to determine who is at fault when a crash occurs: the programmer, other cars and their drivers, or what the smart car learned and concluded on its own.
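To make the idea concrete, here is a minimal sketch in Python of how such a guardian layer might sit between a driving AI and the vehicle's controls, vetoing actions that violate its guidelines and keeping a log from which fault could later be reconstructed. Every name and limit here (AIGuardian, LearnedDrivingAI, the 65 mph ceiling) is a hypothetical illustration, not part of the authors' proposal:

```python
import time
from dataclasses import dataclass

# Hypothetical guardrails; real limits would come from local traffic law.
SPEED_LIMIT_MPH = 65
MIN_FOLLOWING_DISTANCE_M = 30

@dataclass
class Action:
    speed_mph: float
    following_distance_m: float

class GuardianLog:
    """Audit trail so investigators can reconstruct who was at fault."""
    def __init__(self):
        self.entries = []

    def record(self, proposed, approved, reason):
        self.entries.append((time.time(), proposed, approved, reason))

class AIGuardian:
    """Second-layer overseer: reviews each action the driving AI proposes."""
    def __init__(self, driving_ai, log):
        self.driving_ai = driving_ai
        self.log = log

    def next_action(self, sensor_data):
        proposed = self.driving_ai.propose_action(sensor_data)
        # Clamp the proposal back inside the programmers' guidelines.
        approved = Action(
            speed_mph=min(proposed.speed_mph, SPEED_LIMIT_MPH),
            following_distance_m=max(proposed.following_distance_m,
                                     MIN_FOLLOWING_DISTANCE_M),
        )
        reason = "ok" if approved == proposed else "clamped to guidelines"
        self.log.record(proposed, approved, reason)
        return approved

class LearnedDrivingAI:
    """Stand-in for a learning driver model that may drift from its rules."""
    def propose_action(self, sensor_data):
        # A learned policy might match surrounding traffic and exceed the limit.
        return Action(speed_mph=sensor_data["traffic_speed_mph"],
                      following_distance_m=20.0)

guardian = AIGuardian(LearnedDrivingAI(), GuardianLog())
print(guardian.next_action({"traffic_speed_mph": 75.0}))
# -> Action(speed_mph=65, following_distance_m=30)
```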

Some limited adjustment in laws may be called for, mainly in state laws. Thus, once there are a large number of driverless cars on the road and they are programmed to coordinate with one another, local authorities may wish to dedicate a special lane for these cars, allowing them to travel more rapidly than regular cars. And if one could obtain and enforce an international agreement not to deploy weapons that choose their own targets and cannot be recalled, this might well be a worthwhile measure.

Elon Musk, Stephen Hawking, and several AI researchers are concerned about something rather different: they fear that AI-equipped machines threaten to become so smart that they will surpass human intelligence. Next, they suggest, these instruments may well rebel against their makers and take over, if not destroy, the world. Hawking told the BBC's Rory Cellan-Jones that "Humans, who are limited by slow biological evolution, couldn't compete and would be superseded." He adds: "One can imagine [AI] outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." Nick Bostrom's favorite apocalyptic hypothetical involves a machine that has been programmed to make paper clips and keeps going, eventually deciding to turn everything on Earth, including the human race, into paper clips.

For now, such concerns belong in sci-fi movies. Moreover, these arguments may be used by contemporary Luddites to call for government regulation of a vibrant branch of science and technology just as it is taking off. President-elect Trump must understand that regulation may lead other nations to overtake us, just as AI is starting to make cars safer, enable surgery no human can carry out, and make smart homes more energy efficient.