Google Says It Wants Rules for the Use of AI—Kinda, Sorta

In a new white paper, Google suggests tech companies are best left to themselves on how to deploy AI, but highlights areas where government might help.

Last April, Google cofounder Sergey Brin wrote to shareholders with a warning about the potential downsides of artificial intelligence. In June, Google CEO Sundar Pichai released a set of guiding principles for the company's AI projects after employee protests forced him to abandon a Pentagon contract to develop algorithms for interpreting drone footage. Now Google has released a white paper that asks governments to suggest some rules for AI—but please, not too many!

As you might expect, the 30-page document Google released last week extols the power of artificial intelligence. “AI can deliver great benefits for economies and society, and support decision making which is fairer, safer and more inclusive and informed,” it says. The paper goes on to argue that the downsides of that awesome power can be avoided without additional regulation “in the vast majority of instances.”

Lawmakers and governments are showing a growing interest in imposing limits on uses of AI. A San Francisco politician recently proposed a ban on the use of facial recognition by city agencies, and French president Emmanuel Macron has talked about creating new regulations around the technology.

Charina Choi, Google’s global policy lead for emerging technologies, says one motivation of the report is to offer governments advice on where their input would be most useful. “We’ve been hearing a lot of governments say, ‘What can we do, practically speaking?’” says Choi, a coauthor of the report. For now, she says, the answer isn’t to immediately draft new rules on where and how AI algorithms can be used.

"At this time, it’s not necessarily super obvious what things should be regulated and what shouldn't,” Choi says. "The aim of this paper is to really think about: What are the types of questions that policymakers need to answer and [decisions] we as a society have to make?" To make those decisions, the paper says, input from civil society groups and researchers outside the industry will also be needed.

Areas where Google invites government rules or guidance include safety certifications for some products with AI inside, like the CE mark used to indicate compliance with safety standards on products in Europe. The white paper offers the example of smart locks that use biometric data, such as face images or thumbprints.

A safety mark might indicate that a lock’s AI has been tested to work accurately on a representative sample of people, the paper says. Studies have found that machine learning algorithms can pick up and even amplify societal biases, and that facial analysis algorithms perform better on white people than on people with darker skin. Experiments by the ACLU last year found that a facial recognition service Amazon has sold to police departments made more errors on black faces.

Google’s white paper comes amid calls for ethical and regulatory guardrails on uses of the technology from researchers, academics, and, more recently, even tech companies themselves. Amazon has said it is “very interested” in working with policymakers on guidance or legislation for facial recognition. Microsoft has gone further, calling for federal legislation on facial recognition, including a requirement for “conspicuous notice” where it’s in use.

Google’s paper is much broader in scope than Microsoft’s proposals on facial recognition, and considers more AI uses and concerns. It’s also more cautious, and doesn’t strongly advocate for specific new regulations. The search company champions self-regulation, highlighting how it has chosen not to offer a general-purpose facial recognition service—as Microsoft and Amazon do—due to concerns it could be used to “carry out extreme surveillance.” The paper also says Google has limited some of the AI research code it has released, to reduce the risk of misuse.

Google also asks for government guidance on when and how AI systems should explain their decisions—for example, when declaring that a person’s cancer appears to have returned. The document proposes that governments and civil society groups could set “minimum acceptable standards” for algorithmic explanations for different industries.

Google’s policy paper also muses on the challenge of balancing the roles of people and algorithms in making decisions; it suggests that humans should always be “meaningfully involved” in decisions involving criminal law or life-altering medical issues. The company also invites government to consider whether some AI regulation should in fact constrain humans, for example by barring them from turning off AI safety systems that may be more reliable than people.

People thinking about AI policy outside of Google say the company’s white paper is a positive but still preliminary step toward engaging with the challenges AI may pose to society.

Much discussion of AI ethics and policy from companies and governments has been too platitudinous and insufficiently practical, says Sandra Wachter, a researcher at the Oxford Internet Institute. “We need to move away from these high-level abstract ideas, where everybody says that AI should be fair,” she says.

Google’s paper shows the company attempting to talk more specifically, but doesn’t go very far, Wachter says. “I think it’s a good initial list. Where I’d say there is still a gap is how to govern those things.” In some cases, such as how AI systems explain critical decisions in areas like health, she advocates firm regulation, something that Google and other companies seem loath to consider. “With explanations, I don’t want to see a code of conduct, I want to see hard laws, because it’s a human rights issue,” Wachter says.

Google’s next moves will be watched closely. Eleonore Pauwels, who leads a project on AI governance at the United Nations University Centre for Policy Research, says the document is a good first step, but the company needs to prove it will lead somewhere.

Pauwels would like to see Google engage more meaningfully with outsiders about the uses and societal effects of the technology it is developing. The way Google scrambled to address public and employee outcry over its humanlike phone bots and the Pentagon project last year suggests this impulse doesn’t come naturally. Pauwels says health care, where Google is ramping up AI projects in search of new revenue streams, is an area of particular concern. “We’re going to see a lot of incredibly personal and intimate data used in new ways in those products,” she says.
