Google's Learning Software Learns to Write Learning Software

Google’s researchers have taught machine-learning software to build machine-learning software, in a project dubbed AutoML.

White-collar automation has become a common buzzword in debates about the growing power of computers, as software shows potential to take over some work of accountants and lawyers. Artificial-intelligence researchers at Google are trying to automate the tasks of highly paid workers more likely to wear a hoodie than a coat and tie—themselves.

In a project called AutoML, Google’s researchers have taught machine-learning software to build machine-learning software. In some instances, what it comes up with is more powerful and efficient than the best systems the researchers themselves can design. Google says the system recently scored a record 82 percent at categorizing images by their content. On the harder task of marking the location of multiple objects in an image, an important capability for augmented reality and autonomous robots, the auto-generated system scored 43 percent. The best human-built system scored 39 percent.

Such results are significant because the expertise needed to build cutting-edge AI systems is in scarce supply—even at Google. “Today these are handcrafted by machine learning scientists and literally only a few thousands of scientists around the world can do this,” said Google CEO Sundar Pichai last week, briefly namechecking AutoML at a launch event for new smartphones and other gadgets. “We want to enable hundreds of thousands of developers to be able to do it.”

AutoML remains a research project. Somewhat ironically, getting it to work right now takes exactly the kind of rare AI expertise the technology seeks to automate. But a growing number of researchers outside Google are working on this technology, too. If AI-made AI becomes practical, machine learning could spread beyond the tech industry much faster, into fields such as healthcare and finance.

At Google, AutoML could accelerate Pichai’s “AI first” strategy, through which the company is using machine learning to run more efficiently and create new products. Researchers from the company’s Google Brain group and DeepMind, the London-based research lab it acquired in 2014, have helped slash power bills in company data centers and speed up Google’s ability to map new cities, for example. AutoML could make those experts more productive, or help less-skilled engineers build powerful AI systems by themselves.

Google lists just over 1,300 people on its research website, not all of whom specialize in AI. It has many thousands more software engineers. Google parent Alphabet has 27,169 employees engaged in research and development, according to its most recent annual financial filing.

Google declined to make anyone available to discuss AutoML. Researchers outside the company say the idea of automating some work of AI experts has become a research hotspot—and is needed as AI systems become more complex.

Much work in what is called metalearning or learning to learn, including Google’s, is aimed at speeding up the process of deploying artificial neural networks. That technique involves feeding data through networks of math operations loosely inspired by studies of neurons in the brain.
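
For readers unfamiliar with the jargon, a neural network really is just stacked arithmetic. The toy sketch below, written in Python with made-up sizes and random weights, shows the kind of math operations the article is referring to; it is purely illustrative and has nothing to do with Google’s actual systems.

```python
# Minimal sketch of the "networks of math operations" mentioned above:
# a two-layer neural network is matrix multiplications interleaved with
# simple nonlinearities. The sizes and weights here are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # a toy input of 4 numbers
W1 = rng.normal(size=(8, 4))      # first layer: 8 "neurons", 4 inputs each
W2 = rng.normal(size=(3, 8))      # second layer: 3 outputs

hidden = np.maximum(0, W1 @ x)    # apply the weights, then a ReLU nonlinearity
output = W2 @ hidden              # final layer produces 3 scores
print(output)
```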

That may sound highly sophisticated, but a good part of getting neural networks to perform useful tricks like processing audio comes down to well-paid grunt work. Experts must use instinct and trial and error to discover the right architecture for a neural network. “A large part of that engineer’s job is essentially a very boring task, trying multiple configurations to see which ones work better,” says Roberto Calandra, a researcher at the University of California, Berkeley. The challenge is getting harder, he says, because researchers are building larger networks to tackle tougher problems.
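
The “boring task” Calandra describes looks roughly like the loop below: train the same kind of network under several configurations and keep whichever scores best on held-out data. This is an illustrative sketch, not anything Google or Calandra actually ran; the dataset, the library (scikit-learn), and the candidate configurations are arbitrary choices.

```python
# Illustrative sketch of manual trial and error over network configurations:
# try each candidate, measure accuracy on held-out data, keep the best.
from itertools import product

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

best_score, best_config = 0.0, None
for hidden, lr in product([(32,), (64,), (64, 32)], [1e-3, 1e-2]):
    net = MLPClassifier(hidden_layer_sizes=hidden, learning_rate_init=lr,
                        max_iter=300, random_state=0)
    net.fit(X_train, y_train)
    score = net.score(X_test, y_test)  # accuracy on the held-out split
    if score > best_score:
        best_score, best_config = score, (hidden, lr)

print(f"best config: {best_config}, accuracy: {best_score:.3f}")
```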

Calandra began researching metalearning after spending two frustrating weeks trying to get a robot to learn to walk during his PhD studies in 2013. He tried an experimental approach that automatically tuned the robot’s software, built on a machine-learning method less complex than a neural network. The recalcitrant machine walked within a day.

Generating a neural-network design from scratch is harder than tweaking the settings of one that already exists. But recent research results suggest it’s getting closer to becoming practical, says Mehryar Mohri, a professor at NYU.

Mohri is working on a system called AdaNet, in a collaboration that includes researchers at Google’s New York office. When given a collection of labeled data, it builds a neural network layer by layer, testing each addition to the design to ensure it improves performance. AdaNet has proved capable of generating neural networks that can accomplish a task as well as a standard, hand-built network twice their size. That’s promising, says Mohri, because many companies are trying to cram more powerful AI software onto mobile devices with limited resources.
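
A drastically simplified version of that grow-and-test idea is sketched below: add one hidden layer at a time and keep it only if accuracy on a validation set improves. The real AdaNet uses a more principled objective and candidate subnetworks; the library, layer width, and stopping rule here are assumptions made purely for illustration.

```python
# Simplified illustration of growing a network layer by layer and keeping
# each new layer only if it helps validation accuracy. Not the actual
# AdaNet algorithm; library, widths, and stopping rule are assumptions.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

def score(layers):
    net = MLPClassifier(hidden_layer_sizes=layers, max_iter=300, random_state=0)
    net.fit(X_train, y_train)
    return net.score(X_val, y_val)

layers = (32,)                 # start with a single small hidden layer
best = score(layers)
while len(layers) < 5:         # cap the depth so the loop terminates
    candidate = layers + (32,) # propose one more hidden layer
    candidate_score = score(candidate)
    if candidate_score <= best:  # stop once a new layer no longer helps
        break
    layers, best = candidate, candidate_score

print(f"final architecture: {layers}, validation accuracy: {best:.3f}")
```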

Making it easier to generate and deploy complex AI systems might come with drawbacks. Recent research has shown that it is all too easy to accidentally make systems with a biased view of the world, one that treats “Mexican” as a bad word, for example, or that tends to associate women with domestic chores. Mohri argues that reducing the tedious hand-tuning required to make use of neural networks could make it easier to detect and prevent such problems. “It’s going to make people’s hands more free to tackle other aspects of the problem,” he says.

If and when Google gets AutoML working well enough to be a practical tool for programmers, its effects could be felt beyond the company itself. Pichai hinted last week that he wanted to make the tool available outside of Google. “We want to democratize this,” he said, echoing the lofty language used to promote AI services offered by the company’s cloud computing unit.