
An AI speed test shows clever coders can still beat tech giants like Google and Intel

Robots Fight At ROBO-ONE In Tokyo
Photo by Matt Roberts/Getty Images

There is a common narrative in the world of AI that bigger is better. To train the fastest algorithms, the thinking goes, you need the most expansive datasets and the beefiest processors. Just look at Facebook’s announcement last week that it created one of the most accurate object recognition systems in the world using a dataset of 3.5 billion images. (All taken from Instagram, naturally.) This narrative benefits tech giants, helping them attract talent and investment, but a recent AI competition organized by Stanford University shows the conventional wisdom isn’t always true. Fittingly enough for the field of artificial intelligence, it turns out brains can still beat brawn.

The proof comes from the DAWNBench challenge, which Stanford researchers announced last November and whose winners were declared last week. Think of DAWNBench as an athletics meet for AI engineers, with hurdles and long jump replaced by tasks like object recognition and reading comprehension. Teams and individuals from universities, government departments, and industry competed to design the best algorithms, with Stanford’s researchers acting as adjudicators. Each entry had to meet basic accuracy standards (for example, recognizing 93 percent of dogs in a given dataset) and was judged on metrics like how long it took to train an algorithm and how much it cost.
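For the technically minded, the core metric here is essentially “time to a target accuracy.” The sketch below shows one way such a measurement could work; the training and evaluation functions, the epoch cap, and the hourly machine price are placeholders for illustration, not DAWNBench’s actual harness.

```python
import time

TARGET_ACCURACY = 0.93   # e.g. the 93 percent threshold mentioned above
HOURLY_COST_USD = 3.00   # hypothetical cloud price for the machine being timed

def time_to_accuracy(model, train_one_epoch, evaluate, max_epochs=100):
    """Train until validation accuracy hits the target, then report
    wall-clock time and a rough dollar cost for the run."""
    start = time.time()
    for epoch in range(max_epochs):
        train_one_epoch(model)       # one full pass over the training data
        accuracy = evaluate(model)   # accuracy on a held-out validation set
        if accuracy >= TARGET_ACCURACY:
            hours = (time.time() - start) / 3600
            return {"epochs": epoch + 1,
                    "hours": hours,
                    "estimated_cost_usd": hours * HOURLY_COST_USD}
    raise RuntimeError("target accuracy not reached within max_epochs")
```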

These metrics were chosen to reflect the real-world demands of AI, explain Stanford’s Matei Zaharia and Cody Coleman. “By measuring the cost [...] you can find out, if you’re a smaller group, if you need Google-level infrastructure to compete,” Zaharia tells The Verge. And by measuring training speed, you know how long it takes to implement an AI solution. In other words, these metrics help us judge whether small teams can take on the tech giants.

The results don’t give a straightforward answer, but they suggest that raw computing power isn’t the be-all and end-all for AI success. Ingenuity in how you design your algorithms counts for at least as much. While big tech companies like Google and Intel had predictably strong showings in a number of tasks, smaller teams (and even individuals) ranked highly by using unusual and little-known techniques.

Take, for example, one of DAWNBench’s object recognition challenges, which required teams to train an algorithm that could identify items in a picture database called CIFAR-10. Databases like this are common in AI, and are used for research and experimentation. CIFAR-10 is a relatively old example, but mirrors the sort of data a real company might expect to deal with. It contains 60,000 small images, just 32 pixels by 32 pixels in size, with each picture falling into one of ten categories such as “dog,” “frog,” “ship,” or “truck.”
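To give a sense of what that data looks like in practice, the snippet below loads CIFAR-10 with the widely used torchvision library; it is purely illustrative and isn’t drawn from any competitor’s entry.

```python
import torchvision
import torchvision.transforms as transforms

# Download the CIFAR-10 training split: 50,000 of the 60,000 images
# (the remaining 10,000 make up the test split).
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True,
    transform=transforms.ToTensor())

image, label = train_set[0]
print(image.shape)               # torch.Size([3, 32, 32]): tiny 32x32 color images
print(train_set.classes[label])  # one of ten categories, e.g. "frog"
print(len(train_set))            # 50000
```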

“world class results using basic resources.”

In DAWNBench’s league tables, the top three spots for fastest and cheapest algorithms to train were all taken by researchers affiliated with one group: Fast.AI. Fast.AI isn’t a big research lab, but a non-profit group that creates learning resources and is dedicated to making deep learning “accessible to all.” The institute’s co-founder, entrepreneur and data scientist Jeremy Howard, tells The Verge that his students’ victory was down to thinking creatively, and that this shows that anyone can “get world class results using basic resources.”

Howard explains that to create an algorithm for solving CIFAR, Fast.AI’s group turned to a relatively obscure technique known as “super convergence.” This wasn’t developed by a well-funded tech company or published in a big journal, but was created and self-published by a single engineer named Leslie Smith working at the Naval Research Laboratory.

Essentially, super convergence works by ramping up the rate at which an algorithm learns (its “learning rate”) to unusually high values early in training, then easing it back down toward the end. Think of it like this: if you were teaching someone to identify trees, you wouldn’t start by showing them a forest. You’d begin gently, with individual species and leaves, and pick up the pace once the basics had sunk in. This is a bit of a simplification, but the upshot is that by using super convergence, Fast.AI’s algorithms were considerably speedier than the competition’s. They were able to train an algorithm that could sort CIFAR with the required accuracy in just under three minutes. The next fastest team that didn’t use super convergence took more than half an hour.
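For readers who want to see the mechanics, a one-cycle learning-rate schedule of the kind Smith describes is now built into PyTorch. The sketch below is an illustration only: the toy model, optimizer settings, and step count are placeholders, not Fast.AI’s actual training code.

```python
import torch
from torch.optim.lr_scheduler import OneCycleLR

# A toy linear model stands in for a real image classifier.
model = torch.nn.Linear(32 * 32 * 3, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# One cycle: the learning rate climbs to an unusually large peak (max_lr)
# over roughly the first 30 percent of steps, then anneals back down.
total_steps = 1000  # placeholder for epochs * batches_per_epoch
scheduler = OneCycleLR(optimizer, max_lr=1.0, total_steps=total_steps)

for step in range(total_steps):
    # ... forward pass, loss.backward() and so on would go here ...
    optimizer.step()   # a no-op in this sketch, since no gradients are computed
    scheduler.step()   # advance the learning-rate schedule by one step

    if step % 250 == 0:
        print(step, scheduler.get_last_lr())  # watch the rate rise, then fall
```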

It didn’t all go Fast.AI’s way though. In another challenge using object recognition to sort through a database called ImageNet, Google romped home, taking the top three positions in training time, and the first and second in training cost. (Fast.AI took third place in cost and fourth place in time.) However, Google’s algorithms were all running on the company’s custom AI hardware: chips designed specially for the task, known as Tensor Processing Units, or TPUs. In fact, for some of the tasks Google used what it calls a TPU “pod,” which is 64 TPU chips running in tandem. By comparison, Fast.AI’s entries used regular Nvidia GPUs running off a single, bog-standard PC: hardware that’s more readily available to all.

Google’s Tensor Processing Units (or TPUs) are specialized chips available only from Google.
Photo: Google

“The fact that Google has a private infrastructure that can train things easily is interesting but perhaps not completely relevant,” says Howard. “Whereas, finding out you can do much the same thing with a single machine in three hours for $25 is extremely relevant.”

These ImageNet results are revealing precisely because they’re ambiguous. Yes, Google’s hardware reigned supreme, but is that a surprise when we’re talking about one of the richest tech companies in the world? And yes, Fast.AI’s students came up with a creative solution, but Google’s approach was ingenious in its own way. One of the company’s entries made use of what it calls “AutoML” — a set of algorithms that search for the best algorithm for a given task without human direction. In other words, AI that designs AI.

The challenge of understanding these results is not just a matter of finding out who’s best — they have clear social and political implications. For example, consider the question of who controls the future of artificial intelligence. Will it be big tech companies like Amazon, Facebook, and Google, who will use AI to increase their power and wealth, or will the benefits be more evenly and democratically available?

For Howard, these are crucial questions. “I don’t want deep learning to remain the exclusive venue of a small number of privileged people,” he says. “It really bothers me, talking to young practitioners and students, this message that being big is everything. It’s a great message for companies like Google because they get to recruit folks because they believe that unless you go to Google you can’t do good work. But it’s not true.”

Will AI’s power be controlled by big tech or distributed evenly?

Sadly, we can’t be AI soothsayers. No one can predict the future of the industry by examining the bones of the DAWNBench challenge. And indeed, if the results of this competition show anything, it’s that this is a field still very much in flux. Will small and nimble algorithms decide the future of AI or will it be raw computing power? Nobody can say, and expecting a simple answer would be unreasonable anyway.

Zaharia and Coleman, two of the DAWNBench organizers, say they’re just happy to see the competition provoke such a range of responses. “There was a tremendous amount of diversity,” says Coleman. “I’m not too worried about [one company] taking over the industry just based on what’s happened with deep learning. We’re still at a time where there’s an explosion of frameworks going on [and] a lot of sharing of ideas.”

The pair point out that although it was not a criterion for the competition, the vast majority of entries to DAWNBench were open-sourced. That means their underlying code was posted online, where anyone can examine it, implement it, and learn from it. That way, they say, whoever wins DAWNBench’s challenges, everybody benefits.

Update May 7th, 10:30AM ET: Updated to clarify that Google’s entry to the ImageNet competition in DAWNBench was done on a TPU pod, not a single TPU.