
Intel's 3-point plan for coming out on top of the AI chip war

Gadi Singer, VP of Intel's Artificial Intelligence Products Group, lays out the chip giant's AI strategy and explains what's driving it.
Written by Stephanie Condon, Senior Writer

Artificial intelligence, one could argue, is in its awkward teen phase. Since the technology's emergence in the marketplace, fueled in part by advances in computational power and the availability of rich data, AI solutions have been cropping up everywhere.

"Deep learning is moving from its illustrious childhood in 2014 and 2015 to coming of age in 2019 and 2020," Gadi Singer, vice president of Intel's Artificial Intelligence Products Group, said to ZDNet.

Gadi Singer, VP of Intel's Artificial Intelligence Products Group. (Image: Adam Bacher)

The current AI growth spurt includes a wide array of chip solutions, from traditional chipmakers like Intel to cloud companies like Google to a whole slew of startups. As the market matures over the next few years, it will evolve toward Intel's strengths, Singer argued.

"As the space goes towards more inference, broader deployment, real-life solutions with an emphasis on latency and TCO... all those aspects are really bringing into play Intel's strengths of many years," Singer said.

Singer laid out how Intel views the AI chip market and how it plans to come out on top:

Three changes in AI driving Intel's strategy

Since deep learning began breaking through around 2014, there have been three major trends in the industry, Singer said, that Intel has responded to. First, deep learning is developing a much richer set of capabilities and collecting richer data -- image recognition, for instance, has gone from identifying cats to identifying potentially malignant cells in a 3D image.

Next, organizations are moving from training and proof-of-concept deployments to inference. With this transition comes more focus on TCO, Singer said.

The third trend is the emergence of deep learning frameworks. Now-dominant frameworks like Caffe and TensorFlow either didn't exist or were in very early stages just a few years ago. This prompted a shift from proprietary solutions to high-scale aggregated solutions, Singer said.

Part 1: Building up new talent and technologies

In anticipation of more mature deep learning deployments, Intel has done several things, starting with bringing in more talent and new technologies. In the past few years, Intel has acquired Nervana, Movidius, and Mobileye, all following its 2015 acquisition of Altera.

With these acquisitions under its belt, Intel is focusing on building up both hardware and software, Singer said. On the hardware front, that means continuing to "boost Xeon as the bedrock of AI," he said.

Boosting Xeon has also had a lot to do with software, Singer noted. "There was an effort over the last couple years to optimize the software stack and take advantage of the inherent parallelism within Xeon hardware," he said.
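To make that software-stack point concrete, here is a minimal, hypothetical sketch of the kind of threading knobs exposed through the TensorFlow 1.x API of the era. The thread counts are placeholders, not Intel's recommendations, and Intel's MKL-DNN-backed builds also honor OpenMP environment variables such as OMP_NUM_THREADS:

```python
import tensorflow as tf  # TensorFlow 1.x API, contemporary with this article

# Placeholder thread counts; sensible values depend on core count and workload.
config = tf.ConfigProto(
    intra_op_parallelism_threads=16,  # threads available to a single op
    inter_op_parallelism_threads=2,   # how many ops may run concurrently
)

a = tf.random_normal([1024, 1024])
b = tf.random_normal([1024, 1024])
c = tf.matmul(a, b)  # dense math that parallelizes across Xeon cores

with tf.Session(config=config) as sess:
    print(sess.run(c).mean())
```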

Making the case for Xeon:

Even with the introduction of GPUs and specialized accelerators, Xeon will play a "primary role," Singer said.

For one thing, it's simply more efficient for cloud service providers and enterprises looking at overall TCO. Additionally, he said, Intel has enhanced Xeon with AVX and other extensions to help with concurrent workloads. The company continues to enhance the chip with machine learning capabilities that will be "particularly effective for blended workloads" that combine neural network compute and more traditional compute.
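Singer's "blended workloads" idea is straightforward to picture in code: dense linear algebra that maps onto the CPU's vector units (AVX and its successors), interleaved with scalar, branch-heavy logic. The following NumPy sketch is purely illustrative; the shapes, sigmoid, and threshold are hypothetical, not an Intel workload:

```python
import numpy as np

def blended_step(features, weights, threshold=0.5):
    # Neural-network portion: a dense matmul plus sigmoid -- work that
    # the underlying BLAS dispatches to SIMD (AVX-class) kernels on Xeon.
    scores = 1.0 / (1.0 + np.exp(-(features @ weights)))
    # "Traditional" portion: data-dependent branching and bookkeeping,
    # the scalar control flow general-purpose cores are built for.
    flagged = [i for i, s in enumerate(scores) if s > threshold]
    return scores, flagged

rng = np.random.default_rng(0)
scores, flagged = blended_step(rng.standard_normal((8, 16)),
                               rng.standard_normal(16))
print(f"{len(flagged)} of {len(scores)} rows above threshold")
```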

As an example, Singer pointed to the "attention" function in machine translation that refers back to a larger data set when translating a more targeted data set. The function, Singer said, effectively mimics "when you're trying to understand an image or word in the context of what's already in your brain."

"This part of the compute can run very well on the general purpose of the CPU," he said.

Overall, Singer said, Xeon is competitive "because of its flexibility and because of the blending of some of the real-life workloads going forward."

Part 2: Creating a diverse product line

Intel is building an "array of solutions for very different needs," Singer said. "Sometimes it's performance, sometimes it's power efficiency, sometimes it's latency."

In addition to Xeon, that includes Movidius VPUs and FPGAs, the latter of which power AI services on Microsoft Azure. The first commercial Nervana processors will be out next year.

What about specialized chips produced by cloud companies?

Some of Intel's big customers are designing their own chips for their own needs -- Google, for instance, has the TPU. This development makes sense: After all, who would know better about Google's AI processing needs than Google itself?

Singer countered that Intel's product portfolio addresses cloud providers' current compute needs, as well as their emerging needs. Developing the best solutions, he said, is "about being able to understand compute trends and build leadership solutions. We believe we can do that by working with cloud service providers and the industry at large."

Furthermore, Intel can provide tight system integration, Singer said. "That's something that brings a lot of value to the cloud service providers, and they will look at how their internal solutions match up with the products and solutions we provide. This will be worked out over the next few years."

Part 3: System integration

System integration is the third part of Intel's strategy for winning the AI chip war.

"Most tasks, when you say there's a GPU or an accelerator that does the task, there's actually a workload that requires a combination... to solve it," Singer explained. Intel, he said, is focused on "how to create the best system solutions because that is what will eventually scale, will provide real-life solutions to customers."
