
Nvidia creates a 15B-transistor chip for deep learning

Tesla P100 datacenter chip.
Image Credit: Nvidia

Nvidia chief executive Jen-Hsun Huang announced that the company has created a new chip, the Tesla P100, with 15 billion transistors for deep-learning computing. It’s the biggest chip ever made, Huang said.

Huang made the announcement during his keynote at the GPU Technology Conference in San Jose, California. He unveiled the chip after noting that deep-learning artificial intelligence chips have already become the company's fastest-growing business.

“We are changing so many things in one project,” Huang said. “The Tesla P100 has five miracles.”


Above: The DGX-1 supercomputer from Nvidia

Image Credit: Nvidia

Nvidia previously launched its Tesla M4 and Tesla M40 deep-learning chips, and those chips are selling fast. The Tesla P100 is in volume production today, Huang said.

“We decided to go all-in on A.I.,” Huang said. “This is the largest FinFET chip that has ever been done.”

“Nvidia is taking a big risk with the P100 as it is different from anything they’ve ever done before in that it is for the data center first, not for games or a workstation,” said Patrick Moorhead, analyst at Moor Insights & Strategy. “It’s also risky as they have multiple new things happening, like a new process (16nm), new architecture (Pascal), new memory architecture (HBM2), and new interconnect (NVLink).”

Moorhead added, “The good news is that Nvidia says it is shipping the P100 to the key HPC OEMs (IBM, HPE, Dell, Cray), AI and cognitive cloud players, and key research institutions. If Nvidia can hit its performance claims, dates, and yields, this will be very, very positive for Nvidia in 2H 2016 and 1H 2017.”

The chip has 15 billion transistors, about three times as many as many of the processors and graphics chips on the market, and it measures 600 square millimeters. It can deliver 21.2 teraflops of half-precision performance. Huang said that several thousand engineers worked on it for years.

“Three years ago, when we went all in, it was a leap of faith,” Huang said. “If we build it, they will come. But if we don’t build it, they won’t come.”

Huang showed a demo from Facebook that used deep learning to train a neural network to recognize landscape paintings. The researchers then used the network to create its own landscape painting.
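Nvidia and Facebook did not detail how the demo was built, but generating an image from a trained recognition network is commonly done by optimizing the pixels of an input image to maximize the network's score for a target class. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the pretrained ImageNet classifier, the class index, and the optimization settings are all assumptions standing in for Facebook's actual system.

```python
# Hypothetical sketch: "dreaming up" an image from a trained classifier,
# in the spirit of the demo described above (not Facebook's actual code).
import torch
import torchvision.models as models
import torchvision.transforms as T

# Assumption: a generic ImageNet-pretrained classifier stands in for a
# network trained to recognize landscape paintings.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Start from random noise and optimize the pixels themselves.
img = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

# Assumption: ImageNet class 975 ("lakeside") as a stand-in "landscape" label.
TARGET_CLASS = 975

normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

for step in range(200):
    optimizer.zero_grad()
    logits = model(normalize(img.clamp(0, 1)))
    # Maximize the target class score; a small L2 penalty keeps pixel values tame.
    loss = -logits[0, TARGET_CLASS] + 1e-3 * img.pow(2).sum()
    loss.backward()
    optimizer.step()

# img now holds an image the classifier "sees" as the target class.
```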

He said that deep learning has become a new computing platform, and the company is working with hundreds of startups that plan to take advantage of it.

“Our strategy is to accelerate deep learning everywhere,” Huang said.

Nvidia has also built the DGX-1, a 170-teraflop supercomputer based on eight Tesla P100 chips.

“This is a beast of a machine, the densest computer ever made,” he said.


Above: Jen-Hsun Huang of Nvidia shows off the company’s Tesla M4 and Tesla M40 deep learning chips.

Image Credit: Nvidia
