Tuesday, July 19, 2016

Nvidia’s $2 Billion Computer Chip May Make AI Much More Powerful

 
At its annual GPU Technology Conference this week, Nvidia unveiled a powerful and expensive new computer chip, which CEO Jen-Hsun Huang says can perform deep-learning neural network tasks 12 times faster than the company's previous best product. To deliver that speed, the Tesla P100 is packed with 15 billion transistors – about three times more than Nvidia's previous top-end chip. All of this progress came at a cost of $2 billion in research and development.

Nvidia is a powerhouse in artificial intelligence hardware.

The Santa Clara-based company's high-powered graphics processing units (GPUs) have helped drive the technology over the past few years, leading some analysts to ask, "Where are all the chip firms not named Nvidia?" With its proven success, deep learning – the branch of machine learning that enables systems to identify complex patterns in massive amounts of data by passing information through successive layers of simulated neurons – is where the company has concentrated its efforts.
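
To make "passing information through successive layers of simulated neurons" a little more concrete, here is a minimal, framework-free sketch of a forward pass through a small network. The layer sizes, random weights, and ReLU activation are illustrative choices only, not anything tied to Nvidia's hardware or software.

    import numpy as np

    def relu(x):
        # Simple nonlinearity applied between layers of "simulated neurons"
        return np.maximum(0, x)

    rng = np.random.default_rng(0)

    # Illustrative layer sizes: 784 inputs -> two hidden layers -> 10 outputs
    layer_sizes = [784, 256, 128, 10]
    weights = [rng.standard_normal((m, n)) * 0.01
               for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

    def forward(x):
        # Each layer is a matrix multiply followed by a nonlinearity --
        # exactly the dense linear algebra that GPUs accelerate.
        for w in weights[:-1]:
            x = relu(x @ w)
        return x @ weights[-1]

    batch = rng.standard_normal((32, 784))   # a batch of 32 example inputs
    print(forward(batch).shape)              # -> (32, 10)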

Chips serve as the physical "brain" for AI software, which tends to be limited by its hardware's computing power. The more complex a system's task, the more robust its hardware must be. The P100 may offer software companies unprecedented progress thanks to a significant boost in available power. Researchers at Facebook, Microsoft, and Baidu – to whom Nvidia has granted early access to the P100 – can expect to see the capabilities of their AI systems accelerate as the computing power they can tap multiplies.
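
A rough way to see what "available power" means for deep learning: the workload is dominated by large matrix multiplications, and the sketch below (with arbitrarily chosen matrix sizes) measures how quickly an ordinary CPU gets through one. Chips like the P100 are built to run exactly this kind of operation across thousands of cores in parallel; no GPU-specific code is assumed here.

    import time
    import numpy as np

    rng = np.random.default_rng(0)

    # One layer of a typical deep network: multiply a batch of activations
    # by a weight matrix. Deep learning spends most of its time here.
    batch   = rng.standard_normal((4096, 4096), dtype=np.float32)
    weights = rng.standard_normal((4096, 4096), dtype=np.float32)

    flops = 2 * 4096 ** 3            # multiply-adds in one matrix product

    start = time.perf_counter()
    _ = batch @ weights
    elapsed = time.perf_counter() - start

    print(f"{flops / elapsed / 1e9:.1f} GFLOP/s on this CPU")
    # A GPU built for deep learning gets through the same operation far
    # faster by executing these multiply-adds massively in parallel.
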
Along with its new chip, Nvidia showcased what Huang called "one beast of a machine" – the DGX-1, a supercomputer equipped with eight P100 chips and preinstalled with deep-learning software.

The company calls the device "the world's first deep learning supercomputer…engineered top to bottom for deep learning…" Though the supercomputer carries a $129,000 price tag, Nvidia hopes to promote it to research organizations and institutions by giving early units to teams at the University of California, Berkeley, Stanford, New York University, and MIT for trial runs.
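
The usual reason to put eight P100s in one box is data parallelism: each chip processes a slice of the training batch and the resulting gradients are combined. The sketch below imitates that pattern in plain NumPy with a toy linear model; the eight "devices" are ordinary array shards and the gradient function is a stand-in, not anything from Nvidia's DGX-1 software stack.

    import numpy as np

    rng = np.random.default_rng(0)
    NUM_DEVICES = 8                       # one shard per P100 in a DGX-1

    # Toy model: linear regression, so the gradient has a closed form.
    weights = np.zeros(16)
    inputs  = rng.standard_normal((1024, 16))
    targets = inputs @ rng.standard_normal(16)

    def gradient(w, x, y):
        # Gradient of mean squared error for the toy linear model.
        return 2 * x.T @ (x @ w - y) / len(y)

    # Data parallelism: split the batch into one shard per device,
    # compute each shard's gradient "locally", then average them --
    # the same all-reduce pattern multi-GPU training uses.
    x_shards = np.array_split(inputs, NUM_DEVICES)
    y_shards = np.array_split(targets, NUM_DEVICES)

    grads = [gradient(weights, xs, ys) for xs, ys in zip(x_shards, y_shards)]
    weights -= 0.1 * np.mean(grads, axis=0)

    print(weights[:4])   # weights move toward the true values after one step

Because the shards are equal sizes, the averaged gradient matches the full-batch gradient, which is why this scheme scales cleanly across chips.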

Why is Nvidia so focused on deep-learning hardware?
Over the past five years, deep learning with neural networks has proven to be one of the most successful machine learning methods. AlphaGo defeated world-class Go players using deep-learning algorithms. Powered by enormous neural networks, Microsoft's computer vision software has competed with – and sometimes even exceeded – humans' innate ability to recognize objects. Meanwhile, a deep-learning system developed by Baidu proved as good as humans at recognizing both English and Mandarin speech, though natural language processing remains one of the next great frontiers.
But deep learning demands power. 
As a leader in GPU manufacturing, Nvidia has a head start on other chipmakers. It also has the capital to devote to developing hardware as robust as the P100. As companies like Google, Facebook, Microsoft, and Baidu continue to take deep learning even deeper, Nvidia aims to position itself as the go-to provider of the powerful hardware that will drive the next innovations.

Image credit: Nvidia
