Nvidia's graphics brawn powers supercomputing brains
Nvidia, trying to move its graphics chips into the supercomputing
market, has found a niche helping engineers build brain-like systems
called neural networks.
For years, the company has advocated the idea of offloading processing tasks from general-purpose central processing units (CPUs) to its own graphics processing units (GPUs). That approach has won over some researchers and companies involved with neural networks, which reproduce some of the electrical behavior of real-world nerve cells inside a computer.
Neurons in the real world work by sending electrical signals around the brain, but much of the actual functioning of the brain remains a mystery. Neural networks in computers, somewhat perversely, emulate this mysteriousness. Instead of running explicit programming instructions to perform a particular job, they're "trained" by processing source data, which creates communication patterns among the many nodes in the network. The trained neural network can then be used to recognize patterns -- or cat pictures, as in one Google research project that's now commercialized as part of Google+ photos.
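To make the "training" idea concrete, here is a minimal sketch -- not code from Nvidia, Google, or Nuance; the tiny XOR task, the network size, and all variable names are illustrative assumptions. The network is never given explicit rules for the pattern; it simply adjusts its internal weights as it repeatedly sees example data, and the heavy lifting is the kind of matrix arithmetic GPUs accelerate.

```python
# Minimal illustrative sketch of training a tiny neural network (assumed
# example, not any company's production code).
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: four input patterns and the labels to be recognized (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two small weight matrices -- the "communication patterns among nodes".
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: each layer is a matrix multiply plus a nonlinearity,
    # which is exactly the arithmetic GPUs are built for.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: nudge the weights to reduce the prediction error.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

# After training, the outputs should be close to the XOR labels 0, 1, 1, 0,
# even though the rule was never written down anywhere in the code.
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```

Production systems of the sort described in this article work the same way in principle, but with millions of nodes and vastly larger data sets, which is where GPU acceleration becomes valuable.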
One Nvidia customer is Nuance, which uses neural networks to develop speech recognition systems that ultimately end up in places like cars or tech support phone lines. "We have been working with GPUs for over four years, but the recent models -- specifically the 'Kepler' line from Nvidia -- are providing the most substantial benefits," said Nuance Chief Technology Officer Vlad Sejnoha in a statement. "We use a large-scale computing grid composed of a mixture of CPUs and GPUs, and are achieving an order of magnitude speedup over pure CPU-based baselines."
Neural network experts at Stanford University -- including Andrew Ng, who's worked on neural networks at Google -- have been working on marrying GPUs to neural networks. In a paper (PDF) for the International Conference on Machine Learning, they describe their work to get around thorny issues of getting the right data to the right GPU.
"Attempting to build large clusters of GPUs is difficult due to communications bottlenecks," they wrote in the paper, but the researchers' approach "might reasonably be packaged into optimized software libraries" to help others with the problem.
High-performance computing is in the news with the International Supercomputing Conference in Leipzig, Germany, this week.
GPUs are particularly well suited to doing large numbers of calculations that can take place in parallel. CPUs such as Intel's Core line are generally designed for tasks that run sequentially instead of being split into independent chunks, though multicore models of the last decade are increasingly parallel.
Still, general-purpose CPUs are not as parallel as GPUs, and Nvidia has made inroads into the Top500 list of the fastest supercomputers, with GPUs giving 39 machines a processing boost.
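As a rough illustration of that difference -- an assumed example, not benchmark code from the article or from Nvidia -- the sketch below compares a sequential Python loop with a vectorized NumPy operation over the same data. NumPy here still runs on the CPU, but the shape of the work, the same simple arithmetic applied independently to millions of values, is the data-parallel pattern that GPUs spread across thousands of cores.

```python
# Illustrative comparison of sequential vs. data-parallel-style computation
# (assumed example; timings will vary by machine).
import time
import numpy as np

values = np.linspace(0.0, 1.0, 5_000_000)

# Sequential version: one value at a time, the way a single CPU core works.
start = time.perf_counter()
sequential = [v * v + 1.0 for v in values]
t_seq = time.perf_counter() - start

# Vectorized version: the whole array in one call -- the pattern that
# GPU libraries accelerate by running many elements at once.
start = time.perf_counter()
vectorized = values * values + 1.0
t_vec = time.perf_counter() - start

print(f"sequential loop: {t_seq:.2f}s, vectorized: {t_vec:.4f}s")
```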