Nvidia has unveiled several updates to its deep-learning computing platform, including a powerful GPU and supercomputer
- Upgrades to the Tesla V100 data center GPU platform
- Software improvements that boost the performance of GPU-accelerated deep-learning inference
- A new partnership with ARM that will integrate Nvidia Deep Learning Accelerator technology into ARM-based IoT chips
In 2008, researchers started using GPUs made by Nvidia and AMD to handle work typically performed by microprocessors. For highly parallel workloads, the many-core processors built into graphics cards made by these two companies offered clear advantages over the x86 CPUs championed by Intel.
At this year’s GPU Technology Conference (GTC) in San Jose, Nvidia CEO Jensen Huang announced significant improvements to Nvidia’s GPU hardware and platform performance and unveiled the DGX-2, a new computer for researchers who are “pushing the outer limits of deep-learning research and computing” to train artificial intelligence.
The computer, which is based on the Volta architecture with its CUDA Tensor Cores, will ship later this year and is billed as the world’s first system to deliver two petaflops of deep-learning performance.
The DGX-2 introduces Nvidia’s new NVSwitch, which enables 300 GB/s chip-to-chip communication, 12 times the speed of PCIe. Combined with NVLink 2, this allows sixteen GPUs to be grouped together in a single system, with total bandwidth exceeding 14 TB/s.
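The headline numbers above can be sanity-checked with simple arithmetic. A minimal sketch, assuming (these figures are not from the article) roughly 125 teraflops of tensor performance per Tesla V100 and roughly 900 GB/s of aggregate link bandwidth per GPU:

```python
# Back-of-the-envelope check of the DGX-2 figures quoted above.
# Assumed (not stated in the article): ~125 TFLOPS of tensor
# performance per V100, ~900 GB/s aggregate link bandwidth per GPU.
V100_TENSOR_TFLOPS = 125      # assumed per-GPU tensor performance
PER_GPU_BANDWIDTH_GBS = 900   # assumed per-GPU aggregate bandwidth
NUM_GPUS = 16                 # GPUs in a DGX-2

# 16 GPUs x 125 TFLOPS = 2,000 TFLOPS = 2 petaflops
total_pflops = V100_TENSOR_TFLOPS * NUM_GPUS / 1000
print(f"{total_pflops:.1f} petaflops")  # 2.0 petaflops

# 16 GPUs x 900 GB/s = 14,400 GB/s, i.e. "beyond 14 TB/s"
total_tbs = PER_GPU_BANDWIDTH_GBS * NUM_GPUS / 1000
print(f"{total_tbs:.1f} TB/s")  # 14.4 TB/s
```

Under those assumptions, the quoted two-petaflops and 14 TB/s figures both fall out directly.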
Nvidia is soaring right now, and there’s a lot to celebrate. Its stock is hovering around its all-time high of $242.
Operators of distributed ledgers and cryptocurrency mining operations favour the parallel processing found in GPUs, and Nvidia’s products are among the most popular in this market.
This popularity has created a significant shortage of graphics cards, often leaving developers and gamers unable to purchase Nvidia’s latest products.
Nvidia also announced it will be bringing its open-source Nvidia Deep Learning Accelerator (NVDLA) to ARM’s upcoming Project Trillium platform, which is focused on mobile artificial intelligence (AI). Specifically, NVDLA will help developers by accelerating inferencing: the process of using trained neural networks to perform specific tasks.
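Inferencing, as described above, means running inputs through a network whose weights are already fixed by training, with no learning involved. A minimal sketch in plain Python (the weights here are made up for illustration, not from any real model):

```python
import math

# Minimal sketch of inference: the "trained" weights are fixed and we
# only run a forward pass. These weights are hypothetical -- in a real
# deployment they would come from a prior training run.
WEIGHTS = [0.8, -0.4]  # hypothetical trained weights
BIAS = 0.1             # hypothetical trained bias

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def infer(features):
    """Forward pass only: no gradients, no weight updates."""
    z = sum(w * f for w, f in zip(WEIGHTS, features)) + BIAS
    return sigmoid(z)

score = infer([1.0, 2.0])
print(score)  # a probability-like score between 0 and 1
```

Hardware such as NVDLA accelerates exactly this kind of fixed forward-pass computation, which is far cheaper than training.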
The collaboration will make it simple for IoT chip companies to integrate AI into their designs and help put intelligent, affordable products into the hands of billions of consumers worldwide.
While it’s a surprising move for Nvidia, which typically relies on its own closed platforms, it makes a lot of sense. Nvidia already relies on ARM designs for its Jetson and Tegra systems. If it’s going to make any sort of impact on the mobile and IoT world, it needs to work together with ARM, which dominates those arenas. And ARM could use Nvidia’s technology to prove just how capable its upcoming chip platform will be.