Nvidia is connecting the dots within its data centers with the debut of a new platform designed to tackle machine learning more efficiently.
The platform has two objectives. The first is to encourage developers to build deep learning networks and smart apps rooted in artificial intelligence techniques. That leads into the second, which is offering new accelerators for deploying deep neural networks across data centers.
Suggesting opportunities abound from cars to healthcare, Nvidia CEO and co-founder Jen-Hsun Huang went so far as to describe machine learning as “the grand computational challenge of our generation.”
“The artificial intelligence race is on,” Huang wrote in Tuesday’s announcement. “Machine learning is unquestionably one of the most important developments in computing today, on the scale of the PC, the internet and cloud computing.”
Nvidia now has two accelerator options on the table for these tasks: the higher-end Tesla M40 GPU for deploying deep neural networks, and the lower-power M4, a smaller form-factor GPU crafted for machine learning as well as routine streaming image and video processing.
Nvidia is also putting together a suite of tools curated for developers and data center managers, most of which rely on GPU-powered technologies for processing, resizing and transcoding images and video.
Huang had already snuck in a few hints about the graphics chip maker’s long-term plans in the company’s solid third quarter earnings report published last week. “Virtual reality, deep learning, cloud computing and autonomous driving are developing with incredible speed, and we are playing an important role in all of them,” Huang wrote in prepared remarks. The Silicon Valley giant has been pursuing its enterprise data center and machine learning ambitions aggressively over the last year.
Last November, Nvidia introduced a super-charged Tesla GPU accelerator, touted to be the fastest to date upon release as well as more capable of handling complex analytics and scientific computing. This spring, Huang unveiled several new technologies for advancing deep learning at the GPU Technology Conference. Following the Titan X platform for gaming, the Pascal GPU series arrived with the promise of speeding up deep learning applications tenfold compared to Nvidia’s previous Maxwell processors.
In August, Nvidia bolstered its Grid platform for virtual desktops and applications with the debut of version 2.0, promising delivery of the most graphics-intensive apps yet while doubling both the performance and user density of its predecessor, now allowing up to 128 users per server.
Nvidia also tapped Microsoft Azure as the first cloud-services provider to offer Grid 2.0 capabilities and accelerated computing. The Tesla M40 GPU accelerator and Hyperscale Suite software are scheduled to roll out before the end of 2015, while the M4 GPU is pegged to ship during the first quarter of 2016.