Over the past couple of years, every major cloud vendor has added GPU resources as standard issue. In almost every case, those GPUs come from Nvidia, the premier supplier of GPUs for high-performance computing and the source of the de facto software standards for programming them.

Yesterday, Nvidia unveiled the Nvidia GPU Cloud, a new plan for leveraging GPUs wherever they may reside. The details are still vague, but it’s clearly been built to aid developing the apps now most closely associated with GPU acceleration: machine learning and artificial intelligence.

A guided deep learning experience

The details come mostly from the company’s press release. The phrasing throughout indicates that GPU Cloud amounts to an end-to-end software stack for deep learning. It features many common frameworks for GPU-accelerated machine learning—Caffe, Caffe2, CNTK, MXNet, TensorFlow, Theano, and Torch—along with Nvidia-specific tools for deep learning, including support for running the above in Docker containers.

There’s been a growing need for applications that provide complete workflows for machine learning, so that data ingestion, normalization, model training, and prediction generation are all handled through a single consistent pipeline. Nvidia GPU Cloud sounds like at least partial fulfillment of that ideal, “a simple application that guides people through deep learning workflow projects across all system types,” as the company puts it in its press notes.
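Nvidia hasn’t published any APIs for GPU Cloud yet, so what such a unified pipeline would look like in practice is an open question. As a rough, framework-free illustration of the idea—every class and method name below is hypothetical, not part of any Nvidia product—a single consistent pipeline might wrap ingestion, normalization, training, and prediction like this:

```python
class Pipeline:
    """Toy end-to-end workflow: ingestion, normalization, training, and
    prediction behind one consistent interface. All names here are
    hypothetical -- Nvidia has not published a GPU Cloud API."""

    def __init__(self):
        self.mean = 0.0
        self.scale = 1.0
        self.w = 0.0  # a tiny linear model stands in for a real network
        self.b = 0.0

    def ingest(self, records):
        # Pull raw (feature, label) pairs from any source into memory.
        return [(float(x), float(y)) for x, y in records]

    def normalize(self, data):
        # Center and scale features so training behaves consistently.
        xs = [x for x, _ in data]
        self.mean = sum(xs) / len(xs)
        self.scale = max(abs(x - self.mean) for x in xs) or 1.0
        return [((x - self.mean) / self.scale, y) for x, y in data]

    def train(self, data, epochs=200, lr=0.1):
        # Plain stochastic gradient descent on squared error.
        for _ in range(epochs):
            for x, y in data:
                error = self.w * x + self.b - y
                self.w -= lr * error * x
                self.b -= lr * error
        return self

    def predict(self, x):
        # Apply the same normalization used during training.
        return self.w * (x - self.mean) / self.scale + self.b


# One consistent flow from raw records to a prediction:
pipe = Pipeline()
data = pipe.normalize(pipe.ingest([(1, 2), (2, 4), (3, 6), (4, 8)]))
pipe.train(data)
print(round(pipe.predict(5), 1))  # → 10.0
```

The point is the shape, not the model: a real pipeline would swap the toy linear model for a GPU-accelerated framework like TensorFlow or MXNet while keeping data handling, training, and prediction behind the same single interface.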

One obvious home for the stack is Nvidia’s own newly announced, Tesla V100-powered DGX-1 system. Those who shell out the $149,000 for that behemoth won’t have to cobble together their own software stack to make it more than a 960-teraflop space heater. Instead, they can use GPU Cloud as a sort of deep learning OS for the device.

Nvidia also hints at how GPU Cloud can run in the cloud and use GPU resources there. No details have been provided yet about cloud integration, but it seems inevitable that it would include any of the major commodity clouds with GPU support. Such clouds could be used for native data storage as well as GPUs, so data could be gathered, stored, and processed all in the same domain.

It’s not clear what varieties of elasticity will be possible with GPU Cloud, but it looks like horizontal scaling will be possible, and not only across the same kind of hardware. “Users can start with a single GPU on a PC and add more compute resources on demand with a DGX system or through the cloud,” according to the press release. What remains to be seen is how readily local resources can be scaled out to a cloud or vice versa.

Little of this is unique on its own—GPU hardware, deep learning frameworks, and cloud instances are all widely available—but Nvidia is combining them in configurations that competitors would have a hard time matching.