Most major cloud vendors have, or are preparing, support for GPU-enabled compute instances. But IBM hopes its next step will keep it ahead of the pack, or at least abreast of it.
IBM plans to make available instances of Nvidia’s current-generation GPU, the Tesla P100, via its Bluemix cloud service. But initially the instances will be available only on bare-metal machines, not via the more malleable VM instance types offered by some of the competition.
Tesla on metal
The Tesla P100 is regarded as the leader of Nvidia’s GPU pack. It uses the Pascal GPU architecture, which is not only speedier overall than the Kepler-powered processors introduced in 2012 but also includes new types of GPU instructions to accelerate certain calculations. Software that takes advantage of the Pascal instruction set, like the Torch deep learning framework, runs even faster.
Other clouds offer Nvidia GPUs as well, although many only offer previous-generation silicon. Google Cloud Platform, for instance, plans to offer Tesla P100s but currently only offers the Kepler-powered Tesla K80 line. Microsoft Azure’s GPU-powered offerings also use the K80, but GPU instances in general are currently available only as a technology preview. And Amazon Web Services’ highest-end GPU-powered instance type, the P2, uses the K80 as well.
IBM also sells a line of standalone servers that employ the Tesla P100 and are built mainly for running AI applications.
It’s not clear when IBM will provide support for the P100 outside of bare-metal instances, but there’s a growing list of reasons to do so. For one, clouds are becoming increasingly synonymous with container-powered workloads, and the latest container technologies are better equipped to run work on GPUs. Docker has an extension that allows it, and Google’s container-orchestration project Kubernetes added support for managing GPU-powered workloads last year.
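To illustrate what Kubernetes-level GPU scheduling looks like, here is a minimal pod spec sketch that asks the scheduler for a single Nvidia GPU. It assumes the alpha-stage resource name used by early Kubernetes GPU support; the pod and container names are hypothetical, and newer clusters expose GPUs differently (via device plugins), so treat this as a sketch rather than a current reference:

```yaml
# Sketch: a pod requesting one Nvidia GPU as a schedulable resource.
# The alpha.kubernetes.io/nvidia-gpu name matched Kubernetes' early
# (alpha) GPU support; later releases use a device-plugin resource
# such as nvidia.com/gpu instead.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test              # hypothetical pod name
spec:
  restartPolicy: Never
  containers:
  - name: cuda-container
    image: nvidia/cuda        # public CUDA base image
    command: ["nvidia-smi"]   # print visible GPUs, then exit
    resources:
      limits:
        alpha.kubernetes.io/nvidia-gpu: 1   # ask for one GPU
```

The key point is the resource limit: the scheduler will only place the pod on a node that advertises a free GPU, which is the kind of placement logic bare-metal-only offerings leave to the customer.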
Bare-metal instances are a start, so the next logical step seems to be further enabling top-of-the-line GPU support for workloads running somewhere other than bare metal, especially since Google already has a leg up on IBM.