Nvidia is extending its solution footprint far beyond artificial intelligence (AI) and gaming, venturing broadly across the entire computing ecosystem into mobility and the next-generation cloud data center.
Nvidia’s ambitions in this regard are clear from its proposed acquisition of Arm Technology and from CEO Jensen Huang’s positioning of the company as a data-center-scale computing provider. Demonstrating that he’s putting substantial R&D dollars behind this vision, Huang announced the rollout of the company’s new BlueField “data processing unit” (DPU) chip architecture at the virtual GPU Technology Conference (GTC) this month.
Accelerating diverse workloads through programmable CPU offload
Strategically, the BlueField DPU builds on two of Nvidia’s boldest recent moves: its proposed acquisition of Arm and its purchase of Mellanox. The new hardware runs on Arm’s CPU architecture, and it incorporates the high-speed interconnect technology that Nvidia acquired with Mellanox.
Marking the company’s evolution beyond a GPU-centric product architecture, Nvidia’s new DPU architecture is a high-performance, multicore SoC (system on chip). BlueField DPUs incorporate software-programmable data-processing engines that can accelerate a wide range of AI, networking, acceleration, virtualization, security, storage, and other enterprise workloads.
As the foundation of server-based data center infrastructure services, DPUs offload workloads from CPUs while efficiently parsing, processing, and transferring high volumes of data at line speed. In addition to their CPU-offload acceleration benefits, Nvidia’s DPUs can strengthen data center security because their embedded Arm cores provide an added layer of isolation between security services and CPU-executed applications.
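The offload pattern described above can be illustrated abstractly. The following is a generic sketch, not the DOCA API: the host hands per-packet work to a dedicated engine through a queue, freeing its own cycles for application logic; in a real deployment the BlueField DPU plays the engine’s role in hardware, at line rate.

```python
import threading
import queue
import zlib

# Illustrative only: a software stand-in for hardware offload.
# The "offload engine" thread models the DPU; the main thread models
# the host CPU, which enqueues work instead of doing it inline.
work_queue = queue.Queue()
results = []

def offload_engine():
    """Drain the queue, checksumming each packet (the offloaded work)."""
    while True:
        packet = work_queue.get()
        if packet is None:          # sentinel: no more work
            break
        results.append(zlib.crc32(packet))

engine = threading.Thread(target=offload_engine)
engine.start()

# Host side: hand packets off rather than checksumming them inline.
for i in range(4):
    work_queue.put(bytes([i]) * 64)
work_queue.put(None)
engine.join()

print(len(results))  # 4 checksums computed off the host's critical path
```

The design point is the hand-off itself: once the work descriptor is enqueued, the host thread never touches the packet payload again, which is what makes line-rate processing on a separate processor possible.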
Announced at this latest GTC were the first members of this new DPU SoC family, the BlueField-2 and BlueField-2X, along with the DOCA software development kit (SDK) for programming them.
Currently available only through a limited early-access program, the DOCA SDK enables developers to program applications on BlueField-accelerated data center infrastructure services, offloading CPU workloads to BlueField DPUs. This new offering builds out Nvidia’s enterprise developer tools, complementing the CUDA programming model that enables development of GPU-accelerated applications. In addition, the SDK is fully integrated into the NGC catalog of containerized software, thereby encouraging third-party application providers to develop, certify, and distribute DPU-accelerated applications.
Several leading software vendors announced plans at GTC to integrate their wares with the new DPU/DOCA acceleration architecture in the coming year. In addition, Nvidia announced that several leading server manufacturers, including Fujitsu and H3C, plan to integrate the DPU into their respective products in the same timeframe.
Given how cloud-driven the mobile productivity experience has become, it would not be surprising if, in coming years, more of the mobile experience on Office and other apps were accelerated by leveraging DPU-offload technology in the data centers that serve them.
Enabling Nvidia solutions to lessen their dependency on GPU-centric functionality
Nvidia’s product teams are wasting no time incorporating the DPUs’ CPU-offload acceleration into their solutions. Most notably, Huang announced that the architecture is evolving to combine a BlueField DPU and an Ampere GPU on a single PCIe card.
Although there was no specific BlueField DPU tie-in to Jetson, the company’s Arm-based SoC platform for AI robotics, one should expect that the DOCA SDK will advance to support development of robotics applications, a hot growth field for Nvidia’s core platforms. It’s also a safe bet that the company will use its new hardware and SDK to accelerate its Omniverse platform for collaborative 3-D content production, its Jarvis platform for conversational AI, and its new Maxine platform for cloud-native, AI-accelerated video streaming.
Nvidia’s new BlueField DPU architecture and DOCA SDK provide a strategic platform for broadening its reach into enterprise, service provider, and consumer opportunities of all types.
By enabling hardware-accelerated CPU-offload of diverse workloads, the DPU architecture provides Nvidia with a clear path for converging the new DOCA programming models with its AI development framework and catalog of containerized cloud solutions. This will enable the company to provide both its own product teams and solution partners with the hardware and software platforms needed to accelerate a full range of application and infrastructure workloads from cloud to edge.
As it awaits the eventual approval of its proposed acquisition of Arm Technology, Nvidia will need to prove this new architecture to its existing partner ecosystem. If DPU technology falls short of Nvidia’s aggressive performance promises, that deficiency could sour relations with Arm’s vast array of licensees, all of whom rely heavily on Arm’s CPU architecture and would benefit from more seamless integration with Nvidia’s market-leading AI technology.
Clearly, Nvidia cannot afford to lose momentum in the cloud-to-edge microprocessor wars just when it has begun to pull away from archrival and CPU powerhouse Intel.
Copyright © 2020 IDG Communications, Inc.