Getting started with Azure Remote Rendering

Microsoft’s HoloLens 2 mixed reality headset is now shipping, offering improved image resolution and an increased field of view over the original HoloLens. It’s an interesting device, built on ARM hardware rather than Intel for better battery life, and designed from the ground up for augmented reality.

What HoloLens 2 can do is amazing, but what it can’t do may be the more interesting aspect of the platform, and of the capabilities we should expect from devices at the edge of the network. We’re used to the high-end graphical capabilities of modern PCs, which can render 3D images on the fly with near-photographic quality. With much of HoloLens 2’s compute dedicated to building a 3D map of the world around the wearer, there’s not a lot of processing left over to generate 3D scenes on the device as they’re needed, especially as those scenes need to be tied to the user’s current viewpoint.

With viewpoints that can be anywhere in the 3D space of a scene, we need a way to quickly render environments and deliver them to the device. The device can then overlay them on the real environment, building the expected view and displaying it through HoloLens 2’s MEMS-based (microelectromechanical systems) holographic lenses as a blended mixed reality.

Rendering in the cloud

One option is to take advantage of cloud-hosted resources to build those renders, using the GPU (graphics processing unit) capabilities available in Azure. Location and orientation data can be delivered to an Azure application, which can then render the scene and deliver it to the edge device for display, using standard model formats.
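
To make that concrete, here is a minimal sketch of how an application might request a cloud rendering session, using Azure Remote Rendering’s session-management REST API from Python. Treat everything beyond the general flow as an assumption: the endpoint shape, api-version string, request body fields, and account values are placeholders based on the public ARR REST reference rather than anything in this article, so check the current documentation before relying on them.

    import requests

    # Placeholder account details; substitute values from your own Azure
    # Mixed Reality account (these are illustrative assumptions, not real IDs).
    ACCOUNT_ID = "00000000-0000-0000-0000-000000000000"
    REGION_ENDPOINT = "https://remoterendering.eastus2.mixedreality.azure.com"
    ACCESS_TOKEN = "<bearer token from the Mixed Reality STS service>"
    SESSION_ID = "demo-session-01"

    def start_session() -> dict:
        """Ask Azure to lease a GPU VM that renders on the device's behalf."""
        # Assumed endpoint shape and api-version; verify against the
        # current Azure Remote Rendering REST reference.
        url = (f"{REGION_ENDPOINT}/accounts/{ACCOUNT_ID}"
               f"/sessions/{SESSION_ID}?api-version=2021-01-01")
        body = {
            "maxLeaseTimeMinutes": 30,  # sessions are billed while leased
            "size": "standard",         # "premium" for larger scenes
        }
        response = requests.put(
            url,
            json=body,
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        )
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        session = start_session()
        # Once the session reports "Ready", the device-side runtime connects,
        # streams the wearer's pose up, and receives rendered frames back.
        print(session.get("status"), session.get("id"))

The important design point is the division of labor: the headset sends only lightweight pose data upstream, while the heavy lifting of loading models and rasterizing each frame happens on the Azure GPU, with the result streamed back for the device to composite over the real world.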
