Tools such as Kubernetes go a long way toward simplifying the process of building distributed applications at scale. But they’re only part of the story, offering ways to replicate containerized microservices across host systems. If we’re to get the benefits of an abstracted data-center-level operating system, we need to consider how we manage networking at scale, especially when that scale is the size of a massive public cloud like Azure.

One answer to the problem is a service mesh. Best thought of as an abstraction of the control plane of a software-defined network, a service mesh is a software layer that manages the interprocess communication needed to support your code.

A well-designed service mesh is a lot more than a simple networking layer. It provides most of the functions you find in network appliances, such as load balancing and encryption, along with tools for modern systems management models, like observability. It also supports critical distributed application functions such as service discovery, so different application tiers can scale at different rates while the application keeps working.
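To see what service discovery buys the application developer, here's a minimal Go sketch of a front-end tier calling a hypothetical "orders" service by a stable Kubernetes DNS name. The service name, namespace, and endpoint are placeholder assumptions, not from any particular mesh; the point is that in a meshed cluster the sidecar transparently layers load balancing, retries, and mutual TLS onto what is, to the application, a plain HTTP call.

```go
// caller.go: a sketch of a front-end tier calling another tier by its
// service name rather than by IP address. The "orders" service and its
// /health endpoint are hypothetical placeholders.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// The caller only knows a stable service name. Kubernetes service
	// discovery (and the mesh sidecar in front of it) decides which of
	// the currently running replicas answers, so the orders tier can
	// scale up or down without this code changing.
	resp, err := http.Get("http://orders.default.svc.cluster.local/health")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("orders tier answered: %s\n", body)
}
```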

By implementing all these capabilities in sidecar modules, service meshes fill in many of the gaps in delivering applications on Kubernetes and across clouds. You can build distributed applications without a service mesh, but you’ll be reinventing the wheel. With Azure focusing much of its distributed application development model around Kubernetes, there’s an increasing need for service meshes in Microsoft’s cloud.
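The sidecar idea itself is simple enough to sketch. The Go program below is a toy stand-in for a mesh proxy such as Envoy or linkerd2-proxy: it runs next to the application container, owns the pod's inbound port, forwards traffic to the app on localhost, and records request latency as a stand-in for the mesh's observability features. The port numbers are arbitrary assumptions, and a production proxy does far more (mTLS, retries, policy enforcement), but the shape is the same: the application code never changes.

```go
// sidecar.go: a minimal sketch of the sidecar proxy pattern. Real mesh
// proxies are far more capable; this only illustrates the idea of
// intercepting traffic in a process that sits next to the application.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// Assumption: the application container listens on localhost:8080,
	// and the sidecar is what the rest of the mesh actually talks to.
	app, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(app)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		proxy.ServeHTTP(w, r)
		// Observability hook: a real mesh would feed metrics and traces
		// to its control plane rather than writing a log line.
		log.Printf("%s %s took %v", r.Method, r.URL.Path, time.Since(start))
	})

	// The sidecar owns the pod's inbound port (15001 is an arbitrary
	// choice here); encryption, retries, and policy would be layered in
	// at this point without touching the application code.
	log.Fatal(http.ListenAndServe(":15001", handler))
}
```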