Microservice architectures solve some problems but introduce others. Dividing applications into independent services simplifies development, updates, and scaling. At the same time, it gives you many more moving parts to connect and secure. Managing all of the network services — load balancing, traffic management, authentication and authorization, etc. — can become stupendously complex. 

There is a collective term for this networked space between the services in your Kubernetes cluster: a service mesh. Istio, an open source project started by Google, IBM, and Lyft, is all about giving you a way to manage your cluster’s service mesh before it turns into a bramble-snarl.

What is a service mesh?

With any group of networked applications, there is a slew of common behaviors that tend to spring up around them. Load balancing, for instance: there are few cases where a group of networked services doesn’t need it. Likewise the ability to A/B test different combinations of services, or to set up end-to-end authentication across chains of services. These behaviors, and the plumbing that provides them, are collectively referred to as a service mesh.

Managing the service mesh shouldn’t be left to the services themselves. None of them are in a good position to do something that top-down, and it really shouldn’t be their job anyway. Better to have a separate system that sits between the services and the network they talk to. This system would supply two key functions:

  1. Keep the services themselves from having to deal with the nitty-gritty of managing network traffic—load balancing, routing, retries, etc.
  2. Provide a layer of abstraction for admins, making it easy to enact high-level decisions about network traffic in the cluster—policy controls, metrics and logging, service discovery, secure inter-service communications via TLS, and so on.
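
To make that second point concrete, here is a minimal sketch, using the official Kubernetes Python client, of the kind of high-level traffic decision an admin can express with Istio: a VirtualService that splits traffic 90/10 between two versions of a hypothetical reviews service. The service name, namespace, and weights are assumptions for illustration.

    # Sketch: apply an Istio VirtualService that routes 90% of traffic to v1
    # of a hypothetical "reviews" service and 10% to v2. Assumes a cluster
    # with Istio installed and the official Kubernetes Python client
    # ("pip install kubernetes") configured via ~/.kube/config.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    virtual_service = {
        "apiVersion": "networking.istio.io/v1beta1",
        "kind": "VirtualService",
        "metadata": {"name": "reviews", "namespace": "default"},
        "spec": {
            "hosts": ["reviews"],
            "http": [{
                "route": [
                    {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                    {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
                ],
            }],
        },
    }

    api.create_namespaced_custom_object(
        group="networking.istio.io",
        version="v1beta1",
        namespace="default",
        plural="virtualservices",
        body=virtual_service,
    )

The same resource could just as well be applied from a YAML file with kubectl; the point is that the routing decision lives in a single declarative object rather than in any one service’s code. (In practice the v1 and v2 subsets would also be defined in a companion DestinationRule.)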

Istio service mesh components

Istio works as a service mesh by providing two basic pieces of architecture for your cluster, a data plane and a control plane.

The data plane handles network traffic between the services in the mesh. All of this traffic is intercepted and redirected by a network proxying system. In Istio’s case, the proxy is provided by an open source project called Envoy, which runs as a sidecar alongside each service. A second component in the data plane, Mixer, gathers telemetry and statistics from Envoy and the flow of service-to-service traffic.
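
The interception works because Istio injects the Envoy proxy into each pod as a sidecar container. One common way to turn that on is to label a namespace for automatic injection, as in this sketch with the Kubernetes Python client; the namespace name is an assumption, while istio-injection=enabled is the label Istio’s injection webhook looks for.

    # Sketch: label a namespace so Istio's admission webhook automatically
    # injects the Envoy sidecar proxy into every new pod created there.
    # Assumes Istio is installed and the Kubernetes Python client is configured.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    # "default" is just an example; use your application's namespace.
    core.patch_namespace(
        "default",
        {"metadata": {"labels": {"istio-injection": "enabled"}}},
    )

    # From now on, pods created in that namespace get a second container
    # (the Envoy proxy) that transparently handles their inbound and
    # outbound traffic.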

The control plane, for its part, handles configuration and management of those proxies, and it is where admins define the policies the mesh enforces. One example is the circuit breaker, a way to prevent a service from being bombarded with requests if the back end reports trouble and can’t fulfill them in a timely way; Istio provides the circuit breaker pattern as part of its standard library of policy enforcements.
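
In Istio, circuit breaking is configured through a DestinationRule’s traffic policy. The sketch below, again via the Kubernetes Python client, caps connections and pending requests for a hypothetical backend service and ejects hosts that keep returning errors; the service name and every numeric threshold here are illustrative assumptions, not recommended values.

    # Sketch: a DestinationRule applying circuit-breaker-style limits to a
    # hypothetical "backend" service. All thresholds are illustrative.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    destination_rule = {
        "apiVersion": "networking.istio.io/v1beta1",
        "kind": "DestinationRule",
        "metadata": {"name": "backend-circuit-breaker", "namespace": "default"},
        "spec": {
            "host": "backend",
            "trafficPolicy": {
                # Cap how many connections and pending requests a host gets.
                "connectionPool": {
                    "tcp": {"maxConnections": 100},
                    "http": {"http1MaxPendingRequests": 10,
                             "maxRequestsPerConnection": 1},
                },
                # Temporarily eject hosts that keep returning errors, so
                # callers stop hammering a back end that can't keep up.
                "outlierDetection": {
                    "consecutive5xxErrors": 5,
                    "interval": "30s",
                    "baseEjectionTime": "60s",
                    "maxEjectionPercent": 50,
                },
            },
        },
    }

    api.create_namespaced_custom_object(
        group="networking.istio.io",
        version="v1beta1",
        namespace="default",
        plural="destinationrules",
        body=destination_rule,
    )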

Finally, while Istio works most directly and deeply with Kubernetes, it is designed to be platform independent. Istio plugs into the same open standards that Kubernetes itself relies on. Istio can also work in a stand-alone fashion on individual systems, or on other orchestration systems such as Mesos and Nomad.

How to get started with Istio

If you already have experience with Kubernetes, a good way to learn Istio is to take a Kubernetes cluster—not one already in production!—and install Istio on it. Then you can deploy a sample application that demonstrates common Istio features like traffic routing and telemetry collection. This should give you some ground-level experience with Istio before deploying it for service-mesh duty on your application cluster.
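
If you prefer to script those first steps, the sketch below simply shells out to istioctl and kubectl to install Istio’s demo profile and deploy the Bookinfo sample application. It assumes both tools are on your PATH, your kubeconfig points at a disposable cluster, and you are running from an unpacked Istio release (the Bookinfo manifest ships in that bundle).

    # Sketch: install Istio (demo profile) on a *non-production* cluster and
    # deploy the Bookinfo sample app. Assumes istioctl and kubectl are on
    # PATH and the working directory is an unpacked Istio release.
    import subprocess

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Install Istio with the demo profile, which enables the full feature
    # set for experimentation.
    run("istioctl", "install", "--set", "profile=demo", "-y")

    # Turn on automatic Envoy sidecar injection for the default namespace.
    run("kubectl", "label", "namespace", "default",
        "istio-injection=enabled", "--overwrite")

    # Deploy the Bookinfo sample application that ships with the release.
    run("kubectl", "apply", "-f", "samples/bookinfo/platform/kube/bookinfo.yaml")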

Red Hat, which has invested in Istio as part of the company’s Kubernetes-powered OpenShift project, offers tutorials that walk you through common Istio deployment and management scenarios.