When it comes to building distributed applications at scale, containers have become the logical deployment tool. They let you wrap up code at a service level, keeping your application separate from its data. Once deployed, orchestration tools such as Kubernetes manage scaling, monitor CPU and memory usage, and deploy new container instances as necessary.

At heart, it’s a relatively simple way to think about your code. In practice, however, there’s a lot of configuration work to be done: understanding how your code is partitioned, defining the correct logical groupings of containers and services, building the links between your container deployment and external storage, and making sure it all behaves as intended. Much of that is, of course, controlled by Kubernetes, but that means getting deep into its configuration and writing the appropriate YAML configuration files.
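To give a sense of what that involves, here is a minimal sketch of a single Kubernetes Deployment. The service name, registry URL, and resource figures are placeholders, not taken from any real system:

# A minimal Deployment: three replicas of one container, with the
# resource requests and limits Kubernetes uses for scheduling
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: myregistry.azurecr.io/my-service:1.0.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi

Multiply that by every service in an application, plus the Service, ConfigMap, and storage definitions each one depends on, and the scale of the configuration work becomes clear.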

Introducing AKS

With all that YAML, running Kubernetes for yourself isn’t particularly easy. There’s a lot to consider when designing pods and rules and setting up higher-level monitoring. You need to define nodes, set up masters, and manage the entirety of your distributed infrastructure. That’s where AKS, the Azure Kubernetes Service, comes in. It’s a way of handing over much of your Kubernetes management to Azure. All you need to do is define your nodes and AKS does the rest. You pay for the compute resources your code uses, and you don’t have to worry about running masters, only the agent nodes.
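As a rough illustration of how little that leaves you to manage, standing up a cluster comes down to a handful of Azure CLI calls. The resource group and cluster names here are invented, and the exact flags can differ between CLI versions:

# Create a resource group to hold the cluster
az group create --name myResourceGroup --location eastus

# Create a three-node AKS cluster; Azure runs the control plane for you
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 3 \
    --generate-ssh-keys

# Fetch credentials so kubectl talks to the new cluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Only the agent nodes are listed; the masters are Azure's concern
kubectl get nodes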