Containers are fast becoming the unit of packaging and deployment for enterprise applications. Many in IT still see containers as merely the next step in the logical progression that began with the move from physical servers to virtual machines, bringing with it another order-of-magnitude increase in compute density: far more containers can run on a single host than VMs can run on a physical server.
While this view correctly recognizes that containers represent another explosion in the number of things IT needs to manage, it misses the most important change brought about by the container ecosystem: the fundamental shift in the software delivery workflow that containers enable.
In the traditional software delivery workflow, two separate teams are responsible for different layers of the stack: operations teams own the operating system image, and development teams own the application artifacts. In this workflow, application artifacts and their dependencies are delivered from development to operations using OS packaging constructs (RPMs, MSIs, and so on). The ops team then deploys those artifacts on “blessed” OS images that meet the organization’s policies and include additional monitoring and logging software, and the composite image is run in production. Dev evolves the application by handing new packages to ops, and ops deploys those updates, as well as any other updates (such as patches that address operating system vulnerabilities), using scripts or configuration management software.
Container-based software delivery is different
The container delivery workflow is fundamentally different. Dev and ops collaborate to create a single container image, composed of different layers. These layers start with the OS, then add dependencies (each in its own layer), and finally the application artifacts. More important, container images are treated by the software delivery process as immutable: any change to the underlying software requires a rebuild of the container image. Container technology, and Docker images in particular, have made this far more practical than earlier approaches such as VM image construction by using union file systems to compose a base OS image with the application and its dependencies; a change to a layer requires rebuilding only that layer and the layers above it. This makes each container image rebuild far cheaper than recreating a full VM image. In addition, well-architected containers run only one foreground process, which dovetails with the practice of decomposing an application into well-factored pieces, often referred to as microservices. As a result, container images are far smaller and easier to rebuild than typical OS images, and they take much less time to deploy and boot.
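The layering described above maps directly onto a Dockerfile. The following is a minimal sketch, not a production image; the base image, package, and file names are illustrative:

```dockerfile
# OS layer: a shared base image, cached across builds
FROM debian:bookworm-slim

# Dependency layer: rebuilt only when this instruction changes
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 \
    && rm -rf /var/lib/apt/lists/*

# Application layer: editing app.py invalidates only this layer and below
COPY app.py /srv/app.py

# One foreground process per container
CMD ["python3", "/srv/app.py"]
```

Because Docker caches each layer, a rebuild after a change to `app.py` reuses the cached OS and dependency layers and re-executes only the final `COPY` and `CMD` steps, which is what makes frequent image rebuilds cheap in practice.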