Linux containers have taken infrastructure computing by storm. What started with Linux VServer, OpenVZ, cgroups, and LXC has gained momentum with the rise of Docker. You can think of a container as a lightweight VM that virtualizes the Linux kernel instead of hardware. This up-leveling of virtualization brings better performance, higher density, and portability across public, private, and hybrid clouds.
Containers are like VMs in many respects but are typically used for running individual daemons or services rather than multiple services or monolithic applications. Containers make creating and upgrading applications easy, but also introduce complexity via the increased number of instances and interdependencies to manage. This complexity gave rise to container orchestration systems like Mesos, Docker Swarm, and Kubernetes.
An open source container orchestration system born out of Google’s internal infrastructure and experience running containers, Kubernetes is now part of the Cloud Native Computing Foundation (CNCF). Kubernetes provides provisioning, scaling, upgrade/rollback, and service discovery capabilities out of the box, which make it ideal for deploying cloud-native applications.
Kubernetes differs from other container orchestration systems in a number of ways, including its approach to high availability, load balancing, and autoscaling, to name a few of its most popular features. (See the Platform9 blog for brief comparisons of Kubernetes with other orchestration systems.)
The easiest way to get started is Minikube, which provides a local development environment for MacOS, Linux, and Windows. Other deployment tools are available as well.
While you can install and run Kubernetes on a development machine using Minikube or choose other tools to deploy it to a public cloud or in a private datacenter, these options don’t address the following concerns:
- Upgrading: Kubernetes is going through a rapid development process with features and bug fixes in every release. Upgrading Kubernetes with new patches can be a time-consuming process, taxing an already constrained devops team.
- Monitoring: Kubernetes is a set of services with dependencies on other components like Docker, Flannel/Calico, storage systems, and OS services. Continuous monitoring to ensure the uptime of the Kubernetes services is critical to maintaining application uptime.
- Support: Kubernetes is great, but like any software project it has both known and unknown bugs. Who will patch and support it?
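To illustrate the monitoring concern above: the Kubernetes API server exposes a `/healthz` endpoint that returns the plain-text body `ok` when healthy, which makes a basic uptime probe straightforward. A minimal sketch in Python follows; the address used at the bottom is a placeholder for your own API server endpoint.

```python
import urllib.request
import urllib.error


def is_healthy(body: str) -> bool:
    """Kubernetes health endpoints return the plain-text body 'ok' when up."""
    return body.strip() == "ok"


def check_healthz(base_url: str, timeout: float = 5.0) -> bool:
    """Probe <base_url>/healthz and report whether the server answered 'ok'."""
    try:
        with urllib.request.urlopen(f"{base_url}/healthz", timeout=timeout) as resp:
            return is_healthy(resp.read().decode())
    except (urllib.error.URLError, OSError):
        return False  # an unreachable API server counts as unhealthy


if __name__ == "__main__":
    # Placeholder address; substitute your cluster's API server endpoint.
    print("healthy" if check_healthz("https://127.0.0.1:6443") else "unhealthy")
```

A real monitoring setup would run a probe like this on a schedule and alert on repeated failures, and would cover the other components (Docker, the network overlay, storage) as well.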
Kubernetes as a service
For teams that do not have the time and resources to invest in building a Kubernetes infrastructure, a managed Kubernetes service such as Platform9 Managed Kubernetes represents a low-overhead option.
Platform9 Managed Kubernetes is provided as a service on both private and public cloud infrastructures. It allows IT and development organizations to consume the Kubernetes service without the burden of managing it—much like Amazon’s EC2 or RDS and other “as a service” offerings popular among enterprise IT shops.
Log in to the Platform9 portal using the credentials provided in the support email. After logging in, select the option to create a cluster on a public cloud provider or in a private datacenter. For a public cloud provider, deployment can be done in a few simple steps.
Configure the cloud provider
- Choose the public cloud provider (AWS in this example) and give the provider a name.
- Add your AWS access key ID and AWS secret key. If you need to look them up, go to IAM > Users in your AWS console and open the Security credentials tab.
After configuring the cloud provider, it’s time to create a cluster.
Create your cluster
- Select the cloud provider created above.
- Select autodeploy cluster (only available for public cloud).
- Select the AWS region where you want your cluster to be created.
Configure your cluster
- Select the availability zones. Remember to pick more than one for high availability.
- Select the operating system of the nodes to create.
- Select the number of worker nodes to create.
- Add or create the SSH key.
Provide network information
- Select the base domain to be used by API and services domains (FQDNs).
- Pick a VPC or create a new one.
- Confirm or edit API and service domains (FQDNs).
- Confirm or change container and services IP addresses (CIDRs) to make sure they do not conflict with the network configuration.
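The conflict check in the last step can be done programmatically with Python's standard `ipaddress` module, which tests whether two CIDR blocks overlap. The ranges below are hypothetical examples, not values from the Platform9 UI.

```python
import ipaddress


def cidrs_conflict(cidr_a: str, cidr_b: str) -> bool:
    """True if the two CIDR blocks share any addresses."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))


# Hypothetical values: an existing VPC subnet vs. candidate cluster CIDRs.
vpc_subnet      = "10.0.0.0/16"
containers_cidr = "10.0.128.0/17"   # conflicts: nested inside the VPC range
services_cidr   = "10.96.0.0/12"    # safe: 10.96.x.x does not touch 10.0.x.x

print(cidrs_conflict(vpc_subnet, containers_cidr))  # True
print(cidrs_conflict(vpc_subnet, services_cidr))    # False
```

Running a check like this against every subnet in your VPC before clicking Finish can save a painful debugging session later.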
Review your entries and click Finish. Note that cluster creation generally takes a few minutes, though the time will vary depending upon the number of workers and availability zones.
Kubernetes test-drive: On-premises deployment
Platform9 Managed Kubernetes provides similar capabilities in terms of cluster creation for an on-premises installation. The procedure is slightly different and involves a few more steps.
Install the Platform9 agent
- Log in to the Platform9 portal.
- You will be prompted to add a node.
- Download the appropriate agent. Platform9 supports the Red Hat (Fedora, Scientific Linux, CentOS) and Debian (Ubuntu) families of Linux systems.
- Use the simple CLI-based installer to install the Platform9 agent on the servers where you want to install Kubernetes.
Prepare added servers
- Shortly after the installation, a prompt will initiate the process of assigning a role to the newly added hosts.
- Authorize your servers.
Create your cluster
Now that you have prepared the hosts, it is time to create your Kubernetes cluster.
- Click on Create Cluster and select Manual Deploy.
- Specify the CIDR ranges for services and pods. These IP addresses will be internal to the Kubernetes cluster, but note that the ranges must not conflict with your existing network configuration. Such conflicts can cause issues that are difficult to debug.
- Click Finish. During the cluster creation process, the necessary images and configuration will be applied on the nodes and the servers will go into “converging” state. In a few minutes, the installation and configuration will be completed.
Congratulations! You can now start working with your Kubernetes cluster. Once the cluster is up and running, you can use the community dashboard to deploy containers and the built-in WebCLI to manage the cluster. Eventually, you'll want to download the kubeconfig file to your local machine and use the kubectl CLI for further operations.
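Once the kubeconfig file is in place, `kubectl config view -o json` emits your configuration as JSON, which is handy for scripting. A small Python sketch that lists the available contexts and the current one; the sample document below is hypothetical and trimmed to only the fields the code reads.

```python
import json


def summarize_contexts(config_json: str) -> tuple[list[str], str]:
    """Return (context names, current context) from `kubectl config view -o json` output."""
    cfg = json.loads(config_json)
    names = [ctx["name"] for ctx in cfg.get("contexts", [])]
    return names, cfg.get("current-context", "")


# Hypothetical kubeconfig in JSON form, reduced to the fields used above.
sample = """
{
  "contexts": [
    {"name": "aws-cluster", "context": {"cluster": "aws-cluster", "user": "admin"}},
    {"name": "onprem-cluster", "context": {"cluster": "onprem-cluster", "user": "admin"}}
  ],
  "current-context": "aws-cluster"
}
"""

names, current = summarize_contexts(sample)
print(names)    # ['aws-cluster', 'onprem-cluster']
print(current)  # aws-cluster
```

In practice you would pipe the real command output into a script like this, for example `kubectl config view -o json | python summarize.py`, rather than embedding a sample string.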
As the need for container orchestration systems continues to increase, organizations should look to Kubernetes for its many benefits. With Kubernetes-as-a-service offerings, teams can bypass steep learning curves by abstracting away Kubernetes management and maintenance duties. Why spend valuable time and resources building and maintaining the infrastructure? Those resources are much better spent on building and managing your containers and applications.
New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to .