Containers, Docker, and Kubernetes Part 1

Looking at containerized infrastructure

“Shipping containers at Clyde” by Steven Gibson is licensed under CC BY 2.0.

As a long-time operations engineer, I value simplicity and reproducibility above all else on my servers. My first and most important rule of server management is “Never manually modify the server.” All servers must be provisioned and configured via a tool designed to keep servers at a known state. My tool of choice for many years has been Chef. Other tools in this space include Ansible, Salt, and Puppet.
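To make that rule concrete, here is a minimal sketch of what a Chef recipe looks like (the nginx package and the template name are illustrative, not from a real cookbook): you declare the state you want, and Chef converges the server to it on every run.

```ruby
# Declare desired state; Chef makes the server match it.
package "nginx"

service "nginx" do
  action [:enable, :start]
end

# Hypothetical site template shipped with this cookbook.
template "/etc/nginx/sites-available/default" do
  source "default-site.erb"
  notifies :reload, "service[nginx]"
end
```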

Chef serves us well at Collective Idea in managing many internal and client projects. But I’ve come to realize that all tools in this space share one unfortunate flaw: they don’t deal well with change. Production applications and infrastructure are complex creatures with many moving parts and often numerous implicit and explicit dependencies, all of which can change at any time, for any reason. Some aspects of operations are easy to change, such as updating application configuration or tweaking system settings. Others are more difficult, like upgrading the version of Ruby your app is running without incurring downtime. And some changes, like rebooting a server to apply a new kernel, still require manual intervention.

In short, I’ve found over my many years of using Chef that it’s fun to get a new environment configured exactly how you need it, but dealing with changing requirements over time gets tedious and error-prone. Is there a better design that reduces the pain of making changes?

Immutable Infrastructure

One solution to these problems is to follow a pattern known as “immutable infrastructure.” Instead of updating servers in place, you throw the entire server away and replace it with an up-to-date version. At its core, immutable infrastructure still requires a provisioning tool like Chef to configure the initial state of the server, but once configured, these servers are never touched again. Designing for immutable infrastructure has its own complications, though: you must design around the possibility that any server can disappear, and any new server type can appear, at any time. How do you upgrade your database? How do web servers register themselves with the load balancer? How do you replace a load balancer without causing downtime?

In practice, immutable servers bring technical limitations that make the concept difficult to apply. While tools like Packer exist to simplify creating machine images, you’re still dealing with entire machine images. This leads to long build and deploy times, as you’re often moving hundreds of megabytes to gigabytes of data for a single machine.
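As a rough illustration of that workflow, here is a minimal Packer template (the AMI ID, region, and nginx install are placeholders, not from a real project). Running `packer build` against it boots a temporary instance, runs the provisioners, and snapshots the result into a new machine image, entire root disk included:

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "web-server-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo apt-get update",
        "sudo apt-get install -y nginx"
      ]
    }
  ]
}
```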

What if we could take the idea of immutable infrastructure but apply it at a much smaller scale? What if we could build an image of just the code we need, ignoring the operating system and the underlying server? Then we could deploy very small images, move around a fraction of the data, and significantly reduce the time required as well. This is exactly what’s now possible through containerization and related tooling like Docker and rkt (pronounced “rocket”). With a container-based infrastructure, the underlying servers and virtual machines become nothing more than providers of computing resources (e.g. CPU and RAM).

For the sake of terminology and searchability, and because it is the tool I use, I will be discussing Docker. However, any other containerization tool, such as rkt, applies here just as well.

Docker

Docker, at its simplest, combines files and commands into a single package (the “image”) and then asks the server or virtual machine to execute that image (the “container”). The container runs in a locked-down context, with access restricted to only what the container explicitly requests. As far as the container is concerned, it is the only software running on the server. This restriction allows a server to run many containers side by side, each unaware of what else is running.
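A minimal sketch of a Dockerfile for a hypothetical Ruby web app shows the idea: a base image, the application’s files, and the command the container should run. Nothing here touches the host itself.

```dockerfile
# Everything the app needs is declared in the image.
FROM ruby:2.2
WORKDIR /app

# Install gems first so this layer is cached between builds.
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Add the application code and declare how to run it.
COPY . .
EXPOSE 8080
CMD ["bundle", "exec", "rackup", "--host", "0.0.0.0", "--port", "8080"]
```

`docker build -t myapp:v1 .` packages this into an image, and `docker run -d -p 8080:8080 myapp:v1` starts a container from it.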

Docker images are write-once, making a Docker infrastructure an immutable infrastructure. Images and containers are never updated in place. Instead, new images are built, new containers are started from them, and old containers are shut down, all in far less time than it takes to spin full servers and virtual machines up and down.
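In practice, a deploy under this model is only a handful of commands. A sketch, reusing the hypothetical `myapp` image from above (in a real setup, a load balancer or proxy would shift traffic between the two containers):

```sh
# Build a new, immutable image; the old image is untouched.
docker build -t myapp:v2 .

# Start a container from the new image alongside the old one.
docker run -d --name myapp_v2 myapp:v2

# Once the new container is healthy, retire the old one.
docker stop myapp_v1
docker rm myapp_v1
```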

So have we moved everything to Docker here at Collective Idea? Well, no, not yet. One of the tenets of containerization is to follow the Single Responsibility Principle: prefer multiple containers that each do one thing well. Effectively, moving to a containerized infrastructure means moving to a Service Oriented Architecture (SOA), bringing about many of the same complexities and questions, including:

  • How do containers find each other and communicate?
  • How do you manage where containers run, and how many?
  • How do you gather logs and stats of running containers?
  • How do you deploy new images?
  • What happens when a container crashes?
  • How do you expose only certain containers to the Internet or an intranet?

Until recently, the effort of designing for these cases and building the infrastructure to handle a fully Dockerized setup has been prohibitive for us at Collective Idea compared with staying on our existing, well-known Chef setup. Now, however, new tools have appeared that are explicitly designed to answer these questions and provide turn-key containerized infrastructure. Docker itself has Docker Compose, but the tool we have chosen comes out of Google. It’s called Kubernetes, and it is infused with over a decade of experience running billions of containers in Google’s massive data centers.
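For a taste of how this tooling answers the first question in the list above, here is a hypothetical docker-compose.yml. Compose places both services on a shared network, so the web container can reach the database simply at the hostname `db`, and only the web container is published to the outside:

```yaml
version: "2"
services:
  web:
    image: myapp:v1     # hypothetical app image from earlier
    ports:
      - "8080:8080"     # only this container is exposed
    depends_on:
      - db
  db:
    image: postgres:9.4 # reachable from web at hostname "db"
```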

In Part 2 of this series I go over what’s special about Kubernetes and why it is our preferred solution for production Docker deployments.


To skip around to other parts of this blog series, use the links below.

Part 2 - What is Kubernetes and how does it make containerized infrastructure easy?

Part 3 - How to get started with a Kubernetes configuration


Jason is a senior developer who has worked in the front-end, back-end, and everything in between. He has a deep understanding of all things code and can craft solutions for any problem. Jason leads development of our hosted CMS, Harmony.
