Docker Containers, gently

The engineering team at Sidekicker recently moved to a fully containerised approach to deploying software. I thought it would be a good idea to write about why we did it, and how it sets us up for the long run.

To fully appreciate containers, here’s a quick crash course in software architecture (feel free to skip if you already know this).

<begin crash course>

Software is built in tiers: application code sits at the top, and beneath it are progressively lower tiers, down through the operating system to the physical hardware itself.

This is powerful because each tier conceals the complexities of the tiers below it. The higher the tier, the more an engineer can focus on what the software needs to achieve, rather than the details of how it will be achieved.

For example, an engineer could instruct the computer with a line of code at the “Application Code” tier:

print "Hello World!"

As the instruction is carried out, it is passed down the tiers and reinterpreted in increasingly complex terms, ultimately resulting in a set of physical pixels being activated on an LCD display.

Nowhere in that line of code does the engineer specify the exact coordinates of the pixels to activate, or what colour they should be. The lower tiers take care of those complications.

However, there is a price to pay for this architecture: in order for even the simplest programs to run, many tiers of software need to interact in exactly the right way. This works fine for the most part, but just like the apps on your smartphone, new improved versions of components in these tiers are continually released.

To further complicate things, these tiers vary depending on the tiers beneath them. For example, while a web application ultimately runs on the Linux operating system of a web server, it is typically developed and tested on Windows or macOS on an engineer’s laptop.

If you’ve ever heard an engineer say “but it worked on my laptop”, this is why.

So how do containers help with this?

<end crash course>

A containerisation system (technical name: operating-system-level virtualisation) allows very specific versions of the components in the tiers above the “Operating System” to be captured in a “container”.

It then standardises the way a container interacts with the operating system and, consequently, the hardware tiers below. If you’re familiar with the way a standard shipping container easily fits any ship, freight train or trailer truck, that’s where the name comes from.
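
To make this a little more concrete, here is a rough sketch of what such a capture might look like with Docker, the containerisation system we use. The base image, packages and file names below are purely illustrative rather than our actual configuration:

# Start from a pinned version of an operating system base image
FROM ubuntu:22.04

# Install the tier components the application depends on (exact versions can be pinned here)
RUN apt-get update && apt-get install -y python3 python3-pip

# Copy the application code into the container and declare how to run it
COPY . /app
WORKDIR /app
CMD ["python3", "app.py"]

Every engineer and every server builds from the same recipe, so the resulting containers are effectively identical, right down to the component versions.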

Deploying with containers reduces the likelihood of something “working on my laptop” but not in “production”. This is because whatever is running “on my laptop” is practically a replica of whatever is running in “production”.

Now, as newer versions of tier components are released, an engineer can make the desired changes within a container, test the changes and capture an exact image of the container to be replicated in “production”.
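
For the curious, with Docker this capture-and-replicate step boils down to a couple of commands. The image name, version tag and registry address here are made up for illustration:

# Capture an exact image of the tested container
docker build -t registry.example.com/web-app:1.4.2 .

# Upload it to a registry so production servers can pull the very same image
docker push registry.example.com/web-app:1.4.2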

Because of this consistency, bugs that are observed in production are easily reproduced in an engineer’s local development environment, and fixes or changes can be confidently tested locally and replicated to production without resorting to risky “test in production” exercises.

But this is just the beginning.

Without containers, preparing a new server for use is a time-consuming process. An engineer would have to spend many hours building up the tiers for each server, starting from the “Operating System” tier.

While there are tools that automate parts of this, it is still a complicated, largely manual and error-prone exercise. Further, because it takes so long to deploy additional capacity this way, the typical approach is to keep spare servers on operational stand-by, incurring additional operating cost.

With containers, it takes mere minutes to have a new server up and running because there is minimal manual configuration involved. Whatever manual configuration is necessary is captured and tested once inside a container, then replicated. This approach vastly reduces the time, effort and risk involved in provisioning additional servers to handle an influx of web traffic.
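
As a rough sketch of what this looks like with Docker (again with made-up names, and assuming the application listens on port 8000 inside its container), bringing the application up on a freshly provisioned server is essentially:

# Fetch the tested image from the registry
docker pull registry.example.com/web-app:1.4.2

# Run it in the background, mapping the server’s port 80 to the application’s port
docker run -d -p 80:8000 registry.example.com/web-app:1.4.2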

We’ve been using containers for local development and testing for over a year now, but it is a big engineering milestone for us to have rolled them out in production, where our customers and sidekicks can directly benefit from them.