Introduction to Linux Containers – Updated 2014-08-20

For the last 7 years my main area of Systems Engineering and Administration has been the hypervisor virtualization route used by VMware (starting way back with ESX 2.5), plus a fleeting exposure to Hyper-V, of which I am now fully cured.

The closest I had been to a “container like” environment was years of building chrooted environments for BIND and httpd, so I had not had much exposure to Linux containers until I started looking for a better way to host Linux environments: faster, more reliable, more densely packed, and free (or very low cost compared to ESX licensing). That’s when, after analysing what was available in the marketplace, I discovered containers and in particular OpenVZ, LXC and Virtuozzo.

So what are containers?

In a nutshell, a Linux container is a copy of a Linux environment located in a filesystem, “like” a chroot or jail environment but built on Linux namespaces: it runs its own init process, with a separate process space, separate filesystem and separate network stack, all virtualized by the root OS running on the hardware. Containers are also available for Windows Server (2003, 2008 and 2008 R2 are the host OS versions commonly used).
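You can get a rough feel for the namespace mechanics with util-linux’s `unshare` tool. This is just an illustration of the kernel feature containers build on, not how OpenVZ itself is configured, and it needs root:

```shell
# Launch a shell in new PID and mount namespaces (requires root).
# --fork makes unshare fork before exec, so the child shell becomes
# PID 1 of the new namespace; --mount-proc remounts /proc so that
# "ps" only sees processes inside this namespace.
sudo unshare --fork --pid --mount-proc /bin/bash

# Inside the new namespace, the shell is PID 1 and "ps" shows only
# the processes started here - much like the view inside a container:
ps aux
```

A full container adds its own filesystem, network stack and init on top of this same mechanism.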


(Diagram stolen from Linux Journal)

Containers differ significantly from a VMware ESXi environment, which virtualizes both the hardware and the OS platform and so lets you run totally different guest operating systems. In a container environment you run a copy of either the underlying Windows environment or a Linux environment, but both cannot exist on the same host. Under Linux you can run different flavours of Linux, like Ubuntu and CentOS, but they all share the host’s kernel.

As a result there is a significant speed increase over ESXi, and a big plus is that you can reconfigure the CPU scheduling, disk and memory of a running system without stopping it or causing a glitch. In ESXi you typically need to shut down the VM to add more CPU and memory (unless hot-add is enabled and the guest supports it); not so in a container! Compared to a virtual machine, the overhead of a container is low. Containers start fast (seconds), as there is no boot process, and could conceivably launch on demand as requests come in, resulting in zero idle memory and CPU overhead. A container running systemd can have as little as 5MB of system memory overhead and nearly zero CPU consumption, and provisioning new containers takes seconds.

Inside a Container

When you SSH into a container you see a fully functional, complete Linux environment, with only the container’s own process space showing in a “ps” command. If you perform a “shutdown -r now”, only your process space reboots! And a reboot in a container takes seconds at worst. All other processes running on the server, in both other containers and the root OS, are invisible, as are their filesystems.

The startup speed is achieved because the root OS scheduler is already running and just needs to start a copy of the container’s init process, which in turn starts all the initial applications. No boot process is required, as the root OS has already performed it and is running 🙂


When you SSH into the root OS you can see all the containers with “ps”, “top” etc., and you can control them with the appropriate command-line tool, such as vzctl in the OpenVZ implementation. One of the neat things you can do to a container is add RAM and CPUs on the fly. If you add more CPUs to a container the effect is immediate: all the cores are already allocated to the root OS, so the scheduler simply starts scheduling the container’s processes across the extra cores on its next cycle. Allocate 2 CPU cores to a container, up it to 4, and “top” inside the container immediately reports the extra CPUs in its output. No downtime to allocate CPUs on the fly, unlike VMware or Hyper-V 🙂
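With OpenVZ’s vzctl, the on-the-fly changes described above look something like this (container ID 101 is a hypothetical example, and `--ram`/`--swap` assume a VSwap-capable OpenVZ kernel):

```shell
# Give the running container two more CPU cores; takes effect on
# the scheduler's next cycle, no container restart needed.
vzctl set 101 --cpus 4 --save

# Raise RAM to 4G on the fly (VSwap kernels; older kernels use the
# user-beancounter parameters such as --privvmpages instead).
vzctl set 101 --ram 4G --swap 1G --save

# Check what the container now sees.
vzctl exec 101 grep -c ^processor /proc/cpuinfo
```

The `--save` flag persists the change to the container’s config file as well as applying it live.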

The main commercial player in the container world is Parallels. They are also one of the largest contributors to Linux kernel development, as it lets them get their code in at the kernel level rather than as patches. OpenVZ is their public container offering. There are numerous players in the container space who leverage the work done by Parallels, so some research into your true requirements will be needed to determine which container offering is right for you.

Update: December 2013 – I am now testing Proxmox VE, which offers KVM and OpenVZ virtual environments in one package.

Containers can also be migrated between container servers, just like ESXi VMs can be migrated between hosts in vCenter, but without any licensing issues! At a technical level, a migration is an rsync of the filesystem (with the kernel tracking changes) plus a copy of the memory pages (again with change tracking), then a final pause of the container’s processes to sync the changed blocks and pages before the scheduler starts the processes on the new host and performs a network MAC handover so traffic flows to the running container in its new home.
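On OpenVZ, the migration sequence above is driven by the vzmigrate tool (the destination hostname here is a placeholder; SSH key access between the hosts is assumed):

```shell
# Live-migrate container 101 to another OpenVZ host. Under the hood
# this rsyncs the filesystem, copies the memory pages, then briefly
# freezes the container to sync the final changes before resuming
# it on the destination and handing over the network.
vzmigrate --online desthost.example.com 101
```

Without `--online` the container is stopped, copied, and restarted on the destination instead.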

Disk Space

The disk image for a container is a file, just like VMware or Hyper-V. The file sits in the host’s filesystem and is mounted into the host’s root filesystem, but the running container can only see the disk image as its filesystem, not the mount point below it (just like a chroot’d environment). To the host, the container’s disk image looks just like a mounted filesystem, although it does not show up in the “df” command. There is a distinct advantage to this from a system administrator’s point of view: you can scan the OS filesystems of ALL containers to find malware and perform admin tasks without having to enter each container environment separately. Expanding the disk is just a case of adding space to the disk image file and then using the standard Linux tools to resize the mounted filesystem.
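In OpenVZ, growing a container’s disk comes down to a single vzctl call (the 20G/22G soft/hard limits are example values; on ploop-based layouts vzctl grows the image file and resizes the filesystem for you):

```shell
# Raise the disk quota to 20G (soft limit) / 22G (hard limit)
# while the container keeps running.
vzctl set 101 --diskspace 20G:22G --save

# Verify the new size from inside the container.
vzctl exec 101 df -h /
```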


A very cool feature of containers is that the installed software base can be shared from the host using a new variant of the traditional symbolic link. If a file from a templated package is modified by a container, a copy is made and placed into that container’s filesystem, replacing the virtualized symbolic link. As a result, HUGE disk space savings can be made when common packages are shared: if httpd is installed in 40 containers, only 1 install is required at the root level.


To date the only issue I have found with containers is iptables and connection tracking. Turning connection tracking on at the host OS level takes a large amount of host resources to track every connection for all containers, so it is turned off on the host by default in the OpenVZ implementation. As a result, most OUTPUT rules in iptables are unavailable, as you cannot match the return packets without connection tracking 😦
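If you do need stateful rules inside one particular container, OpenVZ lets you grant the conntrack iptables modules per container rather than enabling tracking globally (the module list below is an example; expect the host-side resource cost described above to apply to that container’s connections):

```shell
# Allow container 101 to load connection-tracking iptables modules.
# The container must be restarted for the change to take effect.
vzctl set 101 --iptables "iptable_filter ipt_state ip_conntrack" --save
vzctl restart 101
```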

Also of note is the inability to use SELinux; it must be disabled to use containers. I don’t use SELinux so it’s no big loss to me, but I am sure there are some implementations that need it. For security I use a FortiGate firewall in an HA cluster to block unwanted traffic and verify allowed traffic.
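Disabling SELinux on the host node is the usual two-step dance (run as root; the `sed` edit is one way to make the change, editing the file by hand works just as well):

```shell
# Drop to permissive mode immediately (lasts until reboot only).
setenforce 0

# Persist the change across reboots by setting SELINUX=disabled
# in /etc/selinux/config, then reboot for a full disable.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
```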

Container Density compared to ESXi

A container’s footprint is significantly smaller than an ESXi VM’s, and thanks to the performance gains of the container environment, the number of containers you can run on a given bit of hardware is significantly higher than what can be run in an equivalent VMware environment.

Who uses Containers?

I work for a cutting-edge boutique cloud hosting company that uses containers extensively to host all the common LAMP environments as well as hundreds of Windows 2003 and 2008 environments (using Virtuozzo Containers). It’s logical that most hosting providers in the Linux space would be doing the same, as the scaling and density make it the logical choice. With a heavy dependence on 99.99% uptime, billing, management and a true cloud service that’s infinitely scalable, it’s the obvious choice. For many non-hosting commercial businesses ESXi has the market leadership, and the container model does not quite work. But with containerized applications about to launch (see “What is Docker”), businesses could technically move away from hosting VMs to hosting apps. Look out, VMware!

Where to for ESXi?

I’m going to make a bold prediction and say that within 2 years (by 2015) VMware will have a container-based ESXi offering targeted directly at cloud service providers and featuring Docker support. The environment will most likely be CloudLinux or a CentOS/Red Hat derivative. Let’s see what comes of it.

Other Container Environments

The other container-based offerings include LXC, which is not classed as production-ready (as I write this) and uses cgroups (see the Linux Cgroups article for more details), and Linux-VServer, which is based on a chroot jail model and provides more of a security context.

The closest contender to OpenVZ would be CloudLinux, which provides a commercial product offering. It’s also offered on Amazon Web Services, so it’s by far the most stable alternative. Parallels, meanwhile, remains the leader in container hosting for web hosts with its commercial Virtuozzo line.

