Emerging technologies tend to arrive in waves of hype, and Containers are currently surfing the Point Break of those waves. They made an explosive entrance, and they are here to stay. That entrance happened a couple of years ago. So why am I still talking about it?

Because certain questions still linger: What is the future of Containers and VMs? How long have Containers been around?

And for beginners to keep up, let us start with the basics: What are Containers?

We are going to answer these three questions now, in reverse order, of course.

To understand Containers, we first need to understand virtualization. When you emulate a computer on top of a real computer using hypervisor software that abstracts the hardware, it is known as server virtualization. A single powerful physical machine can then run multiple virtual machines, squeezing the maximum out of its computing capacity.


This means that every VM needs its own kernel, processing space, network interface, IP scheme, file system, and a hundred other things. That is not a bad thing; it is still the go-to technology. But why depend on a guest OS for every VM?

Imagine that, instead of running multiple virtual machines on a single piece of physical hardware, you could run an application and its associated data in a confined user space (the limited memory a process is allowed to access) on a single kernel. What you just imagined is called a Container.

What is Containerization?

Containerization is OS-level virtualization used to deploy and run distributed applications without launching a separate virtual machine for each application.

Answering our second question, "How long has it been around?", takes us back more than three decades, to 1982, when a strangely named OS-level virtualization mechanism called chroot appeared, letting a process run against a virtualized copy of the system's root file system. Nobody beyond a niche group of virtualization fans understood what it meant then. 23 years later, Solaris released the first containerized applications based on chroot; they even called their Containers "Chroots on Steroids". Fast forward to 2008, and Linux Containers (LXC) became a more refined version of this technology, using the cgroups functionality in the Linux kernel.

This point in the timeline deserves a closer look, because it set off the series of events that led to the current state of Containers. What did LXC get right that eventually led to Docker being built on top of it? Containers succeeded because Docker succeeded, and Docker succeeded because LXC succeeded.

So, to proceed further, we need to understand LXC. Let us go back to 2008!

LXC uses namespaces and cgroups. Remember how each VM thinks it is the only one running on the hardware? Similarly, namespaces give a group of processes the illusion of being the only processes in the system: they limit each process's view of various kernel resources. Six namespaces are currently completed and implemented: user (UID and GID), IPC (interprocess communication), network, PID (process ID), mount (file system), and UTS (host and domain name).
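You can see these namespaces from an ordinary shell, since every process's namespace memberships are exposed under /proc. A quick, unprivileged peek (assuming a Linux system):

```shell
# List the namespaces the current shell process belongs to.
# Each entry is a symlink naming a namespace type and its inode ID;
# two processes in the same namespace show the same ID.
ls /proc/$$/ns
```

On a typical kernel this lists entries such as ipc, mnt, net, pid, user, and uts, matching the six types above; tools like unshare(1) start a process in fresh namespaces of its own.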

While namespaces limit what a process can see, cgroups (control groups) provide a unified interface for limiting how much of the kernel's resources (CPU, memory, I/O) a group of processes can use. With these two functionalities, kernel resources are isolated and metered for each group of processes. Apart from these, the Union File System is one of the core functionalities that make Containers lightweight.
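cgroup membership is likewise visible from an unprivileged shell (assuming Linux; the exact format differs between cgroup v1 and v2):

```shell
# Show which control groups the current process belongs to.
# cgroup v2 prints a single "0::/..." line; v1 prints one line
# per controller, e.g. "4:memory:/...".
cat /proc/self/cgroup
```

Creating a new cgroup and attaching limits to it (for example, writing to memory.max under /sys/fs/cgroup on v2) requires root; that bookkeeping is exactly what LXC and Docker automate.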

It is basically a copy-on-write file system. When multiple callers access the same resource, the resource is not copied and handed to each caller; it stays where it is, and each caller gets a pointer for reading it. Only when a modification needs to happen is a copy made, and the changes are applied to that copy, leaving the original untouched.

Only the changes are stacked as delta images on top of the base image (the minimal viable set of Linux directories and utilities), together forming a single file system. This keeps the layers cleanly separated and avoids the creation of duplicates.
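OverlayFS, one implementation of this idea and the default storage driver in modern Docker, makes the layering concrete. A minimal sketch, with arbitrary directory names; the union mount itself needs root, so it is shown commented out:

```shell
# Pieces of an overlay mount: a read-only lower (base) layer,
# a writable upper (delta) layer, a work dir, and a merged mount point.
mkdir -p /tmp/ovl/lower /tmp/ovl/upper /tmp/ovl/work /tmp/ovl/merged
echo "base file" > /tmp/ovl/lower/base.txt

# The actual union mount requires root; on a real system:
#   sudo mount -t overlay overlay \
#     -o lowerdir=/tmp/ovl/lower,upperdir=/tmp/ovl/upper,workdir=/tmp/ovl/work \
#     /tmp/ovl/merged
# Editing merged/base.txt would then copy the file up into upper/
# (copy-on-write), while lower/base.txt stays untouched.
cat /tmp/ovl/lower/base.txt
```

Docker image layers map onto exactly this structure: the base image is a lower layer, and each change sits in a layer above it.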

Even though these features had existed for a long time, not much happened until recently. Why? Any technology needs to become accessible before it can pick up steam.

Docker did this for Containers. It is the software that performs the whole containerization process. It made building a Container easy and running one fast, and it gathered a huge community that published publicly available images used by many. "Many" is an oversimplification, considering 12 billion images were pulled (Docker jargon for downloaded) last year.

So, a user writes a Dockerfile, which is used to build a Docker image containing the application and its dependencies. These instructions are communicated to the Docker daemon through the Docker client. Users can also upload an image to a registry or download a publicly available one. The result packs the application and its dependencies into an isolated container, eliminating the overhead you would incur when starting and maintaining a VM.
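To make the workflow concrete, here is a minimal, hypothetical example (the image name and script are placeholders, not from this post). A Dockerfile describing an image:

```dockerfile
# Base image layer: a minimal Linux user space.
FROM alpine:3.8

# Add the application; each instruction becomes a new image layer.
COPY app.sh /usr/local/bin/app.sh
RUN chmod +x /usr/local/bin/app.sh

# What the container runs when started.
CMD ["/usr/local/bin/app.sh"]
```

The client then sends the build context to the daemon with `docker build -t myapp .`, and `docker run --rm myapp` starts a container from the resulting image; `docker push` and `docker pull` move images to and from a registry.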

This leads us to the question I put forth at the beginning of this blog: What is the future of Containers and VMs?

Containers are preferred for instant boot, modularity, portability, and running multiple copies of a single app. Stripped of the jargon, that means: applications share the same OS kernel, which removes the delay of booting a guest OS. By modular, I mean the microservice architecture that containerization enables: an application can be split into modules by function, and each module can be created separately and instantly. And base images can be distributed to different machines by pulling them from a registry, which makes Containers portable.

Where do VMs win, then? Basically in every other case. Containers work only as long as the applications run on the same OS. What if a user needs applications that run on different operating systems? Virtual machines provide a reliable solution. They also provide better security. Container sprawl creates its own problems: managing numerous micro-segments can become a burden. And since a Container is a sandbox, adding other dependencies over time is a cause for concern. These pain points do not exist with VMs. None of this means VMs beat Containers, nor that Containers will wipe out VMs. They are not even competing against each other.

The most efficient and most popular strategy is a physical machine running multiple VMs, with each VM running multiple Containers. There is a reason this Containers-on-VMs-on-physical-machine layering is used instead of jumping face first into Containers: users should optimize their computing resources while keeping the freedom to choose how they use those resources.

Containers solve problems posed by VMs and push the optimization of computing environments further.



Virtualization vs Containerization – Past, Present, and Future