The data center has taken many forms, starting with physical machines, then moving to virtualization and cloud hosting with services such as IaaS, PaaS and SaaS, and finally to containers. Containers are not new: developers have used Linux containers since 2008 because of their numerous advantages as a virtualization technology. Recognizing the efficiency of Linux containers, Microsoft worked with Docker to create Windows Containers, which were released in October 2016 alongside Windows Server 2016.

What are Containers?


Containers are lightweight, isolated, transportable operating environments that consume minimal resources. Containerization refers to encapsulating an application together with its dedicated environment so that it can be deployed and run anywhere. Each container has its own view of the operating system, registry, file system, namespaces and so on, completely isolated from other containers.


Image Source: http://events.linuxfoundation.org


Containers and Virtual Machines:

When thinking of containers, people often assume they are the same as virtual machines, but this is not true. Containers and virtual machines share certain similarities and also have important differences, and each has its own use cases. Containers can even be used together with virtual machines.

Virtual machines:

Here, virtualization happens at the hardware level, reducing the under-utilization of physical resources. A bare-metal hypervisor acts as a layer between the host's resources and the operating systems. Each virtual machine can therefore have its own operating system, memory and virtual hardware, and the host OS and guest OS can be different. This is known as server virtualization, and it provides vast benefits through the consolidation of applications on a single host.

Containers:

Containers perform operating-system-level virtualization, i.e. they run multiple applications on a single OS. All the applications share the host kernel, and often binaries and libraries as well. Many engines can create and run containers, but Docker is the one that has gained the most popularity. Containers can start in seconds: since they have private memory but share the host operating system, they do not require a separate boot, which is not the case with virtual machines. A VM has its own in-memory footprint and its own OS, so a considerable amount of time is spent booting the guest OS.
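The fast startup is easy to observe with Docker. A minimal sketch, assuming Docker is installed and using the lightweight `alpine` image (both are illustrative choices, not prescribed here):

```shell
# Time a full container lifecycle: create, run a command, and remove.
# There is no guest OS to boot, so this typically completes in about a second,
# compared to the tens of seconds a VM spends booting its operating system.
time docker run --rm alpine echo "hello from a container"
```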

Types of Containers:

Based on the use case, containers can be classified into two types: system containers and application containers.

System containers are employed when different services and processes should run on a single host. They containerize a full operating system and can be thought of as virtual machines, but they do not employ a hypervisor. Container technologies like LXC (Linux Containers), Solaris Zones, BSD jails, etc. are suitable for creating system containers.

In contrast to system containers, application containers are used to package and run a single service. Many container technologies focus on application containers, Docker being the most important among them. The advantage of application containers is that each application gets its own container, so the developers, testers and operations teams in an organization can work with it independently.

Windows Containers:

There are two runtimes for Windows Containers – Windows Server containers and Hyper-V containers.

A Windows Server container is similar to a Linux container in that it shares the kernel with the host. These containers are used when the host OS and the applications running on it fall within the same trust boundary, which applies to multi-container applications where each container provides one service of a larger application.
Hyper-V containers provide a higher level of isolation, because each container is isolated from the underlying operating system by running inside a lightweight virtual machine. They have their own kernel and memory, so Hyper-V containers take a little more time to start than Windows Server containers.
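The isolation level is chosen at run time. A hedged sketch, assuming a Windows Server host with the Hyper-V role and Docker installed (the image tag is an illustrative example):

```shell
# Default on Windows Server: process isolation (a Windows Server container
# sharing the host kernel).
docker run --rm mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c echo process-isolated

# The same image as a Hyper-V container: it gets its own kernel inside a
# utility VM, at the cost of a slightly slower start.
docker run --rm --isolation=hyperv mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c echo hyperv-isolated
```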

Docker:

Docker provides tools and APIs to build, ship and run applications, packaging them in isolated environments called containers. This isolation lets a user run any number of containers without them interfering with one another. The Docker platform has a number of advantages: a host can run more containers than it could VMs, and containers can also be created inside a VM, allowing many applications to share a single host's resources.

The Docker Engine is the core of Docker. It runs on the host where the containers are created and can be interacted with through the Docker API or the Docker client. Docker is used to create images, containers and more.

A container image is the transportable template from which containers are created. It contains the files, registry settings and descriptors for the container, and images are shared among the development, testing and production teams. An image can be built from a Dockerfile, pulled from Docker Hub, or derived from another image that acts as a base image. Building images this way makes them easy to deploy; they start in a few seconds and use minimal resources. Images are built in layers, i.e. if someone wants to change the contents, only the changes are built as a new layer on top of the base image, and the changes a running container makes are captured in its writable sandbox layer.
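Layered builds can be sketched with a Dockerfile. The base image, file name and command below are illustrative assumptions, not taken from this article:

```dockerfile
# Each instruction produces one layer on top of the previous ones.
FROM python:3.12-slim            # base image layer
COPY app.py /app/app.py          # new layer containing only the copied file
CMD ["python", "/app/app.py"]    # metadata layer: the default command
```

Rebuilding after changing `app.py` reuses the cached base layer and rebuilds only the `COPY` layer and those after it, which is what keeps image builds and deployments fast.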

Docker uses a technology called namespaces to control which resources, files and network ports a container can see and interact with. Namespace isolation gives each container a virtualized view of the system containing only its required resources. Containers are therefore restricted from using, or even seeing, other resources; each container behaves as if it were the only application running on the host.
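Namespace isolation is easy to observe from inside a container. A sketch, again assuming Docker and the `alpine` image as illustrative choices:

```shell
# Inside its own PID namespace the container sees only its own processes,
# with the command itself running as PID 1; the host's processes are invisible.
docker run --rm alpine ps aux
```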

Container applications:

Containers are used mostly by developers and IT professionals. A developer builds an application into a container, which can be shared with other teams for testing and production purposes. If a bug is found, the developer fixes it and the container can be redeployed for testing; each of these changes becomes a new image layer built on top of the previous ones. Once testing is completed, the image can be pushed to the production environment and made available to customers.
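That workflow maps onto a handful of Docker commands. A hedged sketch; the image name, tags and registry below are hypothetical:

```shell
# Developer builds an image from the application's Dockerfile
docker build -t myapp:1.0 .

# Testers run exactly the same image
docker run --rm myapp:1.0

# After a bug fix, rebuild: a new image layered on the unchanged base
docker build -t myapp:1.1 .

# Once testing passes, tag and push the image to the production registry
docker tag myapp:1.1 registry.example.com/myapp:1.1
docker push registry.example.com/myapp:1.1
```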

Containers are iterative, which helps developers: they start instantly, have a small footprint, and are lightweight, standardized and secure.

Google, Twitter and IBM have already moved to containers to increase their efficiency. Hyper-V containers make it possible to run containers within VMs, giving each its own kernel for greater isolation. Ideally, a VM runs one service on a single OS; with Hyper-V containers, users can have one container per service, which leads to multiple containers in a VM. One disadvantage of containers is that they depend on the underlying OS. With VMs it is possible to have different guest OSes, i.e. a VM on a Windows host can run Linux inside it and vice versa.

A critical use of containers is to help DevOps with a simpler flow of applications from development to testing to production. But containers are now focusing on microservices as well. Microservices are small pieces of software that together build an application. Each is designed to run a specific service with all its required resources, and they are loosely coupled so that any one of them can be changed at any time. Using containers for microservices makes them more efficient because of the strong isolation containers provide.

