One of the major trends in IT is the transition away from a traditional disaster recovery model in favor of an IT resiliency model. The primary driver behind this transition is the idea that outages are both expensive and disruptive, so it is far better to build a resilient environment that is well equipped to prevent an outage than to rely on traditional backup and recovery to put everything back to normal after an outage has already occurred.

Although the logic driving this transition is sound, the actual implementation may be less so. The problem with the transition to resilient IT is that it can imply that backup and recovery have become completely obsolete. In reality, nothing could be further from the truth.


Even if an organization has implemented sufficient resiliency to prevent an outage from ever occurring, the potential for a data loss event still exists. Data might be lost, for example, as a result of a ransomware infection, human error, or any number of other catastrophic events.

This is not to dismiss the importance of resilient IT. Quite the contrary. When properly implemented, resilient IT can reduce costs, while also nearly eliminating the chances of an outage.

Resilient IT is largely achieved through the use of redundancy. An organization might, for instance, take advantage of redundant storage, failover clusters (in which virtual machines are made highly available), a secondary source of power, and quite possibly even a redundant datacenter. As previously mentioned, however, resiliency can usually prevent an outage from occurring, but it cannot prevent every type of data loss.


Because of this, IT pros must consider how to implement backup and recovery capabilities in a way that allows key business workloads to remain “always on”. After all, legacy solutions that take hours or days to complete a restoration simply are not acceptable in today’s highly dynamic IT environments.

One way to allow highly virtualized environments to recover from a data loss event, without taking mission-critical workloads offline for an extended period of time, is to use a backup and restoration solution that supports quick VM recovery.

Quick VM recovery does not refer to a backup and recovery solution that is marginally faster than the previous generation of backup and recovery products. Instead, the term refers to the ability to revert a virtual machine to a previous state almost instantly.

There are two technologies that make quick VM recovery possible – server virtualization and disk-based backup targets.

Quick recovery would not be possible from a tape-based backup. Part of the reason is that tape requires linear read/write operations and does not support random I/O. Although this limits the speed of recovering data from tape, there are other factors at work.

One of the reasons why the IT industry has largely transitioned to disk-based backups is that they allow for continuous data protection. Rather than data being backed up on a nightly basis as part of a single monolithic backup operation, backups are instead performed on a nearly continuous basis. A storage monitoring mechanism uses changed block tracking to keep track of which storage blocks have been modified since the most recent backup operation. Because the backup software copies only those modified blocks, continuous data protection solutions are far more efficient than legacy incremental backup solutions, which backed up any file that had been modified since the previous backup. In essence, continuous data protection backs up the parts of a file that have changed, rather than attempting to back up the entire file. The sketch below illustrates the idea.
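
The following is a minimal, conceptual sketch of a changed-block-tracking style incremental backup. It assumes the virtual disk is exposed as a flat file of fixed-size blocks and uses hashing to detect changes; all names here are hypothetical, and real products track changed blocks at the hypervisor layer rather than rescanning the disk.

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # 64 KB blocks (illustrative size, not a product default)

def read_blocks(disk_path):
    """Yield (block_index, data) for every block in the virtual disk file."""
    with open(disk_path, "rb") as disk:
        index = 0
        while True:
            data = disk.read(BLOCK_SIZE)
            if not data:
                break
            yield index, data
            index += 1

def incremental_backup(disk_path, known_hashes, backup_target):
    """Copy only the blocks whose content changed since the previous backup run."""
    changed = {}
    for index, data in read_blocks(disk_path):
        digest = hashlib.sha256(data).hexdigest()
        if known_hashes.get(index) != digest:
            changed[index] = data          # block is new or modified
            known_hashes[index] = digest   # remember it for the next run
    backup_target.append(changed)          # store this run's delta of changed blocks
    return len(changed)
```

Each run appends a small delta (only the modified blocks) to the backup target, which is what keeps the per-backup overhead so low compared with copying whole files.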

This approach eliminates most of the overhead associated with the backup process, and it also lays the groundwork for quick recoveries. The backup target (the storage array that is being used for disk-based backups) contains full copies of each virtual machine, plus copies of all of the storage blocks that have been modified over time. This allows very granular point-in-time recoveries to be performed, and when combined with server virtualization, the way the data is stored also allows for a quick recovery.
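
A minimal sketch of how a point-in-time image can be assembled from that data, assuming the backup target holds a full base copy plus the per-run block deltas from the previous sketch. The function names and the dict-of-blocks representation are illustrative only.

```python
def assemble_point_in_time(base_image: dict, deltas: list, restore_point: int) -> dict:
    """Return the disk contents as of the chosen restore point.

    base_image    -- {block_index: data} for the initial full backup
    deltas        -- list of {block_index: data} dicts, one per backup run
    restore_point -- how many deltas to apply (0 = the full backup itself)
    """
    image = dict(base_image)                 # start from the full copy
    for delta in deltas[:restore_point]:
        image.update(delta)                  # later blocks overwrite earlier ones
    return image
```

Because each delta is small and the blocks are individually addressable on disk, any restore point can be composed without reading the entire backup chain sequentially, which is exactly what tape cannot do.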

The idea behind a quick recovery is that, because the data is stored on a disk-based backup, it can be assembled into a virtual hard disk, and a virtual machine can be attached to that virtual hard disk and run directly from the backup server. This point-in-time virtual machine copy can be brought online almost instantly, and snapshots can protect the backup against unwanted modification.
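
A minimal sketch of how the backup copy can be run without being modified, assuming a copy-on-write overlay plays the role that a snapshot plays in practice: reads fall through to the backup image, while writes are redirected to a separate overlay. The class and method names are hypothetical.

```python
class OverlayDisk:
    """Present a writable view of a read-only backup image (copy-on-write)."""

    def __init__(self, backup_image: dict):
        self.backup_image = backup_image  # the point-in-time copy; never written to
        self.overlay = {}                 # writes made while running from the backup

    def read_block(self, index: int) -> bytes:
        # Prefer the overlay; fall back to the untouched backup image.
        return self.overlay.get(index, self.backup_image.get(index, b""))

    def write_block(self, index: int, data: bytes) -> None:
        # All writes land in the overlay, so the backup copy stays pristine.
        self.overlay[index] = data
```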

Once the virtual machine has been brought online on the backup server, the workload becomes available for normal use. Meanwhile, a traditional restoration takes place in the background. Once the restoration completes, any write operations that have taken place since the beginning of the recovery process are merged into the newly restored VM, and then users are redirected from the backup copy of the VM to the primary VM copy.
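
Continuing the same illustrative model, the failback step amounts to replaying the writes captured in the overlay onto the freshly restored disk before users are redirected. This is a conceptual sketch, not any vendor's actual merge procedure.

```python
def merge_and_redirect(restored_image: dict, overlay: dict) -> dict:
    """Apply writes made during quick recovery to the newly restored primary disk."""
    restored_image.update(overlay)   # overlay blocks are the newest data
    return restored_image            # primary VM now reflects all changes; redirect users
```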

One of the nice things about quick VM recovery technology is that it is largely hypervisor agnostic. Quick VM recovery solutions exist for both VMware and Hyper-V.
