One of the more underappreciated features that Microsoft introduced in Windows Server 2016 Hyper-V is Resilient Change Tracking. To understand what this feature does, and why it is important, you need to know a little bit about backup.
Decades ago, monolithic server backups were the norm. Organizations typically set aside a nightly backup window during which all of the network servers would be backed up. Because backups were not yet a mature technology, it was common practice to create full backups every night.
Over time, this approach to backing up data became completely impractical. Data has a tendency to accumulate over time, and if left unchecked, will grow to the point that it is impossible to back everything up within a finite backup window.
Backup vendors developed a number of technologies to help cope with this problem. Some solutions were based on faster backup hardware. Other solutions introduced new backup architectures, some of which are still in use today. One of these architectures was the incremental backup.
Incremental backups work by backing up only what has changed since the last backup. Early incremental backup solutions worked at the file level, backing up newly created or modified files. Newer solutions work at the storage block level instead. The idea is that there is no need to back up a huge file if only a small part of it has been modified; it is much more efficient to back up just the storage blocks that contain the modification.
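To make the block-level idea concrete, here is a minimal sketch in Python. It compares per-block hashes against a baseline and collects only the blocks that differ. This is purely illustrative: real changed block tracking (including Hyper-V's) records writes as they happen rather than re-hashing the file, and the block size and function names here are made up for the example.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size, for illustration only

def block_hashes(data: bytes) -> list:
    """Hash each fixed-size block of a file's contents."""
    return [
        hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    ]

def changed_blocks(old_hashes, new_data):
    """Return (index, block) pairs that differ from the previous backup."""
    changed = []
    for i, h in enumerate(block_hashes(new_data)):
        if i >= len(old_hashes) or old_hashes[i] != h:
            changed.append((i, new_data[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]))
    return changed

# A three-block "file" in which only the middle block is modified.
original = b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE + b"C" * BLOCK_SIZE
baseline = block_hashes(original)
modified = b"A" * BLOCK_SIZE + b"X" * BLOCK_SIZE + b"C" * BLOCK_SIZE

delta = changed_blocks(baseline, modified)
print([i for i, _ in delta])  # -> [1]: only one block needs to be backed up
```

Even though two-thirds of the file is unchanged, the incremental backup only has to copy a single block, which is the whole point of the approach.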
Although incremental backups have been a widely used backup technology for quite some time, Hyper-V introduced some new challenges that legacy incremental backup solutions were not equipped to handle.
Hyper-V posed a problem for legacy backups because Hyper-V virtual machines make use of virtual hard disk files in either VHD or VHDX format. Each of these virtual hard disk files contains its own file system that is completely independent of the host.
So with this in mind, imagine what would happen if you ran a non-Hyper-V-aware incremental backup on a Hyper-V host. The backup process would probably be able to identify changed blocks belonging to virtual hard disk files, but a simple block-level backup would be inadequate.
As you probably already know, backups of Microsoft servers generally use the Volume Shadow Copy Service (VSS). Blindly backing up changed blocks at the host level would result in a backup that is completely unaware of the virtual machine contents. If you want to create a reliable backup of Windows VMs, then the backup application needs to be able to look inside the virtual machines and use the Volume Shadow Copy Service to create a VSS snapshot.
So here is where things get interesting. Microsoft did create a VSS writer for Hyper-V. However, they did not initially create an extensible mechanism for performing changed block tracking within a VM. This meant that any vendor who wanted to perform Hyper-V backups had to create their own filter driver for Hyper-V virtual machines. The problem with this is that there was no consistency among filter drivers. Each backup vendor’s filter driver was different, and a poorly written filter driver could potentially crash the Hyper-V host.
In Windows Server 2016, Microsoft introduced Hyper-V Resilient Change Tracking. This is a fancy way of saying that Microsoft finally created their own filter driver for changed block tracking within Hyper-V virtual machines. This new driver is something that any backup vendor can latch onto.
Resilient Change Tracking is enabled by default in Windows Server 2016 Hyper-V. However, it is only used if the VM configuration version is set to version 8. Furthermore, if the Hyper-V deployment is clustered, then all of the nodes that participate in the cluster must be running Windows Server 2016.
One of the big challenges that Microsoft faced when developing Resilient Change Tracking was making the changed block data truly resilient against power failures and other types of failure. To address this challenge, Microsoft made the decision to store changed block data in three different locations.
The first location in which changed block data is stored is an in-memory bitmap. This in-memory bitmap contains the most detailed and granular changed block data.
The second location in which changed block data is stored is a .RCT file. The .RCT file is not as granular as the in-memory bitmap, so it is used primarily as a fallback mechanism. For instance, if a VM is live migrated to another host, then the .RCT file will go with it and be used as the basis for tracking changed block data.
The third and final location where changed block data is stored is an .MRT file. An .MRT file is less granular than even a .RCT file, but provides a fallback option that can be used in the event that the .RCT file becomes corrupted.
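The trade-off behind the three locations can be sketched in a few lines of Python. Everything here is hypothetical (the class, region sizes, and method names are invented for illustration, not Microsoft's implementation): the idea is simply that the fine-grained in-memory map yields the smallest possible backup, while the coarser persisted maps survive a failure and err on the side of marking extra blocks as changed, which is safe but less efficient.

```python
class TieredChangeTracker:
    """Illustrative sketch of tiered change tracking. Coarser tiers
    overestimate the set of changed blocks, trading backup size for
    resilience. Names and region sizes are hypothetical."""

    def __init__(self, rct_region=4, mrt_region=16):
        self.fine = set()   # in-memory bitmap: exact dirty blocks
        self.rct = set()    # persisted, coarser (regions of 4 blocks)
        self.mrt = set()    # persisted, coarsest (regions of 16 blocks)
        self.rct_region = rct_region
        self.mrt_region = mrt_region

    def record_write(self, block):
        """Record a guest write in all three tiers."""
        self.fine.add(block)
        self.rct.add(block // self.rct_region)
        self.mrt.add(block // self.mrt_region)

    def dirty_blocks(self, level="fine"):
        """Expand whichever map survived into the blocks to back up."""
        if level == "fine":
            return set(self.fine)
        size = self.rct_region if level == "rct" else self.mrt_region
        regions = self.rct if level == "rct" else self.mrt
        return {r * size + i for r in regions for i in range(size)}

tracker = TieredChangeTracker()
tracker.record_write(5)
tracker.record_write(21)

print(sorted(tracker.dirty_blocks("fine")))  # exactly 2 blocks: [5, 21]
print(sorted(tracker.dirty_blocks("rct")))   # 8 blocks: [4..7, 20..23]
print(len(tracker.dirty_blocks("mrt")))      # 32 blocks: coarsest estimate
```

After a crash that wipes the in-memory bitmap, falling back to the coarser persisted map means the next incremental backup copies more blocks than strictly necessary, but it never misses a change, which is exactly the kind of resilience the feature's name refers to.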