Besides high availability, the other tremendous benefit of today's virtualization hypervisors is an effective means of load balancing and spreading resources across hosts. Microsoft Windows Server Hyper-V Failover Clusters provide a powerful platform not only for high availability but also for resource scheduling. When determining where virtual machine resources will live, and on which host they run, there are a number of mechanisms that can be used to accomplish this effectively.

Table of Contents

  1. Hyper-V Cluster Load Balancing Tools
  2. System Center Virtual Machine Manager (SCVMM)
  3. Virtual Machine Load Balancing
  4. Hyper-V Preferred Owners
  5. Hyper-V Possible Owners
  6. Hyper-V Anti-Affinity
  7. Concluding Thoughts

Quick Bites:

  1. Hyper-V Failover Clusters offer both high availability and resource balancing benefits
  2. Control cluster load balancing through tools like System Center Virtual Machine Manager (SCVMM) and features like Dynamic Optimization
  3. Virtual Machine Load Balancing in Windows Server 2016 uses metrics like CPU utilization to migrate VMs for optimal resource allocation
  4. Hyper-V Preferred Owners and Possible Owners settings prioritize host selection during failover scenarios
  5. Anti-Affinity rules prevent certain VMs, like domain controllers, from being on the same host for increased availability

In this post, we will take a look at how resource load balancing works in Windows Server Hyper-V Failover Clusters. We will also look at several Hyper-V tools that allow configuring preferences for how host load balancing takes place and how Hyper-V virtual machines are placed in the Hyper-V cluster.

Hyper-V Cluster Load Balancing Tools

There are a wide variety of ways to control Hyper-V cluster load balancing. This can be done in a more automated fashion using paid tools, or by manually configuring resource allocation in Hyper-V clusters so that virtual machines are located on specific hosts for particular use cases.

We will take a more detailed look at the following ways to control cluster load balancing:

  • System Center Virtual Machine Manager (SCVMM)
  • Virtual Machine Load Balancing
  • Preferred Owners
  • Possible Owners
  • Anti-Affinity Rules

Using the above mechanisms and tools, Hyper-V administrators can effectively control the distribution of resources and load balancing across Hyper-V clusters.

System Center Virtual Machine Manager (SCVMM)

System Center Virtual Machine Manager is, in the Hyper-V world, roughly the equivalent of VMware's vCenter: a product that provides centralized, datacenter-level management of the Hyper-V environment. With this centralized management of Hyper-V clusters, administrators can ensure that virtual machine resources are balanced across the Windows Server Hyper-V cluster. SCVMM includes a built-in mechanism called Dynamic Optimization that automatically load balances Hyper-V virtual machine resources across the cluster.


Microsoft System Center Virtual Machine Manager can automatically balance Hyper-V resources

Dynamic Optimization performs this load balancing by taking advantage of Live Migration on Hyper-V host clusters. With Live Migration, Dynamic Optimization can move virtual machines from one host to another to balance out host cluster resources.

A few things to consider when looking at dynamic optimization:

  • Live Migration must be enabled on Hyper-V host clusters
  • It is configured at the host group level
  • The aggressiveness of the Dynamic Optimization migrations is configurable (by default, VMs are balanced every 10 minutes with medium aggressiveness)
  • Without Failover Clustering in place, simply setting up dynamic optimization on a host group has no effect
  • You must have two or more cluster nodes
  • You can perform ad-hoc optimization on-demand for individual host clusters by using the Optimize Hosts action (a PowerShell sketch follows the figure below)
  • Should not be used in conjunction with Virtual Machine Load Balancing found in Windows Server 2016


Hyper-V Dynamic Optimization with SCVMM
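The on-demand optimization mentioned above can also be triggered from the VMM PowerShell module rather than the console. The following is a minimal sketch, assuming the VMM console (and its PowerShell module) is installed, a VMM management server named vmm01.contoso.local, and a host cluster named HVCluster01; these names are examples only, and exact cmdlet parameters can vary between VMM versions, so treat this as a starting point rather than a definitive implementation.

  # Load the Virtual Machine Manager module (installed with the VMM console)
  Import-Module virtualmachinemanager

  # Connect to the VMM management server (example server name)
  Get-SCVMMServer -ComputerName "vmm01.contoso.local" | Out-Null

  # Retrieve the Hyper-V host cluster managed by VMM (example cluster name)
  $hostCluster = Get-SCVMHostCluster -Name "HVCluster01"

  # Trigger an ad-hoc Dynamic Optimization pass against the cluster
  Start-SCDynamicOptimization -VMHostCluster $hostCluster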

Virtual Machine Load Balancing

Virtual Machine Load Balancing, also referred to as node fairness, is a new feature in Windows Server 2016 that optimizes Hyper-V hosts in a Failover Cluster by identifying over-committed hosts and Live Migrating VMs from those hosts to underutilized hosts in the cluster.

A few items to note around Virtual Machine Load Balancing in Windows Server 2016:

  • Live Migration is utilized
  • Failover policies such as anti-affinity, fault domains, and others are honored
  • VM memory pressure and CPU utilization are some of the metrics used by VM Load Balancing to make migration decisions
  • The feature is customizable and can be run on-demand
  • Aggressiveness thresholds can be configured
  • Should not be used in conjunction with Dynamic Optimization in SCVMM


VM Load Balancing when a new Hyper-V host is added to the cluster (Image courtesy of Microsoft)
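Node fairness is exposed as two common properties on the cluster object, so it can also be reviewed or tuned directly from PowerShell. Below is a minimal sketch, assuming the FailoverClusters module is available and the commands are run on a node of a Windows Server 2016 (or later) cluster; the example sets balancing to run continuously with medium aggressiveness.

  # Requires the Failover Clustering PowerShell tools
  Import-Module FailoverClusters

  $cluster = Get-Cluster

  # AutoBalancerMode: 0 = disabled, 1 = balance only when a node joins, 2 = always (default)
  $cluster.AutoBalancerMode = 2

  # AutoBalancerLevel controls aggressiveness: 1 = low (default), 2 = medium, 3 = high
  $cluster.AutoBalancerLevel = 2

  # Review the current node fairness settings
  Get-Cluster | Format-List AutoBalancerMode, AutoBalancerLevel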

Hyper-V Preferred Owners

The Preferred Owners setting at the virtual machine level in Failover Cluster Manager essentially lets you set an affinity between a particular virtual machine and certain Hyper-V hosts for node failover scenarios. When roles are drained from a particular host, or if a host in the cluster crashes, the Preferred Owners setting is taken into account when deciding which host the virtual machine resource will be migrated to.

If an administrator manually initiates a Live Migration of a virtual machine and specifies the destination host, this setting is not considered. Microsoft describes the Hyper-V Preferred Owners functionality as essentially a priority list of how the cluster will migrate virtual machines during a failover. It also overrides the default behavior of selecting the host that is currently running the fewest virtual machines.


Hyper-V Preferred Owners allows giving priority to specific Hyper-V hosts during failover
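Preferred Owners can also be set with the Failover Clustering PowerShell module instead of Failover Cluster Manager. A minimal sketch, assuming a clustered VM role named SQLVM01 and example node names HV01 and HV02:

  # Set the preferred owners (in priority order) for the VM's cluster group
  Get-ClusterGroup -Name "SQLVM01" | Set-ClusterOwnerNode -Owners HV01, HV02

  # Verify the ordered preferred owner list
  Get-ClusterGroup -Name "SQLVM01" | Get-ClusterOwnerNode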

Hyper-V Possible Owners

Hyper-V Possible Owners provides another means of controlling load balancing during failover. By default, Hyper-V clusters consider all nodes as possible failover candidates to host virtual machines. However, there may be use cases where you never want a particular Hyper-V host to be considered as a failover target for a virtual machine. By setting the possible owners, you control which nodes are even considered during failover situations.


Setting advanced policies including Possible Owners in Hyper-V
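Possible Owners are configured on the virtual machine cluster resource rather than on the cluster group. A minimal sketch, assuming a VM role named SQLVM01 whose resource is named "Virtual Machine SQLVM01", and that the example node HV03 should never host it:

  # Restrict possible owners of the VM resource to HV01 and HV02 (HV03 is excluded)
  Get-ClusterResource -Name "Virtual Machine SQLVM01" | Set-ClusterOwnerNode -Owners HV01, HV02

  # Confirm which nodes are possible owners for the resource
  Get-ClusterResource -Name "Virtual Machine SQLVM01" | Get-ClusterOwnerNode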

Hyper-V Anti-Affinity

Another tool for controlling where virtual machine resources live in the Hyper-V cluster is anti-affinity rules. Anti-affinity is a mechanism for keeping certain virtual machines on separate hosts. A common use case for anti-affinity rules is keeping domain controller virtual machines on separate Hyper-V hosts, since you wouldn't want a single Hyper-V host failure to take your entire domain offline because all the DCs reside on that host.

Anti-affinity keeps the virtual machines from being automatically migrated to the same host as long as other hosts in the cluster are still available. However, if only a single Hyper-V host is available, high availability of the virtual machines takes precedence over the anti-affinity rule that would keep multiple domain controllers off the same host. In that case, the cluster disregards anti-affinity.

This functionality is controlled by the AntiAffinityClassNames property (a PowerShell example follows the list below). Anti-affinity affects the algorithm used to determine the destination node using the following methodology:

  • The preferred owners list is consulted first to select the next candidate node
  • Once a candidate node is selected, the anti-affinity rules are evaluated to confirm it is an allowed destination; if the candidate already hosts a VM in the same anti-affinity class, the cluster moves on to the next node
  • If the only available nodes are already hosting anti-affined groups, the Hyper-V cluster ignores anti-affinity and selects one of those nodes as the destination
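
As an example of the AntiAffinityClassNames property in practice, the following minimal sketch places two domain controller VM roles into the same anti-affinity class from PowerShell, assuming clustered roles named DC01 and DC02 and an arbitrary class name of "DomainControllers":

  # Build the string collection used by the AntiAffinityClassNames property
  $class = New-Object System.Collections.Specialized.StringCollection
  $class.Add("DomainControllers") | Out-Null

  # Assign both domain controller groups to the same anti-affinity class
  (Get-ClusterGroup -Name "DC01").AntiAffinityClassNames = $class
  (Get-ClusterGroup -Name "DC02").AntiAffinityClassNames = $class

  # Groups that share a class name are kept on separate nodes when possible
  Get-ClusterGroup | Select-Object Name, AntiAffinityClassNames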

Concluding Thoughts

Microsoft Windows Server Hyper-V clusters provide powerful mechanisms and tools for controlling load balancing and resource placement behavior in the cluster. These include System Center Virtual Machine Manager with its Dynamic Optimization feature, which automatically balances resources between cluster nodes, as well as Virtual Machine Load Balancing in Windows Server 2016 Hyper-V and the resource placement controls Preferred Owners, Possible Owners, and Anti-Affinity. By leveraging these tools, Hyper-V administrators can configure load balancing in a Hyper-V cluster to fit their specific use cases and virtual machine needs.
