Microsoft Windows Server 2016 Hyper-V networking is extremely powerful. The platform has matured greatly in the realm of network virtualization, steadily removing its dependence on the actual physical networking infrastructure, and it provides much more robust networking options than previous versions of Hyper-V. One of the simplified and more capable features within the Windows Server 2016 Hyper-V platform is converged networking.
The term “converged” appears in many product names and acronyms these days, such as hyper-converged infrastructure (HCI), so it is worth defining what it means in this context.
What is meant by converged networking in regards to Windows Server 2016 Hyper-V?
What benefits do administrators gain from implementing a converged network model on Windows Server 2016 Hyper-V hosts, and what are the requirements?
How is converged networking implemented?
How does this affect physical networking requirements moving forward?
What Are the Windows Server 2016 Hyper-V Converged Networking Requirements and Benefits?
Windows Server 2016 Hyper-V converged network interfaces allow exposing Remote Direct Memory Access (RDMA) through a host-partition virtual NIC (vNIC), so that host partition services can access RDMA on the same NICs that the Hyper-V guests are using for TCP/IP communication. Prior to the converged network design in Windows Server 2016, management (host partition) services making use of RDMA had to use dedicated, RDMA-enabled network interface cards; network cards backing Hyper-V virtual switches could not be used.
Windows Server 2012 architecture on the left and Windows Server 2016 architecture on the right (Image courtesy of Microsoft)
This limitation has been lifted with converged networking, which allows both types of workloads to share the same network interface card, carrying both management RDMA traffic and Hyper-V guest network traffic. Hyper-V does something interesting here: when you deploy a converged NIC with Windows Server 2016 Hyper-V hosts and virtual switches, RDMA services are exposed to host processes using RDMA over Converged Ethernet (RoCE), a network protocol that allows remote direct memory access over an Ethernet network. This is made possible by Switch Embedded Teaming, described below.
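RoCE generally depends on lossless Ethernet, which is delivered on the host side via Data Center Bridging (DCB) and Priority Flow Control (PFC). The following is a minimal sketch of that host-side configuration, assuming SMB Direct traffic tagged with 802.1p priority 3 and a physical adapter named NET01 (the names, priority, and bandwidth values are examples only, and the physical switch must be configured to match):

```powershell
# Tag SMB Direct (TCP port 445) traffic with 802.1p priority 3
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Reserve bandwidth for that priority with an ETS traffic class
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Enable Priority Flow Control for priority 3 only
Enable-NetQosFlowControl -Priority 3

# Apply the DCB/QoS settings to the physical adapter
Enable-NetAdapterQos -Name "NET01"
```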
Windows Server 2016 does not require a traditional NIC team to converge networks in a Hyper-V host. Instead, the physical NICs in the host are connected directly to the virtual switch using Switch Embedded Teaming (SET). SET is an alternative to NIC teaming that performs the teaming within the virtual switch itself, allowing networks to be converged at that layer. Note, however, that in Windows Server 2016 you can use a converged network configuration with or without Switch Embedded Teaming (SET).
SET characteristics and considerations:
- The network configuration of the physical NICs must be the same across the team for failover to work properly
- SET does not span hosts
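Once a SET-enabled switch exists, its team membership can be inspected from PowerShell. A quick sketch (the switch name `ConvergedSwitch` matches the example used later in this article):

```powershell
# List the physical adapters backing the SET team, plus teaming mode and load-balancing algorithm
Get-VMSwitchTeam -Name "ConvergedSwitch" |
    Format-List Name, TeamingMode, LoadBalancingAlgorithm, NetAdapterInterfaceDescription
```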
The SET technology enables converged RDMA, which is implemented through the following steps:
- Using two RDMA-enabled NICs, with Data Center Bridging (DCB) enabled on them (recommended)
- Convergence of the rNICs using a SET switch
- Connecting of virtual machines to the SET switch
- Assigning QoS rules by creating virtual NICs in the management OS connected to the SET switch
- Enabling RDMA on management virtual NICs
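The steps above can be sketched end to end in PowerShell. The switch name `SETswitch`, the adapter names `NIC1`/`NIC2`, and the vNIC names `SMB1`/`SMB2` are placeholders, assuming two RDMA-capable physical NICs in the host:

```powershell
# 1. Create a SET-enabled virtual switch over the two RDMA-capable NICs
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true

# 2. Create host (management OS) vNICs attached to the SET switch
Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "SETswitch"
Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "SETswitch"

# 3. Expose RDMA on the host vNICs (host vNICs appear as "vEthernet (<name>)" adapters)
Enable-NetAdapterRdma -Name "vEthernet (SMB1)","vEthernet (SMB2)"

# 4. Verify RDMA is enabled on the vNICs
Get-NetAdapterRdma | Format-Table Name, Enabled
```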
SET is compatible with the following networking technologies in Windows Server 2016:
- Data center bridging (DCB)
- Hyper-V Network Virtualization – NVGRE and VXLAN are both supported in Windows Server 2016
- Receive-side Checksum offloads (IPv4, IPv6, TCP) – These are supported if any of the SET team members support them
- Remote Direct Memory Access (RDMA)
- SDN Quality of Service (QoS)
- Transmit-side Checksum offloads (IPv4, IPv6, TCP) – These are supported if all of the SET team members support them
- Virtual Machine Queues (VMQ)
- Virtual Receive Side Scaling (RSS)
SET is not compatible with the following networking technologies in Windows Server 2016:
- 802.1X authentication
- IPsec Task Offload (IPsecTO)
- QoS in host or native OSs
- Receive side coalescing (RSC)
- Receive side scaling (RSS)
- Single root I/O virtualization (SR-IOV)
- TCP Chimney Offload
- Virtual Machine QoS (VM-QoS)
RDMA has been mentioned quite often at this point, and for good reason: converged NIC technology requires RDMA.
What is RDMA?
RDMA allows data to be transferred directly to or from application memory, bypassing intermediate stops along the way such as data buffers, the CPU, and caches. This reduces latency and allows data to be transferred quickly with minimal CPU overhead.
The Windows Server Catalog can be used to find network adapters that are certified for use with Windows Server 2016 and support converged networking:
Converged Networking in Windows Server 2016 Hyper-V requires:
- Running Windows Server 2016 Standard or Datacenter
- RDMA enabled network cards
- Hyper-V role must be installed
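These prerequisites can be checked quickly from an elevated PowerShell session (a sketch; output will vary by host):

```powershell
# Confirm the OS edition (Standard or Datacenter)
Get-CimInstance Win32_OperatingSystem | Select-Object Caption

# Confirm the Hyper-V role is installed
Get-WindowsFeature -Name Hyper-V

# Confirm which physical NICs are RDMA-capable
Get-NetAdapterRdma
```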
Overall, converged networking has the benefit of logically combining physical network interface connections, which allows the administrator to simply use trunk ports on the physical switch to plumb traffic through to the rest of the network and the appropriate VLANs. Additionally, QoS policies can be applied as well.
Configuring Windows Server 2016 Hyper-V Converged Networking
We can utilize PowerShell to create a Hyper-V virtual switch that is configured for Switch Embedded Teaming (SET):
New-VMSwitch -Name ConvergedSwitch -AllowManagementOS $True -NetAdapterName NET01,NET02 -EnableEmbeddedTeaming $True
To verify RDMA capabilities on an adapter, we can use the command:
Get-NetAdapterRdma
To enable RDMA capabilities if not already enabled, we can use the command:
Enable-NetAdapterRdma -Name "NET01","NET02"
The following PowerShell commands add vNICs in the management OS bound to the vSwitch, then assign each one a minimum bandwidth weight (note that the bandwidth weight is set with Set-VMNetworkAdapter, not Add-VMNetworkAdapter):
Add-VMNetworkAdapter -ManagementOS -Name "Management-100" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Management-100" -MinimumBandwidthWeight 10
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration-101" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration-101" -MinimumBandwidthWeight 20
Add-VMNetworkAdapter -ManagementOS -Name "VMs-102" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "VMs-102" -MinimumBandwidthWeight 35
Add-VMNetworkAdapter -ManagementOS -Name "Cluster-103" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Cluster-103" -MinimumBandwidthWeight 15
We can add VLANs to the virtual NICs with the Set-VMNetworkAdapterVlan command:
$Nic = Get-VMNetworkAdapter -Name Management-100 -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapter $Nic -Access -VlanId 100
$Nic = Get-VMNetworkAdapter -Name LiveMigration-101 -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapter $Nic -Access -VlanId 101
$Nic = Get-VMNetworkAdapter -Name VMs-102 -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapter $Nic -Access -VlanId 102
$Nic = Get-VMNetworkAdapter -Name Cluster-103 -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapter $Nic -Access -VlanId 103
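The resulting vNIC-to-VLAN assignments can then be verified in a single command:

```powershell
# Show the VLAN mode and VLAN ID for each management OS vNIC
Get-VMNetworkAdapterVlan -ManagementOS
```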
Windows Server 2016 Hyper-V includes some powerful new networking technologies and features. Converged networking is one of those features, enabling much more efficient and logical management of network traffic for Hyper-V administrators. It facilitates abstracting network traffic flows on Hyper-V hosts away from the physical network infrastructure. Utilizing technologies such as Switch Embedded Teaming (SET) and RDMA, Windows Server 2016 can team at the virtual switch layer.
Additionally, management (host partition) services in Windows Server 2016 no longer have to use dedicated network interface adapters. This gives administrators much more flexibility in deployments, as well as control of traffic flows, QoS, VLAN tagging, and so on. Administrators can now maximize performance for various traffic types using fewer switch ports, less cabling, and better resource management, all from within the Hyper-V management plane itself.
Be sure to take advantage of the new network convergence features in Windows Server 2016, as they are certainly the way forward for building out Hyper-V networks in the enterprise datacenter.