Traditional storage for virtualization environments has been the classic SAN or NAS device connected to the hypervisor hosts. Many organizations today still run their production workloads on SAN- or NAS-backed storage, and these traditional designs remain very viable solutions for virtual environments. They offer strong features and functionality, often including vendor-specific capabilities that are especially powerful when it comes to virtualization.

Table of Contents

  1. Overview of iSCSI Architecture
  2. What is iSCSI?
  3. Hyper-V Design Considerations with iSCSI Storage
  4. Hyper-V Windows Configuration for iSCSI
  5. Verifying Multipathing
  6. Concluding Thoughts

Hyper-V is a capable, enterprise-ready hypervisor that works very well with these traditional storage designs, including SAN and NAS devices. That raises a few questions:

  • What are some architecture considerations when using an iSCSI SAN with Hyper-V environments?
  • What about network considerations and storage considerations specific to using iSCSI with Hyper-V?
  • What about the Windows configuration?

Let’s take a look at Hyper-V Cluster iSCSI SAN Design considerations and examine the architectural configuration of iSCSI used with Hyper-V storage.

Overview of iSCSI Architecture

Most administrators who have worked with SAN storage over the past decade are well accustomed to iSCSI-enabled SANs.

What is iSCSI?

The term iSCSI stands for Internet Small Computer Systems Interface, an IP-based storage protocol that enables block-level access to storage devices. The iSCSI commands are encapsulated in TCP/IP packets. One of the huge advantages of using iSCSI for storage is that it leverages the traditional Ethernet constructs that already exist in most enterprise data centers. This means that iSCSI can be transmitted over existing switches and cabling, even alongside other types of network traffic. Aside from LAN connectivity, iSCSI commands can even be transmitted over WANs and the Internet.
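Because iSCSI is simply carried inside TCP/IP (the default target port is 3260), basic reachability to a target portal can be checked with ordinary networking tools. A minimal sketch, assuming a hypothetical SAN controller address of 10.10.50.10:

  # Verify the iSCSI target portal is reachable over standard TCP/IP
  # 10.10.50.10 is a placeholder SAN controller IP; 3260 is the default iSCSI port
  Test-NetConnection -ComputerName 10.10.50.10 -Port 3260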


iSCSI-enabled SANs present storage targets to clients, known as initiators. In a virtualization environment, the initiators are the hypervisor hosts, and the targets are the LUNs presented to those hosts for storage. The iSCSI LUNs act as if they were local storage to the hypervisor host.
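As a rough illustration of that relationship, the PowerShell below lists the targets visible to a host's initiator and shows how connected LUNs surface as ordinary disks (a sketch only; it assumes the iSCSI initiator service is running and targets have already been discovered):

  # Targets visible to this host's iSCSI initiator
  Get-IscsiTarget

  # Connected iSCSI LUNs appear to Windows as local disks
  Get-Disk | Where-Object BusType -eq 'iSCSI' |
      Select-Object Number, FriendlyName, Size, OperationalStatus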

Hyper-V Design Considerations with iSCSI Storage

When thinking about properly designing any Hyper-V cluster, high availability and redundancy of resources should always be part of the design. This means having multiples of everything: Hyper-V hosts, SAN switches, cabling paths, and SAN devices with multiple controllers. For the iSCSI physical hardware, that translates to:

  • Multiple Hyper-V Hosts – Configure at least two Hyper-V hosts in a cluster with three or more being recommended for increased HA
  • Redundant network cards – Have two network cards dedicated to iSCSI traffic
  • Redundant Ethernet switches – Two Ethernet switches dedicated to iSCSI traffic. Cabling from the redundant network cards should be “X-ed” out so there is no single point of failure in the path from each Hyper-V host to storage.
  • SAN with redundant controllers – A SAN with multiple controllers (most enterprise-ready SANs today ship with at least two controllers). This protects against a failed storage controller; when one controller fails, the SAN fails over to the secondary controller.

With Windows Server 2016, the network convergence model allows aggregating various types of network traffic across the same physical hardware. The Hyper-V networks suited for the network convergence model include the management, cluster, Live Migration, and VM networks. However, per Microsoft best practice for storage traffic, iSCSI traffic should be kept separate from the other traffic types in the converged model.
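As an illustrative sketch (the adapter and switch names are hypothetical), a Switch Embedded Teaming switch might carry the converged traffic while the iSCSI adapters are deliberately left out of the team:

  # Converged SET switch for management, cluster, Live Migration, and VM traffic
  New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "LAN1","LAN2" `
      -EnableEmbeddedTeaming $true -AllowManagementOS $true

  # Host vNICs for the other converged traffic types
  Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
  Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"

  # The iSCSI adapters (e.g. "iSCSI1" and "iSCSI2") are NOT added to the team;
  # they remain dedicated, separately addressed paths to storage.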

For the network configuration, you will want to configure your two iSCSI network adapters with unique IPs that communicate with the storage controller(s) on the SAN. There are a few other considerations when configuring the iSCSI network cards on the Hyper-V host, including the following (a configuration sketch follows the list):

  • Use Jumbo frames where possible – Jumbo frames allow a larger data size to be transmitted before the packet is fragmented. This generally increases performance for iSCSI traffic and lowers the CPU overhead for the Hyper-V hosts. However, it does require that all network hardware used in the storage area network is capable of utilizing jumbo frames.
  • Use MPIO – Multipath I/O (MPIO) is used for multipathed storage access rather than port aggregation technology such as LACP on the switch or Switch Embedded Teaming for network convergence. Link aggregation technologies such as LACP only improve throughput for multiple I/O flows coming from different sources. Since the iSCSI flows will not appear to be unique, LACP will not improve the performance of iSCSI traffic. MPIO, on the other hand, works from the perspective of the initiator and target, so it can improve the performance of iSCSI.
  • Use dedicated iSCSI networks – While part of the appeal of iSCSI is the fact that it can run alongside other types of network traffic, running iSCSI storage traffic on dedicated switch fabric is certainly a best practice for maximum performance.
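The sketch below shows the general idea for the two dedicated iSCSI adapters; the adapter names, IP addresses, and the jumbo frame registry keyword and value are assumptions that vary by NIC driver and SAN:

  # Assign unique IPs to the two dedicated iSCSI adapters (hypothetical addressing)
  New-NetIPAddress -InterfaceAlias "iSCSI1" -IPAddress 10.10.50.21 -PrefixLength 24
  New-NetIPAddress -InterfaceAlias "iSCSI2" -IPAddress 10.10.51.21 -PrefixLength 24

  # Enable jumbo frames on both adapters (keyword and value depend on the driver)
  Set-NetAdapterAdvancedProperty -Name "iSCSI1","iSCSI2" `
      -RegistryKeyword "*JumboPacket" -RegistryValue 9014

  # Verify end-to-end jumbo frame support with a non-fragmenting ping to the SAN
  ping -f -l 8972 10.10.50.10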

On the storage side of things, if your SAN supports Offloaded Data Transfer, or ODX, this can greatly increase storage performance as well. Microsoft’s Offloaded Data Transfer, also called copy offload, enables direct data transfers within or between compatible storage devices without routing the data through the host. Without ODX, by comparison, the data is read from the source and transferred across the network to the host, and the host then transfers the data back over the network to the destination. An ODX transfer eliminates the host as the middle party and significantly improves the performance of copying data. It also lowers host CPU utilization, since the host no longer has to process this traffic, and saves network bandwidth, since the network is no longer needed to copy the data back and forth.
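ODX support depends on both the array and the Windows host. As a small sketch, the host-side setting can be checked (and re-enabled if it has been turned off) through the documented FilterSupportedFeaturesMode registry value, where 0 means offloaded transfers are enabled:

  # Check whether ODX (copy offload) is enabled on the host (0 = enabled, 1 = disabled)
  Get-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
      -Name "FilterSupportedFeaturesMode"

  # Re-enable ODX on the host if required
  Set-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
      -Name "FilterSupportedFeaturesMode" -Value 0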

Hyper-V Windows Configuration for iSCSI

There are a few items to note when configuring iSCSI connections in Windows Server. To properly handle multipathed connections, you first add the MPIO feature to the Windows Server installation.


Adding Multipath I/O to Windows Server 2016

MPIO installs quickly in Windows Server 2016 but will require a reboot. A PowerShell alternative to the Server Manager wizard is sketched below the screenshot.


About to begin the installation of MPIO in Windows Server 2016
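As an alternative to the Server Manager wizard, the same feature can be added from PowerShell; a minimal sketch:

  # Install the Multipath I/O feature and reboot to complete the installation
  Install-WindowsFeature -Name Multipath-IO -Restart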

Once MPIO is installed and the server has been rebooted, you can configure MPIO for iSCSI connections (a PowerShell equivalent is sketched after the screenshot below):

  • Launch the MPIO utility by typing mpiocpl in the Run dialog. This will launch the MPIO Properties configuration dialog
  • Under the Discover Multi-Paths tab, check the Add support for iSCSI devices check box
  • Click OK


Configuring MPIO for iSCSI connections
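The equivalent of checking Add support for iSCSI devices can also be done from PowerShell with the MPIO module (as with the GUI, a reboot may be required for the claim to take effect); the load-balancing policy line is optional:

  # Claim iSCSI-attached devices for MPIO (equivalent to the Discover Multi-Paths checkbox)
  Enable-MSDSMAutomaticClaim -BusType iSCSI

  # Optionally set the default MPIO load-balancing policy to Round Robin
  Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR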

Under the iSCSI Initiator configuration, launched by typing iscsicpl, you can connect to an iSCSI target.


Click the Connect button to set up multipathing for an iSCSI target

The Connect to Target dialog box allows selecting the Enable multi-path checkbox to enable multipathing for iSCSI targets.


Click the Enable multi-path check box

You will need to do this for every volume the Hyper-V host is connected to. Additionally, in a configuration where you have two IP addresses bound to two different network cards in your Hyper-V server and two available IPs for the iSCSI targets on your SAN, you would create a path from each initiator IP address to the respective IP address of the iSCSI targets. This creates an iSCSI network configuration that is not only fault tolerant but also able to use all available connections for maximum performance.
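A rough PowerShell equivalent of creating those per-IP paths is sketched below; the portal addresses, initiator addresses, and target IQN are hypothetical placeholders for your environment:

  # Register the SAN's iSCSI target portals (hypothetical controller addresses)
  New-IscsiTargetPortal -TargetPortalAddress 10.10.50.10
  New-IscsiTargetPortal -TargetPortalAddress 10.10.51.10

  # Connect one session per initiator/target IP pair with multipathing enabled
  $iqn = "iqn.2005-10.org.example:target1"   # placeholder target IQN
  Connect-IscsiTarget -NodeAddress $iqn -IsMultipathEnabled $true -IsPersistent $true `
      -InitiatorPortalAddress 10.10.50.21 -TargetPortalAddress 10.10.50.10
  Connect-IscsiTarget -NodeAddress $iqn -IsMultipathEnabled $true -IsPersistent $true `
      -InitiatorPortalAddress 10.10.51.21 -TargetPortalAddress 10.10.51.10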

Verifying Multipathing

There is a powerful little command-line utility that allows pulling a tremendous amount of information regarding multipath disk connections: mpclaim. A short verification sketch follows the basic commands below.

Launch mpclaim from the command line to see the various options available.


The mpclaim utility displays information regarding multipathing

To check your current policy for your iSCSI volumes:

  • mpclaim -s -d

To verify paths to a specific device number:

  • mpclaim -s -d <device number>
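For example, to confirm that each iSCSI disk shows the expected number of active paths and load-balancing policy, the mpclaim output can be cross-checked with the iSCSI PowerShell cmdlets; a small verification sketch:

  # Summary of MPIO-claimed disks and their load-balancing policies
  mpclaim -s -d

  # Paths for a specific MPIO disk (replace 0 with the device number from the summary)
  mpclaim -s -d 0

  # Cross-check the iSCSI sessions and connections behind those paths
  Get-IscsiSession | Select-Object TargetNodeAddress, IsConnected, NumberOfConnections
  Get-IscsiConnection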


Concluding Thoughts

Traditional SAN storage arrays are still a powerful and viable configuration for today’s hypervisors. They are highly performant, well-proven technology, and the iSCSI protocol provides a powerful way to run block storage on top of existing Ethernet infrastructure. There are considerations to be made when designing and architecting an iSCSI storage solution for Hyper-V, and redundancy is of utmost priority when designing a storage network. You always want to have multiple paths to storage and multiple hardware devices providing those paths. Utilizing jumbo frames and MPIO provides definite performance gains as well. Configuring MPIO in Windows Server is a necessary step that allows Windows to take advantage of multiple paths to iSCSI storage. By designing an iSCSI storage solution correctly and keeping in mind the technologies that make iSCSI both performant and resilient, Hyper-V administrators can achieve high availability without sacrificing performance.

