VMware, Synology, iSCSI, and Multipath I/O (MPIO)

Prerequisites

Here is a good article from the VMware documentation that explains this setup: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-DD2FFAA7-796E-414C-84CE-1FCC14474D5B.html

The Gist

There is more than one physical adapter, each connected on a separate subnet, and each subnet provides an independent path to the iSCSI target.

How I configured the ESXi host with the vSphere Client

I created two VMkernel adapters, one for iSCSI-A and one for iSCSI-B. You can see from the screenshot below that I have one VMkernel adapter configured on the 172.16.11.0/24 subnet and the other adapter on the 172.16.12.0/24 subnet. These are layer 2 networks as I only need to connect to the Synology NAS through a switch. I do not want the traffic passing through a router; it would add unnecessary latency.
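If you want to double-check the VMkernel configuration from the ESXi shell rather than the vSphere Client, the following sketch lists the adapters and their IPv4 addresses (the vmk numbers and subnets here are from my setup and will differ in yours):

    # List all VMkernel adapters with their port groups and MTU
    esxcli network ip interface list
    # Show the IPv4 address and netmask assigned to each VMkernel adapter
    esxcli network ip interface ipv4 get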

Once the VMkernel adapters are configured, I configure the Teaming and failover policy for each one. Here is where I “pin” each iSCSI VMkernel adapter to a physical adapter (a vmnic). My particular setup uses vmnic2 and vmnic3, which are ports three and four on the back of the Dell PowerEdge. VMware starts numbering at vmnic0, which is why the port numbers and vmnic numbers do not align. For a sanity check, verify the MAC addresses if you are unsure (see the CLI sketch after the table below).

Dell PowerEdge Server    VMware Physical Adapter
Port 1                   vmnic0
Port 2                   vmnic1
Port 3                   vmnic2
Port 4                   vmnic3
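If you are unsure which vmnic maps to which physical port, the NIC inventory from the ESXi shell shows the MAC address, driver, and description for each vmnic, which you can compare against the port labels or iDRAC inventory on the server. A quick sketch:

    # Show every physical NIC with its MAC address, link state, speed, and driver
    esxcli network nic list
    # Show full details for a single NIC, vmnic2 in this example
    esxcli network nic get -n vmnic2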

Regardless, I configure (think “pin”, here) the vmk1 VMkernel adapter to vmnic2: vmnic2 becomes the Active adapter and vmnic3 is set to Unused.

I configure the vmk2 VMkernel adapter to vmnic3: vmnic3 is the Active adapter and vmnic2 is the Unused adapter.

This configuration is what allows the VMkernel network ports to be bound to the software iSCSI adapter later on; compliant port binding requires each VMkernel adapter to have exactly one active uplink, with the other uplink set to Unused rather than Standby.
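For reference, the same pinning can be done from the ESXi shell when the iSCSI port groups live on a standard vSwitch. This is only a sketch: the port group names iSCSI-A and iSCSI-B are assumptions from my setup, and you should confirm in the client afterward that the unlisted vmnic really lands in Unused rather than Standby.

    # Pin the iSCSI-A port group (vmk1) to vmnic2 only
    esxcli network vswitch standard portgroup policy failover set --portgroup-name "iSCSI-A" --active-uplinks vmnic2
    # Pin the iSCSI-B port group (vmk2) to vmnic3 only
    esxcli network vswitch standard portgroup policy failover set --portgroup-name "iSCSI-B" --active-uplinks vmnic3
    # Verify the override
    esxcli network vswitch standard portgroup policy failover get --portgroup-name "iSCSI-A"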

How I Configured the Synology NAS

Mind you, with both LAN ports dedicated to iSCSI, this configuration does not allow management of the device without adding a virtual network adapter attached to the iSCSI-A subnet to my management Windows 10 virtual machine. This is where the Synology falls short, in my opinion. I love the device, but there should be at least three adapters: one dedicated to management and two more to carry the actual storage traffic. In the future, I will likely just go back to running a TrueNAS Core box. (https://www.truenas.com/download-truenas-core/)

I configure the LAN 1 adapter on the Synology with an IP in the subnet I specified for the iSCSI-A VMkernel adapter on the ESXi host, in this case 172.16.11.0/24. I also configure the LAN 2 adapter with an IP in the subnet for the iSCSI-B VMkernel adapter, 172.16.12.0/24. Be sure to set the VLAN ID tags and MTU correctly as well!
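Before moving on, it is worth confirming that each VMkernel adapter can actually reach its matching Synology interface. vmkping lets you source the ping from a specific vmk; the Synology addresses below are placeholders from my addressing scheme and are not the real ones.

    # Ping the Synology's iSCSI-A address out of vmk1
    vmkping -I vmk1 172.16.11.10
    # Ping the Synology's iSCSI-B address out of vmk2
    vmkping -I vmk2 172.16.12.10
    # Add -d -s 8972 to verify a 9000-byte MTU end to end without fragmentation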

Open iSCSI Manager and edit the target you are going to use for your ESXi hosts. Be sure to select the All network interfaces radio button on the Network Binding tab. You will also see a list of the network interfaces you configured on the Synology.

If you have more than one ESXi host you are connecting, you will also want to check the box to Allow multiple sessions on the Advanced tab.

Configure the Storage Adapter on the ESXi Host

Go back to your ESXi host and configure the storage adapter. I only use the software iSCSI adapter, as it is more than sufficient for my needs.

Verify the Network Port Binding that was configured earlier.
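The binding can also be checked, and added if it is missing, from the ESXi shell. The vmhba64 name below is an assumption; use whatever name the adapter list reports for your software iSCSI adapter.

    # Find the software iSCSI adapter name (driver iscsi_vmk)
    esxcli iscsi adapter list
    # Show the VMkernel ports currently bound to it
    esxcli iscsi networkportal list --adapter vmhba64
    # Bind vmk1 and vmk2 if they are not already listed
    esxcli iscsi networkportal add --adapter vmhba64 --nic vmk1
    esxcli iscsi networkportal add --adapter vmhba64 --nic vmk2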

On the Dynamic Discovery tab, add the iSCSI target you configured on the Synology. You only need to add one of its configured addresses; discovery returns the target's other portals automatically.

Once the address is in the table, click Rescan Adapter. This will reach out and connect to the Synology.
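If you build hosts often, the same steps can be scripted from the shell. Again, a sketch: the vmhba64 adapter name and the 172.16.11.10 target address are placeholders from my lab.

    # Enable the software iSCSI adapter if it is not already enabled
    esxcli iscsi software set --enabled true
    # Add the Synology as a Dynamic Discovery (Send Targets) address
    esxcli iscsi adapter discovery sendtarget add --adapter vmhba64 --address 172.16.11.10:3260
    # Rescan the adapter to log in to the discovered targets
    esxcli storage core adapter rescan --adapter vmhba64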

Back on the Synology, you can verify that your ESXi hosts are connected by checking the Service Status.

Back in the vSphere Client, there should now be a Storage Device under the Devices tab.

If the target was already formatted with the VMFS file system, you should be ready to consume it now. If this is a new target, it only needs to be formatted once, from any one of the hosts that share it. Once it is formatted, it should appear on all the other hosts. Sometimes a Rescan Storage is necessary to pick up the new device, or to refresh its name if it does not show up right away.
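To confirm what the host sees from the shell, the storage namespaces list both the raw device and the VMFS filesystem once it exists (the naa identifier of your LUN will differ):

    # List the block devices the host can see; the Synology LUN appears as a naa.* device
    esxcli storage core device list
    # List mounted filesystems, including VMFS datastores
    esxcli storage filesystem list
    # Rescan all adapters if the new datastore has not shown up yet
    esxcli storage core adapter rescan --all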

Edit the Multipathing Policy

The very last thing that should be done is to adjust the multipathing policy. There is no point in going through all this trouble just to send and receive storage data over one path! This will have to be done on each host, for each iSCSI device.

Go back to the host and click Storage Devices. Select the device backing the datastore you want to configure, and then choose Actions beside the Multipathing Policies section.

Change the Path Selection Policy to Round Robin (VMware) to take advantage of the multiple paths you have configured! You will see that the configured paths now show a status of Active (I/O). This indicates that the paths are participating in the storage traffic.
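Doing this by hand on every host for every device gets tedious, so here is a hedged CLI sketch as well. The naa identifier below is a placeholder; pull the real one from the device list first.

    # Show each device with its current Path Selection Policy (PSP)
    esxcli storage nmp device list
    # Switch a specific device to Round Robin
    esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
    # Optionally switch paths after every I/O instead of the default 1000 IOPS
    esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxxxxxxxxxxxxxx --type iops --iops 1

The IOPS=1 tweak is a common tuning suggestion for iSCSI arrays, but check your array vendor's guidance before changing it.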

Conclusion

I highly recommend setting up iSCSI with a Round Robin multipathing policy, especially for home labs where there are likely only 1 Gbps links. With two paths, this provides roughly 2 Gbps of aggregate throughput. It is a pain to set up the first time, but once you have it, you will likely see the benefits. This is also good practice in production environments, especially with 10 Gbps, 25 Gbps, and 100 Gbps iSCSI adapters on the market.
