Configuring iSCSI network for Windows Server 2012

When setting up a new network for iSCSI, here are the things we need to check and configure:

Overall Configuration:

Check the general notes, especially the “Networking best practices” section.

The most relevant parts are:

  • Use a separate LAN, or set up a VLAN, for iSCSI traffic
  • Use CHAP (at least)
  • Configure Jumbo Frame size
  • Use multiple NICs
  • Configure MPIO; use the vendor’s DSM if available.
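For the MPIO item, here is a minimal sketch of enabling the Multipath I/O feature and having Microsoft’s in-box DSM claim iSCSI disks, run from an elevated PowerShell prompt. (If your SAN vendor ships its own DSM, install that instead of the last step.)

```powershell
# Enable the Multipath I/O feature (takes effect after a reboot on Server 2012).
Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO

# Have the Microsoft in-box DSM claim all iSCSI-attached disks.
# (-r allows the automatic reboot, -i installs/claims, -d filters by device ID;
#  the string below is the standard bus-type filter for iSCSI devices.)
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"
```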


Quick Notes:

  • To make a port a plain (access) member of a VLAN, set it to be “untagged” for that VLAN.
  • The max frame size is dictated by two things: your NICs and your switches. Check every NIC and switch in every server that will use your iSCSI network, note the lowest number across all of them, and configure that number on all NICs.
  • Test everything again and again before you put it into production

Detailed Configuration:

There are many good guides on the internet, and I followed a couple of them while configuring my systems and writing this blog post.

So here is how I did it:

Configuring the Server:

First, I updated the drivers of the NICs on my servers.

Then I tested whether Jumbo frames were already set up by running this command:

ping -f -l 9216

Pinging with 9216 bytes of data:
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.

Ping statistics for
    Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

Notice my result? It means I do not have Jumbo frames set yet. Before we set them, we need to find out the maximum size my NIC will accept. The easiest way is to open the NIC’s advanced properties, where a drop-down menu lists the supported values: note the highest number, select it, and hit OK. You can also read the current setting with a command, but you need to know first whether your NIC will accept that number:

Get-NetAdapterAdvancedProperty -DisplayName "Max Ethernet Frame Size", "Jumbo Packet"

My NIC will take a max of 9014 (per the drop-down in its advanced properties; screenshot omitted). You cannot set it to an arbitrary value in between: you can choose only one of the three options the driver lists.
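If you prefer to stay in PowerShell rather than the GUI, the same cmdlet family can list the values the driver will accept and then set one. This is a sketch: the adapter name “Ethernet 2” and the display value string “9014 Bytes” are examples; both vary by vendor and driver, so list the valid values first.

```powershell
# List the values this NIC's driver will accept for its Jumbo Packet property.
Get-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Jumbo Packet" |
    Select-Object -ExpandProperty ValidDisplayValues

# Set it to the largest value the NIC offers (string is driver-specific).
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
```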

After setting the Jumbo packet size on the NICs servicing the iSCSI network, I tried the ping command again. This time I got this:

ping -f -l 8972

Pinging with 8972 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.

Ping statistics for
    Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

This means the OS is now able to send large packets without fragmenting them, but the SAN is not answering them. More configuration is required on the switch side: we need to enable Jumbo frames there.

Configuring the Network Switch:

I looked into our Switch’s documentation and changed these settings:

  1. Enabled Jumbo frames for the iSCSI VLAN
  2. Left the max frame size to the default – 9198
  3. Set the ports’ speed to 1 Gbps or 10 Gbps, instead of Auto
  4. Set flow control to “on”
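What these settings look like obviously depends on your switch model, so treat the following as purely illustrative. On many Cisco Catalyst models, for example, jumbo support is a single global setting (consult your own switch’s documentation for the equivalent):

```
system mtu jumbo 9198
```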

About why I left it at the default of 9198 instead of setting it to 9000: my research on the ‘net pointed me to different schools of thought. Some say it should be set to 9000 everywhere (NIC, switch, SAN); others say the NIC and SAN should match but the switch should be set higher, so that protocols with extra overhead can still pass. I even tested it myself: if I set the switch to 9000, it won’t let a ping of 8972 through!

Let’s ping test again. ICMP traffic carries 28 bytes of overhead (a 20-byte IP header plus an 8-byte ICMP header), so at first glance we should be able to ping with 9014 - 28 = 8986-byte payloads. However, we cannot: the max I can ping with is 8972, meaning the packet size is capped at 8972 + 28 = 9000 somewhere. Part of the answer is that the NIC’s 9014 figure includes the 14-byte Ethernet header, so its IP MTU is really 9000. The switch is set to a max of 9198, so I bet the SAN is set to 9000 as well; upon checking the SAN’s documentation, 9000 is indeed the highest it supports.
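If you would rather measure that ceiling than calculate it, you can sweep payload sizes with ping and the don’t-fragment flag. A small PowerShell sketch (the SAN address 192.168.100.10 is a placeholder; the 1-byte step is slow but thorough):

```powershell
# Find the largest ICMP payload that gets a reply with DF set (hypothetical SAN IP).
$target = "192.168.100.10"
$max = 0
foreach ($size in 8900..9000) {
    $out = ping.exe -f -n 1 -l $size $target
    # A successful reply line echoes the payload size, e.g. "bytes=8972".
    if ($out -match "bytes=$size") { $max = $size }
}
"Largest unfragmented payload: $max bytes"
```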

Configuring the SAN:

On the SAN I changed:

  1. Frame size to 9000
  2. Flow control to on/on instead of Auto
  3. Speed to 10 Gbps instead of Auto


I am not too sure all of this was strictly necessary: after setting Jumbos everywhere, I got a meager 4% increase in throughput.
