
VMware vSphere 5 Host NIC Network Design Layout and Configuration

THIS HAS BEEN UPDATED. PLEASE VISIT VMware vSphere 5 Host NIC Network Design Layout and vSwitch Configuration [Major Update]

 

As vSphere has progressed, my existing 6, 10, and 12 NIC designs have slowly become outdated. In an effort to update these for VMware vSphere 5, I took the two most popular configurations, 6 and 10 NICs, and updated the Visios to make them a bit prettier. I also don't know how much longer these will be necessary as the industry moves toward 10GbE as a standard.

 

The assumption behind these physical NIC designs is that the hosts are configured with Enterprise Plus licensing so all of the vSphere features can be used. I didn't create a bunch of different designs as before, because ideally you want to stick with a simple design that meets the criteria most people have. In addition, I have updated these configs to perform multi-pathing for iSCSI and removed the etherchannel configurations, since those were mostly needed on standard vSwitch configurations. I would also recommend moving everything over to a vNetwork Distributed Switch configuration because it is the easiest way to standardize across all of your hosts. vSphere 5 also implemented better HA and failed-host behavior, so the use of a hybrid solution is fading as well.

 

Another assumption, depicted in every diagram, is the physical switch configuration. These designs can be built with Cisco 3750Gs using cross-stack links so they are viewed as a single switch, with a single highly available enterprise switch such as a Cisco 4500, or with a vPC configuration of dual Cisco Nexus 5000s or 7000s. The most important thing to keep in mind is that the switches in these configurations are enterprise class and that you are not connecting two independent switches via LACP; if you are, the teaming/load balancing settings need to be re-configured.

 

 

In my opinion, 6 NICs is a good solution, but 10 NICs is a better solution. 10 NICs offer better separation, redundancy, and performance characteristics.

 

In all of these configurations, I have the Virtual Machine Network portgroups set to load balance as Route Based on Physical NIC Load. This is the recommended approach because it evenly distributes the load of all the virtual machines across the NICs, and the balancing is recalculated every 30 seconds. You can read more about Load Based Teaming at FrankDenneman.nl
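
If you'd rather script that portgroup setting than click through the vSphere Client, here is a minimal pyVmomi sketch of the idea (my own assumption; nothing in these designs requires scripting). The vCenter address, credentials, and the portgroup name "VM Network" are placeholders, and "loadbalance_loadbased" is the API string for Route Based on Physical NIC Load.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (placeholder host/credentials; cert checks skipped for a lab)
ctx = ssl._create_unverified_context()
si = SmartConnect(host="VC_HOST", user="administrator", pwd="PASSWORD", sslContext=ctx)
content = si.RetrieveContent()

# Find the DV portgroup by name ("VM Network" is a placeholder)
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
pg = next(p for p in view.view if p.name == "VM Network")
view.DestroyView()

# Build a spec that only changes the teaming policy to load-based teaming
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.configVersion = pg.config.configVersion
teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
teaming.inherited = False
teaming.policy = vim.StringPolicy(inherited=False, value="loadbalance_loadbased")
spec.defaultPortConfig = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
    uplinkTeamingPolicy=teaming)

pg.ReconfigureDVPortgroup_Task(spec)   # returns a Task; watch it in the client or poll it
Disconnect(si)
```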

 

Every configuration also takes advantage of the new vSphere 5 feature of multiple vMotion NIC support. If you want to learn how to configure it, please visit Eric Sloof's video - Running vMotion on multiple-network adaptors
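
On a distributed switch, multi-NIC vMotion boils down to two vMotion portgroups with mirrored active/standby uplinks. Here is a hedged pyVmomi sketch of that layout; the portgroup names "vMotion-1"/"vMotion-2" and the uplink names "dvUplink1"/"dvUplink2" are placeholders for whatever your DvSwitch actually uses.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="VC_HOST", user="administrator", pwd="PASSWORD", sslContext=ctx)
content = si.RetrieveContent()

def find_portgroup(name):
    """Look up a DV portgroup by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    pg = next(p for p in view.view if p.name == name)
    view.DestroyView()
    return pg

# Mirrored active/standby layout for the two vMotion portgroups (placeholder names)
layout = {
    "vMotion-1": (["dvUplink1"], ["dvUplink2"]),
    "vMotion-2": (["dvUplink2"], ["dvUplink1"]),
}

for pg_name, (active, standby) in layout.items():
    pg = find_portgroup(pg_name)
    order = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        inherited=False, activeUplinkPort=active, standbyUplinkPort=standby)
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        inherited=False, uplinkPortOrder=order)
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.configVersion = pg.config.configVersion
    spec.defaultPortConfig = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        uplinkTeamingPolicy=teaming)
    pg.ReconfigureDVPortgroup_Task(spec)

Disconnect(si)
```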

 

Also, don't forget to enable PortFast on all of the switch ports that connect to an ESXi host. Without it, spanning tree convergence can temporarily block those ports and cause connectivity problems. Read more at VMware Networking, Don't Forget STP

 

I also included Jumbo Frames, to be configured and enabled on the physical switches and DvSwitches. You don't have to actually use Jumbo Frames, but at least they're configured if you ever need them, since enabling them later can require a reboot of the physical switches.
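
On the vSphere side, the jumbo frame piece is just a maximum MTU of 9000 on the DvSwitch (the vmkernel ports and the physical switch ports still have to be set separately). A rough pyVmomi sketch, with the switch name "dvSwitch-IPStorage" as a placeholder:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="VC_HOST", user="administrator", pwd="PASSWORD", sslContext=ctx)
content = si.RetrieveContent()

# Find the distributed switch by name ("dvSwitch-IPStorage" is a placeholder)
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvSwitch-IPStorage")
view.DestroyView()

# Bump the switch-wide MTU to 9000 so jumbo frames are at least available
spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion
spec.maxMtu = 9000

dvs.ReconfigureDvs_Task(spec)
Disconnect(si)
```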

 

Since the 6 NIC configuration only has 2 I/O modules, the best way to protect against a failed expansion card or a complete failure of the on-board NICs is to give one port from each I/O module to the Virtual Machine Network and the Storage Network. Previously, I probably would have put one of each in the Management network so an HA event could be triggered, but now that vSphere 5 has introduced datastore heartbeats, I can lose my entire Management and vMotion network and the VMs will continue to function on the IPStorage and Virtual Machine Network portgroups.

 

 

vMotion in vSphere 5 will use whatever links it's given and load balance among them. Yet, if there is FT traffic flowing, I don't think vMotion is smart enough to see that a link is already 70% utilized by FT and that only 30% is available for vMotion. Therefore, FT traffic may end up fighting for bandwidth because vMotion is causing contention. On the other hand, this can be alleviated with NIOC by making sure vMotion and FT traffic are each given an appropriate number of shares to put QoS on the traffic. If you don't like this method, you can just get rid of the 2nd vMotion portgroup and do it old school to keep traffic separated, as shown below. I would still use the option above with 2 portgroups for vMotion, because you probably don't have a ton of VMs that can be used for FT. So if you have hosts that are not running any FT VMs, you are wasting bandwidth and losing a 2nd NIC that could be used for vMotion.

 

 

You will notice here that I chose to do an etherchannel/LACP for the FT configuration. After reading VMware vSphere™ 4 Fault Tolerance: Architecture and Performance, there was a network configuration note that stated "Adding multiple uplinks to the virtual switch does not automatically result in distribution of FT logging traffic. If there are multiple FT pairs, then traffic could be distributed with IP-hash based load balancing policy, and by spreading the secondary virtual machines to different hosts."

 

I went back and forth on whether or not to take an uplink away from the 4 Virtual Machine uplinks and add it to the IPStorage DvSwitch, but after reading more and more about iSCSI, it makes more sense to leave it at two. You can throw all the uplinks you want at iSCSI, but a given iSCSI session is only going to use 1 physical link. The only way you get true load balancing across the links is by going to each individual datastore and setting its path selection policy to Round Robin, as sketched below. In addition, multiple iSCSI uplinks really only come into play when you have multiple iSCSI targets.
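
Setting Round Robin per datastore gets tedious by hand, so here is a rough pyVmomi sketch that flips the NMP path selection policy to VMW_PSP_RR on a host's LUNs. The host name is a placeholder, and in practice you would filter the loop down to just your iSCSI LUNs.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="VC_HOST", user="administrator", pwd="PASSWORD", sslContext=ctx)
content = si.RetrieveContent()

# Find the host by name ("esx01.lab.local" is a placeholder)
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.lab.local")
view.DestroyView()

storage = host.configManager.storageSystem
for lun in storage.storageDeviceInfo.multipathInfo.lun:
    # In a real script, filter here so you only touch your iSCSI LUNs
    if lun.policy.policy != "VMW_PSP_RR":
        storage.SetMultipathLunPolicy(
            lunId=lun.id,
            policy=vim.host.MultipathInfo.LogicalUnitPolicy(policy="VMW_PSP_RR"))

Disconnect(si)
```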

 

 

This is, in my opinion, the best possible configuration you can get, since it separates the storage network physically as well.

 

Let me know if there are other configurations you would like to see done and I'll make them available.
