In a series of blog posts, I'm going to be covering some of the basics that people tend to overlook. Let's forget about cloud for a moment and look back at the real reason we started virtualizing in the first place: the virtual machine. The virtual machine is a key component of the cloud, but having machines that are lean and clean allows greater density and better performance.
Today I'm going to examine the placement of virtual machines that need to talk to each other and how to configure their DRS (Distributed Resource Scheduler) settings. The great thing about virtualization and DRS is that virtual machines can be on any number of ESX(i) hosts and we never have to think about it. But have you ever thought about virtual machines that are tied to a specific application and constantly need to talk to each other? You can greatly improve the performance of your application by placing the VMs that make up an application on the same host. Let's examine Figure 1.
VM-a and VM-b are both located on the same subnet. VM-a is located on ESX1 and VM-b is located on ESX2. Whenever VM-a needs to talk to VM-b, the traffic must exit ESX1, travel to the physical switch, and head back downstream to ESX2. Likewise, the reply has to take the same path in reverse.
I used iperf to do some basic network performance testing in my home lab. iperf is also bundled in my VM Advanced ISO with detailed instructions on performing your own tests. The test sent and received traffic over a 3 minute span and averaged around 330 MB/sec. Not bad.
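For reference, here's a rough sketch of how that kind of test could be scripted. It assumes the classic iperf2 command-line flags (-t for duration, -r for a send-then-receive run, -f M to report in MBytes/sec) and that an iperf server is already listening on VM-b (iperf -s); the IP address is just a placeholder for my lab, so adjust to taste.

```python
# Minimal sketch: run a 3-minute bidirectional iperf test from VM-a
# against an iperf server already running on VM-b ("iperf -s").
import subprocess

VM_B_IP = "192.168.1.20"  # placeholder address of VM-b

# -c: client mode, -t 180: run for 3 minutes, -r: run the reverse
# (receive) test after the send test, -f M: report in MBytes/sec
result = subprocess.run(
    ["iperf", "-c", VM_B_IP, "-t", "180", "-r", "-f", "M"],
    capture_output=True,
    text=True,
    check=True,
)

# Print the raw report; the lines ending in "MBytes/sec" carry the
# average bandwidth for each direction.
print(result.stdout)
```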
Next, I vMotioned VM-b over to ESX1 and performed the same test. When VM-a and VM-b are on the same ESX host and subnet, their communication never has to traverse the physical network. All Layer 2 traffic between the VMs is handled locally on the ESX host, as seen in Figure 2.
Again using iperf, I saw some amazing results. Running the same test, the throughput more than quadrupled: average bandwidth was 1.53 GB/sec!
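If you'd rather script the move than drag and drop it in the vSphere Client, a minimal pyVmomi sketch along these lines should do it; the vCenter address, credentials, and object names are all placeholders for my lab, so swap in your own.

```python
# Minimal sketch: vMotion VM-b onto ESX1 using pyVmomi.
# vCenter address, credentials, and object names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certs in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with this name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vm_b = find_by_name(vim.VirtualMachine, "VM-b")
esx1 = find_by_name(vim.HostSystem, "esx1.lab.local")

# Kick off the vMotion of VM-b over to ESX1.
task = vm_b.Migrate(host=esx1,
                    priority=vim.VirtualMachine.MovePriority.defaultPriority)
print("vMotion task started:", task.info.key)

Disconnect(si)
```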
Keep this in mind for applications that span multiple VMs: this huge increase in network throughput can dramatically impact application performance when there is constant cross communication (think anything with a SQL backend). Remember, to obtain this kind of performance the VMs must be on the SAME subnet. If the VMs are on the same ESX host but on different subnets, the ESX host won't do Layer 3 routing. The packet must leave the ESX host, be routed, and traverse back into the ESX host.
The one thing to worry about now is DRS. Without specific VM settings, DRS could break up the grouping. To make sure the VMs stay together, even through vMotions, a DRS affinity rule needs to be created:
Edit the Settings of the Cluster
Go to DRS -> Rules -> Click Add
Give the rule a name and verify the type is set to "Keep Virtual Machines Together". Click Add and select the VMs to keep on the same ESX host.
Verify that the rule's check box is ticked (the rule is enabled) and click OK.
If the VMs were not already sitting on the same ESX host, the DRS rule will now kick in and begin vMotions to bring them all together.
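For anyone who would rather script the rule than click through the vSphere Client, here's a rough pyVmomi sketch of creating a "Keep Virtual Machines Together" affinity rule on a cluster; again, the cluster, VM, and vCenter names are placeholders for my lab.

```python
# Minimal sketch: create a "Keep Virtual Machines Together" DRS affinity
# rule with pyVmomi. All names and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certs in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with this name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

cluster = find_by_name(vim.ClusterComputeResource, "Lab-Cluster")
vms = [find_by_name(vim.VirtualMachine, name) for name in ("VM-a", "VM-b")]

# Build the affinity rule and wrap it in a cluster reconfigure spec that
# adds the rule to the existing DRS configuration.
rule = vim.cluster.AffinityRuleSpec(name="Keep-App-VMs-Together",
                                    enabled=True, vm=vms)
rule_spec = vim.cluster.RuleSpec(operation="add", info=rule)
spec = vim.cluster.ConfigSpecEx(rulesSpec=[rule_spec])

task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
print("DRS rule task started:", task.info.key)

Disconnect(si)
```

As with the manual steps above, once the rule is in place DRS will enforce it on its own.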