I've had my fair share of problems with getting this technology working correctly. So let's get going. I have to say thanks to Sachin Thakkar over at VMware for helping me get this puppy rolling.
First things first: multicast. I'm not a CCIE, heck I'm not even a very good CCNA anymore, but you need to understand that multicast plays a major role in VXLAN. To simplify things, multicast is like a TV broadcast: a provider airs a station, and viewers tune in and out at various times to get the content they want. Start reading about IP multicast. It will probably take you a week to fully understand it, but enough jibber jabber, let's configure.
For my purposes, I'm lucky enough to be able to test this functionality with a Vblock 700MX connected to a pair of Nexus 7010 switches (hell of a lab).
One HUGE gotcha here is that it is best to configure most of these parts before you add vShield Manager to vCloud Director during the setup.
There are some prerequisites you need to get from your network team for VXLAN functionality:

One Layer 3 VLAN configured with the following parameters:
- IGMP Snooping enabled
- IGMP Querier address assigned
- Default gateway for this VLAN
- DHCP Helper address pointing to your DHCP server
- DHCP scope for this VLAN
Here is a sample of the configuration for your router (in my case, 7010 switches).
On the Nexus 7010:
ip pim ssm range 232.0.0.0/8
vlan configuration 2101
ip igmp snooping querier 10.2.101.3
ip dhcp snooping
! This interface is for future use, in case we need to stretch access to this VLAN from elsewhere
interface vlan 2101
ip address 10.2.101.1 255.255.255.0
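Once that is in place, a few show commands on the 7010s can confirm the multicast and DHCP relay plumbing before you ever touch vShield Manager. The output will obviously vary by environment, but these are the checks I'd run:

```
show ip igmp snooping vlan 2101
show ip igmp snooping querier vlan 2101
show ip pim interface brief
show ip dhcp relay
```

You want to see the querier address you configured answering for VLAN 2101; if the querier never shows up, the VTEPs won't get their multicast memberships tracked and things get weird fast.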
Before I get into the next sections: I ended up creating a dvSwitch portgroup on VLAN 2101 (my VXLAN VLAN) and placing a VM on it to make sure it was correctly pulling a DHCP address. If the VXLAN VTEPs can't pull a DHCP address, things just start breaking.
You need to know your NIC teaming policy ahead of time, and this can be a bit tricky. If you want to use LACP or EtherChannel, it must be supported and configured by your switch vendor. In my case, I'm using a Vblock: the UCS technology does support it, but using it would change the VCE Logical Configuration process. I had a conversation with Justin Guidroz (@juicelsu009) about this, and here is what he had to say: in theory, you could create one uplink port channel out of the UCS FIs (instead of two, as VCE has it today), and then we could possibly use LACP on the DVS, because all the traffic leaving the FIs would be in the same port channel. Whereas today, traffic leaving FI A is in a different port channel from FI B. What about the Nexus 1000v, since it supports LACP? Its uplinks are in a port channel, but we used MAC-based pinning, and changing that would also change the Logical Configuration process.
To choose the path of least resistance, I went with the Failover option. This option still works very well: since I'm using NIOC and all my other portgroups are set to Route Based on Physical NIC Load, vSphere is smart enough to move traffic to any uplink where it sees free resources. Win-win: no change to any logical or physical implementation, and vSphere knows how to handle all the traffic.
The next thing we want to do is set up our vShield Manager (or vCloud Networking and Security Manager). Before attempting to do anything with vCloud Director, we need to register the vShield Manager with vCenter and SSO. Pretty straightforward.
After vShield Manager is registered, you will see datacenters and hosts in the main view. Click on the datacenter where VXLAN will be going, click the Network Virtualization tab, then click the Preparation link, then click the Segment ID button. On the far right-hand side there is another button labeled Edit; click it and a pop-up box will open. Enter the Segment ID pool and multicast address range (usually specified by your network administrator). I stole these values from punchingclouds.com. I honestly am a complete n00b to this stuff, so I don't know what can or can't be used. In addition, if you are setting up VXLAN between multiple clouds, I don't even know if you can use the same overlapping values or if they must be unique.
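For reference, here is the sort of thing that goes in that pop-up. These are illustrative values, not the ones from my environment, so check with your network admin before using them:

```
Segment ID Pool:      5000-5500
Multicast addresses:  239.0.0.0-239.0.0.254
```

As I understand it, segment IDs below 5000 are reserved, so the pool has to start at 5000 or higher, and 239.0.0.0/8 is the administratively scoped (site-local) multicast block, which makes it a common choice for ranges like this.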
Now click on the Connectivity button. This is where we will see if our efforts succeed or fail. Click the Edit button on the far right side and a pop-up box will show. Click the "Use" checkbox and enter your VXLAN VLAN; in my case it's 2101. Click Next. WARNING: You cannot change your VLAN after this has been configured; it's a one-time thing. If you have to change it, you will have to unprepare the hosts and reboot them.
In the next section, you will select your NIC teaming policy. Remember that section above? I am choosing Failover. WARNING: You cannot change your teaming policy after this has been configured; it's a one-time thing. If you have to change it, you will have to unprepare the hosts and reboot them.
You will then start seeing VTEPs being created inside of vSphere.
And the vShield Manager appliance will be in the Ready state.
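If you want to double-check from the host side, the VTEPs show up as regular vmknics on the prepared hosts. From an ESXi shell, something like the following should let you eyeball them and confirm each one pulled a DHCP address (I believe the vxlan esxcli namespace only appears after the host has been prepared and the VXLAN VIB is loaded):

```
esxcli network ip interface list
esxcli network ip interface ipv4 get
esxcli network vswitch dvs vmware vxlan list
```

If a VTEP vmknic is sitting there with no IPv4 address, go back and check the DHCP helper and scope from the prerequisites section before blaming vShield.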
Now, after adding vShield Manager and vCenter to your vCD instance and configuring your first Provider vDC, you will see all greens below.