Many of you know I've spent the past few weeks putting together a VMware vSphere home lab. I'm proud to say that I'm finally getting all the pieces put together, and I'm going to give you a parts list so you can re-create it. Luckily, I got a bunch of components on Black Friday (for those outside the US, it's the day after Thanksgiving, when every single store is on sale, including Newegg.com).
One goal of this was to retire my old "gaming" rig. It's in quotes because this thing is old now. This new lab should replace my desktop and serve as a central repository for all my music and photos. I was going back and forth on what to do: should I build one massive box and run a nested vSphere environment like the vTARDIS, or should I build 2 machines and have a NAS serve up storage? The latter is a bit more expensive, and I'm doing this on a budget of $2000. After getting input from @vTexan, @j_nash, @Phaelon74, @ericsiebert, @ChrisDearden, @Virtual_Vic, @jasemccarty, and many more, I decided to go with building 2 machines and having a NAS serve up the storage. I have to say thanks to my sponsors Veeam, Train Signal, VKernel, and PHD Virtual for helping me with this opportunity.
Since I'm building 2 boxes, I need to go green on everything. The last thing I want is machines sitting around using 350-600 watts of power. My thought process was to build 2 machines that contain no drives, not even a CD drive, and boot from SD or USB. Also, many NAS devices from QNAP, Synology, and Thecus can serve up storage at very low wattage. I also needed a managed gigabit switch to handle all the traffic. I wanted something with 16-24 ports that was also fanless to help cut down on energy.
Didier Pironet's (@dpironet) article One Of The Most Powerful Shuttle Barebone For My VMware Home Lab gave me a great idea. Shuttle PCs are known for their small form factor and awesome performance. I didn't have the money to buy the same Shuttle PC as Didier, so I went with a lower model, the Shuttle SH55-J2-BK-V1, which can accommodate an i3, i5, or i7 processor. In addition, it comes with a 300 watt 80 PLUS Bronze power supply to help keep energy costs down. Perfect!
Time to start purchasing! Here is a breakdown on everything purchased.
Everything finally arrived and I started unboxing it all. What a beautiful sight.
First things first: I wanted to test the machine to make sure the Shuttle would recognize the CPU and RAM. I set the CPU into its socket, greased it up with some thermal compound, seated all 16GB of RAM, fired it up....^(%#!! Nothing on the screen, just complete blackness. Yet my USB keyboard lit up with the Num Lock key, so I figured there was an incompatibility somewhere. I tried every different RAM configuration and even reseated the CPU, and still nothing on the screen. I called Shuttle support and learned something new about CPUs: since this barebones model is usually geared toward HTPCs, it needs a processor with integrated graphics to use the onboard VGA. Come to find out, there are no LGA1156 quad-core processors with built-in graphics. The support rep did tell me I could continue to use my i5 760 if I put in a PCI-e graphics card. Unfortunately, there is only 1 PCI-e and 1 PCI slot in these Shuttle barebone systems, so where would I put my PCI-e NIC? I didn't have any spare PCI-e graphics cards laying around, but luckily I had 2 dual-monitor PCI graphics cards. I plugged in my 128MB GeForce FX 5500 PCI card and we now had a BIOS screen! I also tested my Diablotek ATI Radeon 9000 128MB DDR PCI card, and that worked as well. The bad thing about these two cards is that they end up being the loudest devices in the system. Once the hosts were completely configured, the video cards were removed to cut down on the noise and energy use.
Up next was testing the PCI-e Broadcom 5709 NIC. I wanted this NIC because it's on the vSphere HCL. I plugged it into the system along with the PCI graphics card and had no trouble at all; both 1Gb ports were recognized immediately. I grabbed both of these off eBay in 2 different auctions. I was lucky enough to get 1 for $35, while the other cost me $85. During vSphere configuration you will have to follow VMware KB 1025644 to get iSCSI working properly on these NICs.
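I won't restate the KB here, but on ESXi 4.x the work boils down to binding a VMkernel port to the iSCSI adapter from the CLI. A rough sketch of what that looks like; the vmk1 and vmhba33 names are assumptions, so substitute your own from the first command's output:

```shell
# List storage adapters to find the Broadcom iSCSI HBA name (e.g. vmhba33)
esxcfg-scsidevs -a

# Bind the VMkernel port carrying iSCSI traffic to that adapter
# (vmk1 and vmhba33 are placeholders -- use your actual names)
esxcli swiscsi nic add -n vmk1 -d vmhba33

# Confirm the binding, then rescan for LUNs
esxcli swiscsi nic list -d vmhba33
esxcfg-rescan vmhba33
```

Follow the KB itself for the authoritative steps; this is just the shape of the procedure.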
Now it's time for some ESXi goodness. Since these systems don't have a DVD/CD drive, I went the dd image and WinImage route to copy the installer to my SD card. The SD card read correctly on my MBP, showing 4 different partitions, but when I stuck it into my Shuttle, nothing happened. I went into the BIOS, and nothing exists to make the SD card bootable, even after a BIOS flash. I found a 1GB USB stick laying around and put the ESXi image on that instead. It booted to ESXi without any problems at all. I was happy at this point, but kind of ticked that I couldn't boot from an SD card. I cut my losses and bought 2 Kingston 4GB USB 2.0 Flash Drives to boot ESXi.
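If you're doing the same from a Mac, writing the dd image to the USB stick looks roughly like this. The disk number and image filename below are assumptions; always verify the device with diskutil first, since dd will happily overwrite the wrong disk:

```shell
# Identify the USB stick -- do NOT guess the disk number
diskutil list
diskutil unmountDisk /dev/disk2

# Write the ESXi dd image to the raw device (rdisk is much faster than disk)
sudo dd if=VMware-VMvisor-Installable-4.1.0.dd of=/dev/rdisk2 bs=1m
```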
Now that we have ESXi running, there is an easy hack to get the on-board 1Gb Realtek 8111E NIC functioning as well. Plug your USB key into a computer and go to the partition Hypervisor1, where you will see the oem.tgz file. Download RTL8111_8168-8.018.00_P55_integr_SATA_Ctrl.(AHCI).oem.tgz from vm-help.com and rename it "oem.tgz". Replace the original oem.tgz with this new file, stick the USB key back into your server, and you will now have an additional NIC. During my testing I found that this NIC driver has a lot of setbacks. Currently, vMotion over it gets stuck at 10% and fails, and when it's set up as a VM Network NIC there is really strange behavior: a VM connected to this NIC can ping anything on the network, including Google, yet if you try to browse to google.com, it times out. If the VM is assigned to a Broadcom NIC, it functions as it should. I'm assuming it's a driver issue and will have to investigate later. For now, all traffic runs on the Broadcom NICs.
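From a Mac or Linux box with the USB key mounted, the swap is just a backup and a copy. The mount path below is an assumption based on how the volume shows up on my MBP:

```shell
# Back up the stock driver bundle before replacing it
cp /Volumes/Hypervisor1/oem.tgz /Volumes/Hypervisor1/oem.tgz.bak

# Drop in the Realtek bundle from vm-help.com under the name ESXi expects
# (quote the filename -- the parentheses are special to the shell)
cp "RTL8111_8168-8.018.00_P55_integr_SATA_Ctrl.(AHCI).oem.tgz" \
   /Volumes/Hypervisor1/oem.tgz
```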
My current home network didn't have gigabit connections, so I decided to upgrade. I ordered the TP-Link TL-WR1043ND Gigabit Router because of the good reviews on Newegg and its pretty great price point. I flashed it with DD-WRT to give myself inter-VLAN routing and the ability to segment my home network from the lab environment. That's where I spent Christmas Eve completely perplexed. After doing more research, I found out that only a few gigabit routers support VLANs at the moment. I tried OpenWRT as well, because I saw some reviews of VLANs working on there, but I couldn't figure it out. I gave up, went to the basement, and grabbed my Buffalo WHR-G54S, which was already flashed to DD-WRT v24SP1. After some tinkering around, I had inter-VLAN routing working as well as multiple DHCP scopes. After this initial proof of concept, I ordered a Linksys WRT320N Dual-Band Gigabit Router because of its 802.1q capability. I like this option much better than running Vyatta or any other soft router because now I have an all-in-one solution for my router, wireless, DHCP, VLANs, etc. Here is how I configured VLANs and multiple DHCP scopes on the WRT320N:
After this step you must do some cli work to continue:
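The CLI portion happens over telnet/SSH to the router. On a Broadcom-based DD-WRT build, the VLAN work is roughly the following; the port-to-VLAN numbers here are illustrative assumptions, so check the DD-WRT wiki page for your exact model's switch layout before committing:

```shell
# Map switch ports to VLANs in nvram (port 8 is the CPU port on this
# gigabit switch; the * marks the default VLAN -- numbers are examples)
nvram set vlan1ports="1 2 3 8*"
nvram set vlan2ports="0 8"     # WAN
nvram set vlan3ports="4 8"     # hypothetical lab VLAN on LAN port 4
nvram commit
reboot
```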
Now we can go back to the GUI and continue configuration.
Another option is to get a NetGear WNR3500L like Sean Crookston did or a Linksys WRT610N. Note, I changed the primary DNS for DHCP to an internal DNS server after I got the lab up and running.
To have managed gigabit connections for my home lab, I snagged an HP ProCurve 1810-8G off eBay, brand new, for ~$90 shipped, saving about $55. This switch has PoE (which means nothing in the lab), flow control, VLANs, and 802.1q trunking, and it's fanless, so it puts out 0 dB and is rated to use only 15 watts under full load. I battled between going for the 24-port or the 8-port model, but 8 will serve just fine: 6 ports for my ESXi hosts, 1 port for the Synology DS411+, and 1 port to connect back to the router. I also had to pick up some CAT6 cables from monoprice.com to tie it all together for about $20. The HP ProCurve 1810-8G has a green feature that will turn off all LEDs to reduce energy consumption as well. Every little bit counts.
The Synology DS411+ has a lot of features to use with VMware such as NFS, iSCSI, and Thin Provisioning with iSCSI.
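As one example of putting those features to use, mounting an NFS export from the DS411+ as a datastore is a one-liner from the ESXi 4.x console. The IP, export path, and datastore name are assumptions; substitute your Synology's values:

```shell
# Mount a Synology NFS export as an ESXi datastore
esxcfg-nas -a -o 192.168.1.50 -s /volume1/vmware synology_nfs

# List NAS datastores to confirm it mounted
esxcfg-nas -l
```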
Since I'm going to be building a brand new lab and will have to use a lot of different versions of Windows and Windows Server, I figured it would be easier to go 100% legit and purchase a Microsoft TechNet Professional Subscription. I was able to get it for $277 using the coupon code TNITQ413. Not only does a TechNet subscription give you OSes, but also SQL, Office, and lots more. It's well worth the money for a 1-year subscription.
I've now started building out my Windows AD servers and getting vSphere up and fully functional. If you take away the money I spent on the SD cards, the P440 meter reader, CAT6 cables, and TechNet, my total vSphere home lab cost comes to a grand total of.... $2091. Not bad if I do say so myself.
So why are they The Green Machines?
I first tested with just my old gaming rig. It's a custom built PC I originally built in 2003, but have been making steady upgrades and fixes along the way. Current Hardware Specs:
Windows XP SP3
AMD Athlon XP 2600+ (yeah, it's that old)
1.5 GB of RAM
Antec 900 Case with 3 fans, 2 with blue LEDs
Thermaltake 430 Watt PSU
ATI All-In-Wonder RADEON 8500 video card
Lite-On DVD R/W
Dual 10/100 Ethernet Card
Creative Audigy Sound Card (can't remember which kind)
Rosewill RAID Card.
2 250GB Western Digital Drives in RAID 0
1 Maxtor 320GB
1 Western Digital 120GB
Here is a (poor quality) video of testing power usage from this old rig. If you don't want to watch the videos, there are photos as well.
Average use around 165 watts:
The New vSphere Lab includes all of the following plugged into this P440:
APC Battery Backup for all equipment
2 ShuttleXPCs w/ Core i5 760 and 16GB of RAM. No Video Card, No Hard Drive, boot from USB
Synology DS411+ w/ 4 1TB Western Digital Caviar Black Hard Drives.
Linksys/Cisco WRT320N Router
Belkin 8 outlet surge protector
HP ProCurve 1810-8G
With the hosts and NAS powered off (not unplugged), the APC UPS, router, and switch consume around 60 watts.
Once the Synology NAS and Shuttle hosts are turned on, you will see spikes up to 220 watts, but average use is around 180-200 watts.
For adding a whole bunch of new equipment to the lab, that's a huge win in energy savings. The first test was just 1 stand-alone PC, not including the UPS, router, or switch, and it consumed an average of 165 watts. Removing that PC from the picture and testing the new vSphere home lab shows how much energy-efficient devices really save: I was able to run a UPS, router, switch, NAS, 2 hosts, and 3 VMs for only about 30 more watts.
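To put that 30-watt delta in dollars, here's a quick back-of-the-napkin calculation. The $0.11/kWh rate is an assumption; plug in your own utility rate:

```shell
RATE=0.11   # assumed $/kWh -- substitute your local electricity rate

# Annual cost of a load running 24/7 at a given average wattage
annual_cost() {
  awk -v w="$1" -v r="$RATE" 'BEGIN { printf "%.2f", w * 24 * 365 / 1000 * r }'
}

echo "Old gaming rig (165W avg): \$$(annual_cost 165)/yr"
echo "New vSphere lab (190W avg): \$$(annual_cost 190)/yr"
```

At those averages, the whole lab costs only about $24 more per year to run than the old desktop did by itself.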
I know what you are probably saying: "Holy crap, he didn't even use a single SSD!" That's true, I didn't. I'll do some performance testing and create a new post. From the feel of it all, everything runs quite fast and I haven't had a hiccup yet. I am also looking to get a zero-client thin client to do some View testing and to have a Windows desktop for the vSphere client, instead of always having a Fusion window open.
I originally bought my i5 760 processors without really caring about FT compatibility. Luckily, these Lynnfield processors just barely squeak onto the FT compatibility list, and they do in fact run FT.