There was a discussion happening on Twitter yesterday between Douglas A. Brown (if you don't know him, you should; he's one of the very few professional bloggers out there, and he founded www.dabacc.com) and David Davis about running multiple hypervisors in the datacenter.
David Davis had a Twitter poll going that asked, "How do you feel about multi-hypervisor in the datacenter?" Unfortunately there were only 16 entries, but the results were split down the middle. Douglas chimed in with a respectable opinion:
@douglasabrown: "@davidmdavis multi-hypervisor datacenters are the future no matter if we like it or not..."
This sparked an interest in my head, and I hope it sparks one in yours as well.
With any technology implementation there are hard and soft costs to evaluate. The hard costs are pretty simple to see. We know that vSphere is the Rolex of the virtualization world. Hyper-V comes in second as our Timex. KVM can be considered our sundial (because it's free). All three products will tell time accurately, but the costs come with features, functions, and more options. That's the hard cost that most CXOs will see. So why not go the completely free and open-source route of KVM + OpenStack? That's where the soft costs come into play.
90% of us got our start in the virtualization world with ESXi. It's been the go-to product for mission-critical applications because of its consistency, reliability, and support. As more hypervisors make an entrance, there is an argument to say that the "hypervisor is a commodity". Sure, it's easy to scoff at that statement or nod along with it, but let's examine the soft costs.
The soft costs need to be evaluated for ANY new technology that makes its way into your datacenter. I'm sorry to tell you, there is not a single product on the market that is "set it and forget it" when it comes to IT. There is going to be continual monitoring and maintenance for patches, upgrades, etc. Data is never at rest and only continues to grow. Someone needs to be able to manage that infrastructure and add new capacity.
- Staff
- With any new technology there will be a learning curve before your staff becomes well versed. Does your staff have the free time to go to training? What kind of time commitment does it take to start learning a new technology? Don't get me wrong, we should always be bettering ourselves, but time is usually stretched pretty thin for most of us.
- If the current staff is stretched too thin to learn a new technology, then you have to hire a new team of people to manage this kit. Would it have been easier to scale up the existing technology and have your current team manage it? I know adding another cluster of vSphere hosts is going to be easier for a team well versed in vSphere to manage than a completely new Hyper-V system.
- Do you end up outsourcing or bringing in expensive help?
- The ecosystem
- This is an often-overlooked piece. Have you thought about monitoring applications for the new hypervisors? Sorry, vCenter isn't going to manage KVM. Perhaps you had another vSphere monitoring product like VMTurbo, Xangati, or vCOps. Yep, screwed again. Time to invest in multiple products so you can monitor all these different hypervisors. So how many different windows are you looking at to monitor your infrastructure?
- Backup. Oh no! Did someone say the B word? Same story as the bullet point before this one. How many different backup products do you need to back up virtual machines on different hypervisors? Are you going to rewind the clocks back to 2007 and start putting backup agents on every VM again?
- DR. Crap, I threw you for a loop, didn't I? We all know, love, and trust SRM. What products or processes do you need in order to have DR plans for those other VMs on different hypervisors? DR takes into account a lot more than technology; it's truly about Business Continuity (BC). BC is about the processes and procedures to meet SLAs. How many different processes and technologies does it take to have a true DR/BC solution for your company? Talk about a mess of paperwork and documentation.
The evolution of cloud brings an interesting scenario to the table. If you are focused higher in the stack, does the hypervisor even matter? I don't know about you, but I have yet to see a product that can monitor, manage, deploy, and move workloads between clouds using different hypervisors. So it seems to me that your choice of hypervisor is still critical. Again, this all relates to IaaS-type offerings. If you're smart and are already developing software for PaaS or other new types of scale-out applications, then this point becomes pretty moot. Back to the topic: the ability to migrate workloads between clouds still isn't foolproof. It requires downtime for that VM, and probably some reconfiguration, to land it on a different hypervisor. As you begin the journey into this hybrid cloud scenario, the types of clouds you deploy to will still affect your ability to monitor and manage them from as few windows as possible.
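For concreteness, here's a rough sketch of what that cross-hypervisor move looks like at the disk level. This is an illustration, not a full runbook: the VM name and disk size are made up, and it assumes the qemu-img tool (from qemu-utils) is available. Note that every step happens with the VM powered off, which is exactly the downtime described above.

```shell
# Sketch: moving a vSphere VM's disk over to KVM. The VM must be powered
# off for the export and conversion; there is no live path here.
# "app01" and the 64M size are hypothetical stand-ins.

if command -v qemu-img >/dev/null 2>&1; then
    # Stand-in for the VMDK you'd export from vSphere (the OVF export
    # step itself is not shown).
    qemu-img create -f vmdk app01-disk0.vmdk 64M

    # Convert the VMDK to qcow2, KVM's native copy-on-write format.
    qemu-img convert -f vmdk -O qcow2 app01-disk0.vmdk app01-disk0.qcow2

    # Inspect the result. From here you'd still define a new KVM guest
    # around the disk (virsh define / virt-install) and fix drivers,
    # tools, and IPs inside the guest by hand.
    qemu-img info app01-disk0.qcow2
else
    echo "qemu-img not installed; skipping conversion demo"
fi
```

None of this is hard, but none of it is automatic either, and the reverse trip (qcow2 back to VMDK) is just as manual.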
I still think multiple hypervisors will show up in datacenters, but TODAY, it's doubtful you will be running production applications on all of them. I would envision lower-tier hypervisors being used for Dev/Test while tier 1 handles production. But then you have to figure out how to move things from Dev/Test to the production infrastructure. Again, more hoops to jump through and time spent. Your time is a valuable resource. Just remember all the things that need to be accounted for beyond the cost of the hypervisor alone.
In the future, there will be multiple hypervisors once someone figures out how to monitor, manage, deploy, and move workloads between clouds from a single window. It's a tough ask. Or maybe all hypervisors will go open-source: there will be a single hypervisor, and companies will differentiate themselves on the ecosystem of products and support around it. Until then, there are going to be silos.
I welcome any thoughts and opinions.