
Fixing Storage Alignment for Virtual Machines

 

In a series of blog posts, I'm going to be covering some of the basics that people tend to overlook. Let's forget about cloud and look back to the real reason we started virtualizing in the first place: the virtual machine. The virtual machine is a key component of cloud, but having machines that are lean and clean allows for greater density and better performance.

 

Last week I touched on P2V Clean-up, and aligning your storage is very much a part of the P2V clean-up process.

 

There are plenty of great blog posts out there about storage alignment, so I'm not going to dive too deep into it; there's no need to beat that horse to death much longer. Two of the best easy-to-read beginner articles I've come across are Best Practice for File System Alignment in Virtual Environments by NetApp and My #1 Issue with VMware ESXi Today by Aaron Delp. Why all the fuss? Misaligned VMs negatively impact performance, resulting in increased I/O and seek time. Everyone is aware of the alignment issue, but not everyone is sure how to fix it.

 

There are three layers to consider in virtual alignment: the array, VMFS, and the VM.
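To make the guest-level piece of this concrete, here is a minimal Python sketch of the alignment math. The 4 KB array block size and the sample offsets are assumptions for illustration; the 63-sector (32256-byte) offset is the classic default on older Windows guests that causes misalignment in the first place.

```python
# Minimal sketch: check whether a guest partition's starting offset lines up
# with the array's block size. The 63-sector (32256-byte) offset used by older
# Windows guests is the classic misalignment; 64 KB or 1 MB offsets are aligned.

ARRAY_BLOCK_SIZE = 4096          # assumption: 4 KB array blocks

def is_aligned(partition_offset_bytes: int, block_size: int = ARRAY_BLOCK_SIZE) -> bool:
    """A partition is aligned when its starting offset is an exact multiple
    of the underlying array block size."""
    return partition_offset_bytes % block_size == 0

for offset in (63 * 512, 64 * 1024, 1024 * 1024):   # 32256, 65536, 1048576 bytes
    status = "aligned" if is_aligned(offset) else "MISALIGNED"
    print(f"partition offset {offset:>8} bytes -> {status}")
```

When the offset is a whole multiple of the array block size, each guest I/O maps to whole array blocks instead of straddling two, which is the entire point of aligning all three layers.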

Read more: Fixing Storage Alignment for Virtual Machines

Changing HAL for Uniprocessor and Multiprocessor VMs

In a series of blog posts, I'm going to be covering some of the basics that people tend to overlook. Let's forget about cloud and look back to the real reason we started virtualizing in the first place: the virtual machine. The virtual machine is a key component of cloud, but having machines that are lean and clean allows for greater density and better performance.

 

Last week I touched on P2V Clean-up and the 1vCPU goal. This post continues with the 1vCPU goal and addresses the issues of moving between single-processor and multi-processor VMs.

 

As I said last week, we should be striving to keep our virtual environment lean and clean by moving to a standard of 1 vCPU for all VMs that don't run multi-threaded applications. So what happens if we P2V a dual-core physical machine that only needs 1 vCPU?

 

The HAL, or Hardware Abstraction Layer, is the software that talks directly to the hardware so the applications don't have to. The HAL provides routines that enable a single device driver to support a device on different hardware platforms, making device driver development much easier. It hides hardware-dependent details such as I/O interfaces, interrupt controllers, and multiprocessor communication mechanisms. Applications and device drivers are no longer allowed to deal with hardware directly and must make calls to HAL routines to determine hardware-specific information. Thus, through the filter provided by the HAL, different hardware configurations can be accessed in the same manner (Microsoft KB Article ID: 99588).
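As a quick illustration of the uniprocessor/multiprocessor distinction, here is a small Python sketch that reports which Windows HAL you would expect for a given vCPU count. The HAL names match what Device Manager shows under the Computer node; actually swapping the HAL is done through Device Manager, not from a script, so treat this as a sanity check only.

```python
# Minimal sketch: infer which Windows HAL a VM should be running after a vCPU
# change. "ACPI Uniprocessor PC" / "ACPI Multiprocessor PC" are the names
# Device Manager shows under the Computer node; the actual HAL swap happens
# there, not here.
import os

def expected_hal(vcpu_count: int) -> str:
    return "ACPI Multiprocessor PC" if vcpu_count > 1 else "ACPI Uniprocessor PC"

vcpus = os.cpu_count() or 1      # processors visible to the guest OS
print(f"vCPUs visible to the guest: {vcpus}")
print(f"Expected HAL for this configuration: {expected_hal(vcpus)}")
```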

Read more: Changing HAL for Uniprocessor and Multiprocessor VMs

P2V Cleanup Game

In a series of blog posts, I'm going to be covering some of the basics that people tend to overlook. Let's forget about cloud and look back to the real reason we started virtualizing in the first place: the virtual machine. The virtual machine is a key component of cloud, but having machines that are lean and clean allows for greater density and better performance.

 

We all began the virtualization journey with Physical-to-Virtual (P2V) migrations, converging infrastructure to run multiple virtual machines on top of beefy physical hardware. Along with this comes what I call the P2V Dilemma.

 

A physical server needs drivers to talk to its hardware. A virtualized server, in essence, does the same thing: VMware presents new hardware that allows the virtual machine to talk to its hypervisor. Think of the driver issue in a P2V migration like removing a video card and installing a new one. The old drivers and hidden hardware are still there, just in case you feel like popping the video card back in. A P2V'd machine will still have all of the old hardware in the virtual machine. Don't forget about the services that physical servers used either, namely HP server management services.
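One practical way to chase down that leftover hardware is Microsoft's documented trick of launching Device Manager with non-present devices visible. Here is a rough Python sketch of that approach; it assumes you are running it inside the Windows guest with administrative rights.

```python
# Minimal sketch: launch Device Manager on a P2V'd Windows guest with
# non-present (ghost) devices visible, so the stale physical hardware can be
# removed. Setting devmgr_show_nonpresent_devices=1 before starting
# devmgmt.msc is the documented way to expose that hidden hardware.
import os
import subprocess

env = os.environ.copy()
env["devmgr_show_nonpresent_devices"] = "1"

# Device Manager inherits the variable; enable View > Show hidden devices
# once it opens, then uninstall the greyed-out physical NICs, storage
# controllers, and vendor management devices left behind by the P2V.
subprocess.Popen(["mmc", "devmgmt.msc"], env=env)
```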

Read more: P2V Cleanup Game

SolarWinds Orion APM Preview - Call for Testers

SolarWinds is looking for beta testers for a new product they are launching. Read below.

 


SolarWinds is preparing to enter the app-server management market with the launch of Orion APM in early Q1 2011. Over the next few weeks, SolarWinds is offering the opportunity to preview the product before it is available to everyone else. They're looking for a few good sysadmins and IT managers to review the new Orion Application Performance Monitor pre-launch. The process is pretty simple: there are a few qualifications that folks need to meet, but if you get yourself an invite, the company will give you some free software and a Flip cam, and will just ask you to talk honestly about your experience trying out the product. You can get more info here.


Read more: SolarWinds Orion APM Preview - Call for Testers

1vCPU, The Standard Goal

In a series of blog posts, I'm going to be covering some of the basics that people tend to overlook. Let's forget about cloud and look back to the real reason we started virtualizing in the first place: the virtual machine. The virtual machine is a key component of cloud, but having machines that are lean and clean allows for greater density and better performance.

 

I've gone into plenty of environments where every single VM is configured with 2 vCPUs or more. Why is this such a problem? It all comes down to CPU cycles and ready time.

 

We as VMware admins have a virtualize-first policy, but even more so, we should have a 1 vCPU-first policy. Setting new virtual machines to 1 vCPU will result in an environment that isn't constantly fighting for CPU cycles. A 2-socket server with quad-core processors gives a host 8 cores of CPU. A VM with 1 vCPU can run on any one of those 8 cores whenever one is available. That "whenever it's available" is referred to as CPU ready time and can be viewed in esxtop. The ready time of a virtual machine, measured in milliseconds, is best described as the time the VM was ready to execute but had to wait for CPU resources. Therein lies the problem with 2 vCPU, 4 vCPU, and larger VMs (a quick way to turn ready time into a percentage is sketched after the list below).

  • 1 vCPU VM - can execute its CPU cycle when any 1 of the 8 cores is available
  • 2 vCPU VM - can only execute its CPU cycle when 2 of the 8 cores are available
  • 4 vCPU VM - can only execute its CPU cycle when 4 of the 8 cores are available
  • 8 vCPU VM - can only execute its CPU cycle when all 8 cores are available
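As mentioned above, here is a minimal Python sketch that turns the CPU ready summation vCenter reports (milliseconds per sample) into a percentage. The 20-second interval matches the real-time performance chart; the sample values are made up for illustration.

```python
# Minimal sketch: convert the CPU ready summation that vCenter reports (in
# milliseconds per sampling interval) into a ready percentage. The 20-second
# interval is the real-time chart's sampling period; the example values below
# are illustrative only.

def ready_percent(ready_ms: float, interval_seconds: int = 20) -> float:
    """Percentage of the sampling interval the vCPU spent waiting to be scheduled."""
    return (ready_ms / (interval_seconds * 1000.0)) * 100.0

for sample_ms in (50, 400, 2000):            # example summation values
    pct = ready_percent(sample_ms)
    print(f"{sample_ms:>5} ms ready over 20 s -> {pct:.1f}% CPU ready")
```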

 

Read more: 1vCPU, The Standard Goal

Free 101 Level Video Training from Cisco

As I was sifting around, I came across some good public content from Cisco that many can benefit from. A lot of this content is really basic, 101-level stuff, maybe even some 102-level, but if you need to get the basics down on virtualization, storage, networking, etc., this is a great place to start. To watch these videos, you must have a brighttalk.com account.

 

Each course runs about 1 hour:

  • Server Virtualization Basics
  • Server Virtualization Advanced
  • Networking 101
  • Unifying Virtualization Infrastructure
  • Nexus 1000V Basics
  • UCS Basics
  • VMware View4

 

Read more: Free 101 Level Video Training from Cisco

Video: VMworld Talking About Xangati

Is the ride over yet? NEVER! Xangati has posted a quick 5-minute YouTube video of David Davis and me from our VMworld presentation on the Top 10 Free Tools for vSphere Management. There is more to be said about the product than what's in the video, so go check it out!

 

Read more: Video: VMworld Talking About Xangati

Inherent Security Features of ESX/ESXi

On my flight home from Boston, I started reading some of my VCAP Landing Page materials and found some cool things I never knew existed. Everything listed can be found in the ESX Configuration Guide and ESXi Configuration Guide for vSphere 4.1. These are all features that revolve around the security of vSphere hosts. The features don't have to be enabled; they are all inherent to the operating system, giving vSphere an enhanced security profile.

 

Memory Hardening - The ESX/ESXi kernel, user-mode applications, and executable components such as drivers and libraries are located at random, non-predictable memory addresses. Combined with the non-executable memory protections made available by microprocessors, this provides protection that makes it difficult for malicious code to use memory exploits to take advantage of vulnerabilities.

 

Kernel Module Integrity - Digital signing ensures the integrity and authenticity of modules, drivers and applications as they are loaded by the VMkernel. Module signing allows ESXi to identify the providers of modules, drivers, or applications and whether they are VMware-certified.
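To illustrate the digital-signing idea, here is a conceptual Python sketch using the third-party cryptography package. This is not how the VMkernel actually verifies modules; it simply shows the general pattern of checking a vendor signature over the module bytes against a trusted public key before agreeing to load anything.

```python
# Conceptual sketch only: NOT the VMkernel's implementation, just the general
# digital-signing pattern -- verify the vendor's signature over the module
# bytes against a trusted public key before loading the module.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def module_is_trusted(module_bytes: bytes, signature: bytes, public_key_pem: bytes) -> bool:
    public_key = serialization.load_pem_public_key(public_key_pem)
    try:
        # verify() raises InvalidSignature if the module was tampered with or
        # was not signed by the holder of the matching private key
        public_key.verify(signature, module_bytes,
                          padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```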

 

Read more: Inherent Security Features of ESX/ESXi

Finding the True Value in Vblock

Yesterday the twitterverse was filled with Vblock talk from some of the top people in the industry. That, coupled with my having just wrapped up Vblock week, is what really sparked this post.

 

Vblock week was a gathering of #Team04, which included a handful of new vSpecialists but a majority of vArchitects. The week was filled with talks from our fearless leader, Trey Layton, a lot of UCS cramming from Scott Lowe, getting our hands on UIM, getting more familiar with VMware View, learning the offerings from RSA, and deep diving into Symmetrix and VMAX with David Robertson. This post isn't geared at the technical offerings; the goal is to make you see the "true value" in Vblock.

 

A Vblock is a union of best-of-breed technologies that enables the consumption of resources we call cloud. If you are the techie who wants to pick apart a Vblock and get into RAID levels or UCS blade configs, that's awesome, but that's not what Vblock is about. Picture the Vblock as a whole: not a combination of products, but a single SKU that can provide a cloud solution. Are you getting the picture yet? Let's dive a bit deeper.

 

Read more: Finding the True Value in Vblock

