Why vSphere Needs NFSv4

If you are familiar with my blog, you'll know that I'm a huge advocate of the NFS protocol with VMware. I firmly believe that over the next few years, Ethernet storage will be the front-runner of VMware deployments. Most of the people I talk to with a Fibre Channel (FC) based environment are in large enterprises that made the switch to VMware but kept their existing FC infrastructure. That's great, but now that everyone is starting to virtualize their whole environment, money talks when it comes to scalability. I won't go into Ethernet vs. FC because there are boatloads of information already out there, so let's talk about NFS. NFS is that guy sitting in the corner who doesn't get much attention, but it's making headway in the marketplace. VMware must be taking notice, since the latest vSphere update adds the ability to see NFS stats in ESXTOP.

There are a few primary reasons why I like to use NFS over iSCSI:

  • Simplicity: Setting up NFS is about as easy as it comes. Log into your SAN, create a new share, give your ESX server root access, go to your host, enable the NFS client, add the NFS share from the vSphere client, and you're done (a quick scripted sketch of the vSphere side follows this list). Unlike iSCSI, you don't have to worry about initiator names (IQNs), forgetting to enable the software adapter, or formatting the datastore with VMFS. Or, if you're coming from an FC world, there's no extra equipment to deal with.
  • Block vs File: I'm not going to claim to be a storage expert, because I'm not. But when writing a file to block storage, depending on the block size you use, you end up consuming block space even for a tiny 1k file. Whereas with NFS you are working at the file level, so that space can be reclaimed.
  • Default Thin Provisioned: Don't worry about configuring thin provisioning or wondering if your storage provider supports it; NFS datastores are thin provisioned out of the box.
  • Scalable: Grow and shrink datastores on the fly
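To make the simplicity point concrete, here's a minimal sketch of adding an NFS export as a datastore through the vSphere API using pyVmomi (the Python SDK). The vCenter address, credentials, NAS IP, export path, and datastore name are all hypothetical placeholders, and the export on the array still has to grant the ESX host root access for the mount to succeed.

```python
# Minimal sketch: mount an NFS export as a datastore on every host via pyVmomi.
# All names, addresses, and credentials below are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab shortcut; use real certs in production
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)

    # Describe the NFS export exactly as you would in the vSphere client.
    spec = vim.host.NasVolume.Specification()
    spec.remoteHost = "192.168.10.50"       # NAS filer IP (placeholder)
    spec.remotePath = "/vol/vmware_nfs01"   # exported path on the filer (placeholder)
    spec.localPath = "nfs_datastore01"      # datastore name ESX will show
    spec.accessMode = "readWrite"

    for host in view.view:
        host.configManager.datastoreSystem.CreateNasDatastore(spec)
        print(f"Mounted {spec.remotePath} on {host.name} as {spec.localPath}")
finally:
    Disconnect(si)
```

Compare that with the iSCSI workflow of enabling adapters, handling initiator names, rescanning, and formatting VMFS; the NFS side is one spec and one call per host.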

NFS over iSCSI as it relates to specific vendors:

  • NetApp: It's a well-known fact that NetApp touts the NFS protocol as its cream of the crop. Performance is slightly lower compared to iSCSI, but NFS gives better de-dup ratios.
  • EMC: At a previous job, we had a Celerra NS42 and tried to replicate to a Celerra NS352. We found out that to replicate a 1TB iSCSI LUN, you need 4.4x the amount of space: your 1TB LUN has to sit inside a 2.2TB filesystem (replicating an iSCSI LUN carries roughly 120% overhead), and you need another 2.2TB sitting on the receiving end (the quick math is sketched after this list). Talk about lots of wasted and expensive storage. We made the decision to migrate to NFS because the filesystem uses about 5-10% overhead, saving tons of space. Here is an article I wrote in Nov. 2008: The Proper Way To Mount NFS Filesystems with EMC Celerra
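Here's the back-of-envelope math behind that Celerra experience. The 120% iSCSI replication overhead and the 5-10% NFS filesystem overhead are the figures from the scenario above, not official EMC sizing guidance.

```python
# Back-of-envelope replication footprint for a 1TB LUN/filesystem, using the
# overhead figures from the Celerra example above (not official EMC sizing).
lun_size_tb = 1.0

# iSCSI: the LUN must live in a filesystem ~120% larger than itself,
# and the replication target needs the same amount again.
iscsi_per_side_tb = lun_size_tb * 2.2
iscsi_total_tb = iscsi_per_side_tb * 2
print(f"iSCSI: {iscsi_total_tb:.1f} TB total ({iscsi_total_tb / lun_size_tb:.1f}x the LUN)")

# NFS: roughly 5-10% filesystem overhead on each side; use the worst case.
nfs_per_side_tb = lun_size_tb * 1.10
nfs_total_tb = nfs_per_side_tb * 2
print(f"NFS:   {nfs_total_tb:.1f} TB total ({nfs_total_tb / lun_size_tb:.1f}x the filesystem)")
```

That works out to 4.4TB of storage to replicate 1TB over iSCSI versus about 2.2TB over NFS.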

Improvements from NFSv3 to v4 and how they can relate to VMware:

  • A Stateful Protocol: a client can gain exclusive access to a file on the NFS datastore on a "lease," or temporary, basis. This allows it to lock, read, write, and unlock without ever having to worry about corrupting VMDKs or a failed NFS server keeping stale locks on a file.
  • File Delegation: the client accesses a file on the NAS and can modify that file within its own cache without sending any network requests, reducing the amount of traffic and boosting overall performance.
  • Compound RPCs: the protocol can send multiple operations, such as OPEN, WRITE, and READ, within one request, cutting down on network traffic (a rough round-trip comparison follows this list).
  • Pseudo-MPIO: the client establishes a session with the NAS and can make multiple TCP connections to the server, going out different interfaces and arriving at different interfaces on the NAS, finally giving NFS users a form of MPIO. MPIO is already available for iSCSI and FC users, so give the NFS people a shot.
  • High*er* Availability: the pNFS concept is kind of like VMware HA: multiple clustered NFS servers can handle all NFS requests. If a server fails, the connection is re-routed to another NFS server, and since the locking mechanisms within the protocol ask the client to "check in" the file every 45 seconds, you are looking at about a minute of downtime for a VM before it re-establishes the connection.
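As a rough illustration of the compound RPC point, here's a quick sketch comparing the round-trip cost of issuing each operation as its own RPC versus bundling them into a single COMPOUND request. The 0.5 ms round-trip time and the three-operation sequence are assumptions chosen purely for illustration, not measurements.

```python
# Rough round-trip comparison: one RPC per operation (NFSv3 style) vs. one
# COMPOUND request carrying all of them (NFSv4). RTT, operation count, and
# file count are illustrative assumptions.
rtt_ms = 0.5          # assumed round-trip time to the NAS
ops_per_file = 3      # e.g. look up, open/check access, and read a small file
files = 10_000

v3_round_trips = files * ops_per_file   # each operation is its own request
v4_round_trips = files * 1              # one COMPOUND per file

print(f"NFSv3-style:    {v3_round_trips * rtt_ms / 1000:.1f} s spent on round trips")
print(f"NFSv4 COMPOUND: {v4_round_trips * rtt_ms / 1000:.1f} s spent on round trips")
```

Under those assumptions the same workload spends a third of the time waiting on the wire, which is the whole point of bundling operations into one request.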

Some good blogs to read:

NFSv4 - Security, High Availability, Internationalization and Performance (5 part series)

The Maturing of High Availability and Reliability Features in NFS v4 are Changing the Role of NAS in the Data Center

ESXTOP displays statistics for NFS Datastores – Video
