

More NFS v4 with vSphere & Chinwagging with Mike Laverick

I hope everyone is familiar with Mike Laverick's podcast series called the Chinwag. Mike approached me a few weeks ago about being on the wag, and I was more than happy to oblige. We discussed some topics relating to free vSphere tools, NFSv4, and VMworld. Don't know why my eyes look so squinty... must be because it was in the early AM and I was looking down into a camera :)

 

Chinwag with Mike… Kenny Coleman [Episode 23]

 

Here are some talking points we didn't have enough time to chat about, as an addition to my Why vSphere Needs NFSv4 post:

Some of the features added going from NFS version 3 to version 4 could give VMware environments a big boost.

NFSv4 is now a stateful protocol. It basically gives a client temporary, lease-based access to a server: instead of grabbing a lock on a file and walking away, the client must check back in with the server every 45 seconds or the file becomes unlocked and the next client can access it. For VMware deployments, this could mean less chance of corrupted VMs if a server or NAS were to fail while still holding locks on a file.
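To make that renew-or-lose behavior concrete, here's a toy Python sketch of lease-based locking. It's purely illustrative; the class and method names are made up, and real NFSv4 state handling lives inside the client and server implementations, not in anything like this.

```python
import time

# Toy model of NFSv4 lease-based ("stateful") locking -- not real NFS code.
# The 45-second figure is just the lease period used for this example.
LEASE_SECONDS = 45

class LeaseServer:
    def __init__(self):
        self.locks = {}  # path -> (client_id, lease expiry timestamp)

    def acquire(self, client_id, path):
        owner = self.locks.get(path)
        if owner and owner[0] != client_id and owner[1] > time.time():
            return False  # someone else still holds a live lease
        self.locks[path] = (client_id, time.time() + LEASE_SECONDS)
        return True

    def renew(self, client_id, path):
        """The client must call this periodically or the lock silently expires."""
        owner = self.locks.get(path)
        if owner and owner[0] == client_id and owner[1] > time.time():
            self.locks[path] = (client_id, time.time() + LEASE_SECONDS)
            return True
        return False  # lease already lost

# If a host dies mid-write it simply stops renewing; after ~45 seconds another
# client's acquire() succeeds instead of hitting a stale, orphaned lock.
```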



We also have file delegation. File delegation allows a client to talk to a NAS device and open up a write session, while at the same time multiple other clients can have read-only sessions on that same exact file. This could potentially benefit VMware backups. Currently we have to take a snapshot and quiesce the filesystem so everything is written to disk, then take that read-only snapshot and copy it somewhere else. What if we could quiesce the filesystem but remove the need for snapshots? If we can have a read-only session on the current file, we can easily make a copy of it. File delegation also lets you create a sort of tunnel to the file: within this tunnel you can keep sending requests without constantly closing it and opening it back up, which heavily cuts down on network traffic.
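Here's a rough Python sketch of the traffic-reduction side of delegation: once the server has delegated a file to a client, repeated accesses can be satisfied locally instead of going back over the wire each time. Again, this is a toy model with invented names, not a real NFS client.

```python
# Toy model of NFSv4 file delegation -- illustrative only.

class Server:
    def __init__(self):
        self.rpc_count = 0  # how many requests actually hit the network

    def open_read(self, path, delegate=False):
        self.rpc_count += 1
        return {"path": path, "delegated": delegate}

class Client:
    def __init__(self, server):
        self.server = server
        self.delegations = {}

    def read(self, path):
        if path in self.delegations:
            return "served from local cache"  # no round trip needed
        handle = self.server.open_read(path, delegate=True)
        self.delegations[path] = handle
        return "served by server, delegation granted"

srv = Server()
cli = Client(srv)
for _ in range(100):
    cli.read("/vol/nfs01/vm1-flat.vmdk")
print(srv.rpc_count)  # 1 -- the other 99 reads never touched the wire
```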

In addition to that, NFSv4.1 added pNFS, or parallel NFS. It's basically like VMware HA combined with MPIO: you have a cluster of NFS servers, a client can send requests out of different interfaces, and data can come back to the client over multiple parallel network connections, all while keeping that "tunnel" up. Having clustered NFS servers also gives you higher availability: if a particular server goes down, a new route is picked and established. This could be good for VMware because if you lose the connection to a VM due to a failed server, there is that 45-second window for the stateful protocol, so you're looking at around a minute of downtime before the new path to the storage takes over.
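To picture the parallel part, here's a small Python sketch of a client reading stripes of a file from several data servers at once. The hostnames and the striping scheme are made up for illustration; real pNFS layouts are handed out by the metadata server.

```python
# Toy sketch of the pNFS idea: stripe reads across multiple data servers in
# parallel instead of funneling everything through one connection.
from concurrent.futures import ThreadPoolExecutor

DATA_SERVERS = ["nas-a", "nas-b", "nas-c"]   # hypothetical pNFS data servers
STRIPE_SIZE = 1 << 20                        # 1 MiB stripes

def fetch_stripe(server, offset):
    # Stand-in for an NFS READ sent to one particular data server.
    return f"{STRIPE_SIZE} bytes @ {offset} from {server}"

def read_parallel(total_bytes):
    offsets = range(0, total_bytes, STRIPE_SIZE)
    with ThreadPoolExecutor(max_workers=len(DATA_SERVERS)) as pool:
        futures = [
            pool.submit(fetch_stripe, DATA_SERVERS[i % len(DATA_SERVERS)], off)
            for i, off in enumerate(offsets)
        ]
        return [f.result() for f in futures]

# Three stripes land on three different servers at the same time; if one server
# drops, the client re-requests its stripes from a surviving path once the
# lease window expires.
print(read_parallel(3 * STRIPE_SIZE))
```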

One thing I forgot to mention on my blog post, and got reamed for in the comments, was the security enhancements. Currently NFS security is pretty weak: you grant root access to a datastore by listing your ESX host's IP, which is pretty insecure seeing as how easy it is to spoof an IP. NFSv4 makes better use of Kerberos, so we can start having CIFS-like security on these shares, and it also adds ACLs.
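Just to spell out the difference in trust models, here's a tiny Python toy. It isn't real NFS server code and the function names are invented; it only contrasts "trust the source IP" with "trust a Kerberos ticket."

```python
# Toy contrast of the two trust models described above -- illustrative only.

def v3_style_export_check(source_ip, allowed_ips):
    # NFSv3-era model: root access goes to whatever the packet's source IP
    # claims to be, which a spoofed packet can satisfy.
    return source_ip in allowed_ips

def v4_style_export_check(ticket, verify_krb5_ticket):
    # NFSv4 with Kerberos (sec=krb5): access rides on a cryptographic ticket
    # tied to a principal, not on an easily forged address.
    return verify_krb5_ticket(ticket)

print(v3_style_export_check("192.168.1.50", {"192.168.1.50"}))  # True for
# anyone who manages to present that source IP
```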

NFSv4 has been around since 2000, and v4.1 was released earlier this year, so hopefully we will see something in the works. Really, MPIO is what I want. With NFS, it doesn't matter how many NICs you team together, because VMware will only send a given TCP connection's data up one particular link. The only way you can boost that minimal performance is by using etherchannel or making the jump to 10GbE.
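Here's a rough Python illustration of why teaming doesn't help a single datastore: with "Route based on IP hash" (etherchannel), the uplink is chosen from a hash of source and destination IPs, so one host talking to one NFS target IP always lands on the same physical NIC. The hash below is a simplification of that documented behavior, not VMware's exact code, and the addresses are made up.

```python
# Simplified illustration of IP-hash uplink selection -- not VMware's code.
import ipaddress

UPLINKS = 2  # number of teamed physical NICs

def chosen_uplink(src_ip, dst_ip):
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return (s ^ d) % UPLINKS

esx_vmk = "10.0.10.21"
print(chosen_uplink(esx_vmk, "10.0.20.50"))  # one datastore IP -> always the
                                             # same uplink, no matter how many
                                             # NICs are in the team
print(chosen_uplink(esx_vmk, "10.0.20.51"))  # a second target IP may hash to
                                             # the other uplink -- today that's
                                             # the only way to spread NFS load
```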

