[Update 10/2 FIXED!] Synology Users DO NOT Upgrade To vSphere 5.1
Written by Kendrick Coleman
Tuesday, 02 October 2012 11:54
I totally meant to get this out a few days ago but am now finally catching up. During the beta testing period, I had all kinds of problems getting 5.1 working. Come to find out, Synology DSM is not compatible with vSphere 5.1 over BOTH iSCSI and NFS. I HAVE NOT had a chance to test the GA version, but beta build 732143, paired with the DSM 4.1 beta, exhibited this issue. This has also been confirmed by Jason Nash and Jeremiah Dooley, so I know it's not just me.
It's possible to mount both NFS and iSCSI datastores, but when you try to build or power on a VM, everything halts as soon as it needs to write to disk. If everything runs in memory (like mounting an ISO and beginning an OS installation) it looks like it's working, but when it comes time to install the OS to disk, it begins to crawl. I checked the vmkernel log and here is what is shown. You can see that after I power on the VM, it gets bound to a port, but when it tries to read from disk, there are all sorts of errors:
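If you want to triage a vmkernel log like this one programmatically rather than eyeballing it, a quick error-line count can tell you whether a host is throwing a handful of errors or hundreds. This is a generic sketch; the sample lines below are placeholders, not real vmkernel output, and the keyword list is an assumption you'd tune to the errors you actually see.

```python
import re

# Common markers of storage trouble in ESXi vmkernel logs (assumed list;
# H:0x/D:0x are the host/device SCSI status codes ESXi prints).
ERROR_PATTERN = re.compile(r"(ScsiDeviceIO|WARNING|failed|H:0x|D:0x)", re.IGNORECASE)

def count_error_lines(log_text):
    """Return the number of log lines matching common error markers."""
    return sum(1 for line in log_text.splitlines() if ERROR_PATTERN.search(line))

# Placeholder lines standing in for real vmkernel output:
sample = """\
2012-09-12T12:00:01Z cpu0: vmkernel: normal heartbeat line (placeholder)
2012-09-12T12:00:02Z cpu1: WARNING: placeholder I/O error line
2012-09-12T12:00:03Z cpu1: ScsiDeviceIO: placeholder command failure
"""
print(count_error_lines(sample))  # 2
```

On a live host you would feed this the contents of /var/log/vmkernel.log and compare counts before and after a VM power-on.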
I made the Synology team well aware of this bug over a month ago. They have been slow to get a 5.1 environment up and running to test, but if you want to help fan the flames, please open a Synology Support ticket so they know this is actually critical.
Stay tuned to this site and I will let you know when it is safe to upgrade to vSphere 5.1. I plan to test the GA versions of both 5.1 and DSM 4.1 this Friday, 9/14.
Updated 9/12/2012 07:45AM EST
Thanks to two of my readers below (Steve and J) in the comment section, it seems that I don't need to bother trying the GA versions of vSphere 5.1 and DSM 4.1. It's completely broken. Now it's up to YOU, Synology owners, to take a stand. Submit a Synology Support ticket if you are a Synology owner. The more tickets they receive, the higher priority this will get. If you would like, you can mention Support #137146, which I submitted on 8/3/12 detailing this severe bug.
Updated 9/13/2012 11:28PM EST
I received an email from Synology and they are testing 5.1 (that's the good news). Here is the email:
Our engineer was not able to reproduce the iSCSI issue in ESX 5.1-799733 using both DS1512+ and RS3412RPxs running 4.1-2636. However, we were able to reproduce the NFS issue. Our developers are working hard to resolve this specific issue with VMWARE and NFS. However, please note that we are currently not certified to operate with VMWARE over NFS protocol so we do not recommend to use NFS in a production ESX environment.
If you are able to reproduce this issue over iSCSI using ESX 5.1-799733, please provide the reproduce steps and environment details.
So I went ahead and tried testing. I had another USB stick sitting around, so I loaded ESXi 5.1 build 799733 and booted up. On the Synology, I created a new iSCSI target and associated 2 LUNs with it. I ran some tests and I think their engineer is running into some false positives. It may seem like it's working, but it's not; the performance is severely degraded. I have 2 servers, 1 running 5.1, the other running 5.0 Update 1. I created a new virtual machine on each server, connecting them to different iSCSI LUNs. The test was to see which server would install Windows 2008 R2 first. I began by creating the VM on the 5.1 host first, mounted the ISO, then began the installation. About 3 minutes later, I had the VM created on the 5.0 host and its installation started. About 2 minutes after that, the 5.0 host was already further along in the installation.
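A more repeatable comparison than racing OS installs is to time a fixed-size sequential write against each datastore. Here is a minimal sketch; the path is hypothetical, and inside a guest on each host you would point it at a disk backed by the LUN under test.

```python
import os
import tempfile
import time

def write_throughput_mb_s(path, total_mb=64, chunk_mb=4):
    """Write total_mb of zeroes in chunk_mb chunks, fsync, and return MB/s."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())          # force the data to actually hit disk
    elapsed = max(time.time() - start, 1e-6)
    os.remove(path)                   # clean up the test file
    return total_mb / elapsed

# Hypothetical test location; substitute a path on the datastore under test.
tmp = os.path.join(tempfile.gettempdir(), "dstest.bin")
print("%.1f MB/s" % write_throughput_mb_s(tmp))
```

Running the same write on a VM backed by the 5.0 host and one backed by the 5.1 host gives you a number to compare instead of a feeling.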
If you have a DS1512+, please comment if it works for you or if it doesn't. Thank you!
Here is the vmkernel log from the host running vSphere 5.0 Update 1
Here is the vmkernel log from the host running vSphere 5.1
Update 9/18/2012 11:30am EST
More updates and comments are rolling in. Jason Nash was able to test his DS3612XS, which is the high-end system Synology sells (close to $4k). Synology thought their higher-end models were supported, but according to Jason Nash:
In addition, thanks to kfingerlos in the comments section, his 1512+ is down for the count.
Thanks everyone for your continued testing and support. Please remember to open up a support ticket if you haven't already.
Update 9/18/2012 3:00pm EST
I got an email from Synology that asked:
"We found that 5.1 uses VAAI by default. If the target does not have VAAI enabled, then there will be performance issues. This option was introduced in DSM 4.1 and VAAI support has to be enabled when creating the LUN. Please try creating a new LUN, enable the VAAI option during creation, and please let me know if you experience the same issue."
I tried to do this and perhaps ran into some more bugs. There is a new dropdown box that enables or disables VAAI on the iSCSI LUN. What does this mean? Cool if we need to add new LUNs, but bad because we now have to migrate VMs from one LUN to another VAAI-supported LUN, since all my other LUNs that were created prior to DSM 4.1 say "VMware VAAI Support....No".
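You can also check which LUNs the host itself treats as VAAI-capable by parsing the output of `esxcli storage core device vaai status get`. The sample output below is approximate (from memory of ESXi 5.x) and the device names are hypothetical, so verify the exact field names against your own host.

```python
def parse_vaai_status(output):
    """Map device name -> dict of VAAI primitive statuses from esxcli output."""
    devices, current = {}, None
    for line in output.splitlines():
        if not line.strip():
            continue
        if not line.startswith(" "):       # unindented line = device header
            current = line.strip()
            devices[current] = {}
        elif ":" in line and current:
            key, _, value = line.strip().partition(":")
            devices[current][key.strip()] = value.strip()
    return devices

# Approximate shape of `esxcli storage core device vaai status get` output;
# device names are made up for illustration.
sample = """\
naa.6001405aabbccdd00
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: unsupported
t10.SYNOLOGYiSCSIStorage
   VAAI Plugin Name:
   ATS Status: unsupported
   Clone Status: unsupported
   Zero Status: unsupported
   Delete Status: unsupported
"""
status = parse_vaai_status(sample)
print([d for d, s in status.items()
       if s.get("ATS Status") == "unsupported"])  # ['t10.SYNOLOGYiSCSIStorage']
```

Any device where every primitive reports unsupported is a LUN the host will not accelerate, which matches the "VMware VAAI Support....No" flag in DSM.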
When I go to add a new iSCSI LUN to an existing iSCSI target, the LUN never gets created. It goes through the entire process, then I click Apply and it says "saving", but the new LUN is never added. Yet I can add a new LUN when it's not associated with an existing iSCSI target. Another problem is that once that new LUN is created, I cannot delete it.
If you see the same issues, please update your existing support ticket.
Update 9/19/2012 3:30pm EST
Synology is starting to send out patches to fix the NFS bug. Be aware, some of the patches they are sending out are MODEL SPECIFIC.
DS1512+, DS1812+ you will be getting a patch called cedarview_2636_nfsd.pat that will fix NFS
DS411+, DS1511+ you will be getting a patch called x64_2636_nfsd.pat
DS3611XS, DS3612XS you will be getting a different patch.
All other models please stay tuned or update in the comments section.
iSCSI is a different story. For iSCSI, you MUST create new LUNs, make sure the VAAI option is set to enabled (picture shown above), and present the new LUNs to your ESXi host to no longer get errors. LUNs created prior to DSM 4.1 CANNOT be migrated because of something they are doing with the VAAI flags (thanks to Mark Achtemichuk for sharing this). The workaround I'm suggesting, which should work, is that you:
1. create a new iSCSI Target
2. create new iSCSI LUNs and associate them with that target
3. present those LUNs to your vSphere 5.0 Update 1 hosts
4. start performing Storage vMotions
5. Once everything has been moved over, remove the existing LUNs and iSCSI targets from vSphere, then from DSM 4.1. Upgrade a single ESXi host to 5.1 and test.
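The Storage vMotion step itself can be scripted (e.g. with PowerCLI or pyVmomi), but even by hand it helps to plan the move order so each VM actually fits on the new VAAI LUN as it fills up. Here is only that planning logic as a sketch; all VM names and sizes are hypothetical.

```python
def plan_moves(vms_on_old_luns, new_lun_free_gb):
    """Return the ordered list of VMs (largest first) that fit on the new LUN."""
    moves, free = [], new_lun_free_gb
    # Moving the biggest VMs first avoids stranding a large VM at the end
    # when only scraps of free space remain.
    for name, size_gb in sorted(vms_on_old_luns.items(),
                                key=lambda kv: kv[1], reverse=True):
        if size_gb <= free:
            moves.append(name)
            free -= size_gb
    return moves

# Hypothetical inventory: VM name -> provisioned size in GB.
vms = {"vcenter": 80, "dc01": 40, "web01": 25}
print(plan_moves(vms, new_lun_free_gb=130))  # ['vcenter', 'dc01']
```

Anything left out of the returned list needs a second new LUN before you decommission the old targets in step 5.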
Be advised, with the current problems, it's going to be impossible to Storage vMotion a vCenter VM running on 5.1 from one datastore to another without experiencing issues. Stay tuned for more updates.
Update 9/19/2012 4:33pm EST
NFS patches are being rolled out. If you want your patch, please open a support ticket, as I am not yet able to link the patch for download. In addition, please keep in mind this doesn't change the iSCSI upgrade strategy you'll still need to follow.
Update 9/21/2012 10:04am EST **NFS CONFIRMED FIX**
As many of you are aware, if you have an open support ticket with Synology, there are patches available to fix the NFS issue. I have tested this with 5.1 and IT WORKS. I cannot distribute the NFS patch; Synology has asked that you open a support ticket to receive it because there are different patches for different models. So the great news is that NFS has been fixed. Others are reporting that iSCSI has been fixed too; in my case, it's still broken. It's been reported that to get iSCSI working with DSM 4.1 and ESXi 5.1, you have to create new VAAI LUNs. You CANNOT migrate old existing LUNs (see my advised walkthrough in the 9/19 update). I cannot mount a newly created iSCSI LUN to my ESXi 5.1 host. To reproduce my iSCSI issue, here is what is happening:
In DSM 4.1, create a new iSCSI target; this should be pretty straightforward. Then click the Disable button to take the iSCSI target offline. I have found that you cannot add a VAAI-enabled LUN to an iSCSI target while it is online. I have talked to support and this issue is being fixed for their next patch release, because at this point you CANNOT dynamically add new VAAI-enabled LUNs.
Create a new iSCSI LUN and make sure it is VAAI Enabled (note that you MUST use Thin Provisioning on VAAI-enabled LUNs; you cannot have a thick LUN with VAAI enabled).
Go back to your iSCSI target and enable it.
The problem I'm seeing is that when I mount an iSCSI target to my host, the LUNs are not showing up with the correct amount of storage; they show only 32KB. In addition, only LUN 0 shows up, even though I have 4 iSCSI LUNs attached to this iSCSI target. Therefore, I cannot mount and format my datastores. Again, this is a problem *I* have been seeing and not everyone, so YMMV. If I mount the same iSCSI target to my ESXi 5.0 Update 1 host, I still see the same error, but there are some hints. As you can see, there is a difference in the types of iSCSI LUNs, depicted in the identifier with t10 vs naa. This leads me to believe I may have an LIO vs IET iSCSI driver issue, as I had pointed out back in Fixing Synology DS411+ iSCSI Connection Drops for VMware ESX.
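The t10-vs-naa difference is easy to check in bulk: pull the device names from `esxcli storage core device list` (or `esxcfg-scsidevs -l`) and group them by identifier namespace. A quick sketch; the device names below are made up for illustration.

```python
def group_by_namespace(device_names):
    """Group SCSI device identifiers by their leading namespace (naa, t10, ...)."""
    groups = {}
    for name in device_names:
        ns = name.split(".", 1)[0]    # the prefix before the first dot
        groups.setdefault(ns, []).append(name)
    return groups

# Hypothetical device identifiers as they would appear in esxcli output:
devices = [
    "naa.6001405a0b1c2d3e4f50",
    "t10.SYNOLOGYiSCSIStorage01",
    "t10.SYNOLOGYiSCSIStorage02",
]
print(sorted(group_by_namespace(devices)))  # ['naa', 't10']
```

If LUNs from the same box land in different namespaces depending on how they were created, that's consistent with the two iSCSI target stacks reporting identifiers differently.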
Update 9/21/2012 2:08pm EST
Judging by some of the comments rolling in, not all NFS issues have been fixed. In addition, some more people are saying that for iSCSI to work, you need to completely wipe any and all traces of existing iSCSI targets and LUNs in the DSM and re-create them from scratch. This will fix some issues. In addition, here is another email I received from Synology support relating to the iSCSI issue noted this morning:
Like I said in my last email, you need to wait for the hotfix or do a fresh install of the DSM using 4.1-2636. Doing anything else is just spinning tires, you have unfortunately hit a limitation. It is not a LIO vs IET issue, please refrain from spreading this disinformation.
I will update you as soon as I know when the hotfix will be released, but it’s still tentatively scheduled for early next month.
For the time being, I'm going to refrain from upgrading to vSphere 5.1 and will wait for the hotfix patch.
Update 10/02/2012 11:50am EST
Synology has released an update, DSM 4.1-2647, that fixes many of the bugs we have been seeing. This morning I updated my Synology, let it reboot, then continued with my 5.1 tests. Mounting an NFS volume works flawlessly. Running a VM from the NFS volume proved no problem. I then created a new iSCSI target and a new VAAI-enabled iSCSI LUN. I was able to mount the iSCSI target to my 5.1 host and complete an installation of Windows 2008 R2. I did receive a few of the errors that we saw originally, but only a handful compared to the hundreds before. This log was for the entire Win2k8R2 installation and then deleting the VM. In addition, the bugs I had seen previously when trying to remove VAAI LUNs, along with the iSCSI drops, have been addressed, as Synology support told me in my last update.
Here is what you need to know: old iSCSI LUNs CANNOT be updated to the new VAAI format. This is what I suggest doing before migrating to 5.1. Create a brand new iSCSI target and associate some new VAAI iSCSI LUNs with it.
1. Present this new iSCSI Target to your vSphere 5.0 hosts.
2. Create the datastores, change multipathing to Round Robin, enable Storage I/O Control, and create a datastore cluster
3. Go to your datastore cluster view tab, select all VMs in the old LUNs
4. Right Click, select Migrate
5. Select to change the datastore
6. Make sure that the format is the same as the source, then select your newly created iSCSI datastore cluster as your target in the Incompatible section. You will see some warnings, but the SvMotions will happen without problems. Click OK
7. Storage vMotions will begin to take place.
Now you can upgrade your hosts to 5.1. I'm still going to hold off for a bit until View is supported, because that's how I access my lab externally when I'm on the road.
Last Updated on Tuesday, 02 October 2012 11:54