Over the past couple of months I've had my vSphere home lab environment up and running, and it's been pretty flawless. I've been relatively happy with my Synology DS411+ after I sorted out the IET and LIO iSCSI driver issues. This past week, though, the DS411+ earned a negative mark on the scale.
I've been testing vSphere.next (so at least you know this SAN/NAS works with it) and I've also been testing vCloud Director.next. I wanted to make my environment as clean as possible by presenting my vCD instance with a different iSCSI target and set of LUNs from my regular vSphere environment. It should be pretty simple: go to Storage Manager, create a new iSCSI target, create a new set of LUNs, and map those LUNs to the iSCSI target. Next, go to the Masking tab and... wait! Why is it grayed out?
I raised the issue with support, and this is what I was told:
Hello Kendrick,
Thank you for your message.
I’m sorry for the wrong information. According to the engineer, target masking is not available when using IET. It is available when LIO is used.
Therefore, the symptom you are seeing is normal since the iSCSI target type on your DS411+ is IET. At this time, I don’t know when (or if) it will be supported on IET. I will check with our engineer again and get back to you.
Thank you again for your message.
Best regards,
Sang
Synology America Corp
Well, dang. It's not the worst thing in the world; I would rather have my iSCSI target always available and go without LUN masking than have constant drops. So I figured I would go ahead and add the LUNs to the original target and just have everything available to both environments. I'll just need to make sure each environment only uses the LUNs it should. I went ahead and mapped the LUNs to the target and clicked OK.
What happened next was brutal. The Synology unit froze up while applying the settings. CIFS and iSCSI connections all dropped and never came back. I couldn't get the unit's management screen to refresh either. I ended up having to hold down the power button to completely shut it off, then turn it back on. Afterwards, the ESXi hosts had some trouble figuring out which host owned which VM, but for the most part, a lot of VMs never even shut down. Once the Synology booted back up, I was able to console to my AD and SQL servers, and they were sitting at the same screen as when I left them. That must be new behavior, but it's pretty cool. The ESXi hosts were still acting oddly, so I SSHed into each of them and ran /sbin/services.sh restart twice, and after that mostly everything was working. The errors you see below are HA errors telling me the VM experienced an HA event.
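The management-agent restart can be scripted across the hosts. A minimal sketch, assuming two hypothetical hosts named esxi01/esxi02 and root SSH access; the actual restart call is commented out so the script is safe to dry-run before pointing it at real hosts:

```shell
#!/bin/sh
# Restart the ESXi management agents (hostd, vpxa, etc.) on each host.
# Host names below are hypothetical placeholders for your lab hosts.
for host in esxi01.lab.local esxi02.lab.local; do
    echo "Restarting management agents on $host"
    # Uncomment to actually run it (requires SSH enabled on the host):
    # ssh root@"$host" /sbin/services.sh restart
done
```

Note that /sbin/services.sh restart bounces all host services, so expect the host to briefly drop out of vCenter while the agents come back up.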
To get everything back to its original state, I put each host into maintenance mode in turn, rebooted it, and cleared the errors on the VMs. Everything is back to normal.