
How to Change Ownership (chown) of an OpenFiler NFS Share between Cells in vCloud Director

I'm building out a new vCloud environment, and I'm building it on a block-only Vblock. That means the array doesn't have NFS capabilities to present out. I had to be creative, and I figured creating an OpenFiler VM would be easy. You need an NFS share created that all cells can access because that's where transfers happen. Read more about that at Chris Colotti's blog: Gotcha: vCloud Director Clone Wars Part 1 (Overview)
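
For context, every cell mounts that same share at the vCloud Director transfer directory. A minimal sketch of the mount on each cell might look like the following (the OpenFiler hostname and export path are hypothetical examples):

    # run on each vCloud Director cell; hostname and export path are examples
    mount -t nfs openfiler.lab.local:/mnt/vg0/share/transfer \
      /opt/vmware/vcloud-director/data/transfer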

 

I'm not going to teach you how to create an NFS share on an OpenFiler VM because that's been done before. Read here: Configure NFS shares in Openfiler for your vSphere homelab

 

I eat my own dog food and follow my own how-tos, and when I got to Step 9 in How To Install VMware vCloud Director 1.5 From Beginning to End about setting permissions on the NFS share, I ran into an issue. When trying to change the ownership with my usual chown -R "vcloud:vcloud" transfer/ , it said the operation was not permitted, even though I was root.
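
The usual culprit for an "Operation not permitted" as root on an NFS client is root squashing on the export, which maps the client's root user to an unprivileged account. A hedged sketch of what the fix might look like on the OpenFiler side (the export path and client subnet are hypothetical; in the OpenFiler web UI this corresponds to giving the cells RW access with root squashing disabled):

    # /etc/exports on the OpenFiler VM -- path and client subnet are examples
    /mnt/vg0/share/transfer 192.168.1.0/24(rw,sync,no_root_squash)

    # re-read the export table on the OpenFiler VM
    exportfs -ra

    # then, back on the cell, the recursive chown should succeed
    chown -R vcloud:vcloud transfer/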

Read more: How to Change Ownership (chown) of an OpenFiler NFS Share between Cells in vCloud Director

VMware vSphere 5 Host NIC Network Design Layout and vSwitch Configuration [Major Update]

This is an update to an older post and I wanted to overhaul it for the Indy VMUG... This was also another VMworld submission that didn't get the votes. See what you guys are missing out on? :)

 

As vSphere has progressed, my older 6, 10, and 12 NIC designs have slowly become outdated. In an effort to update them for VMware vSphere 5, I took the two most popular configurations, 6 NICs and 10 NICs, and updated the Visios to make them a bit prettier. I don't know how much longer these will be necessary as the industry moves toward 10GbE as a standard. I also added a few more designs, since the inclusion of Fibre Channel has been requested.

 

These physical NIC designs assume the hosts will be configured with Enterprise Plus licensing so that all of the vSphere features can be used. I didn't create a bunch of different designs as before because, ideally, you want to stick with a simple design that meets the criteria most people have. In addition, I have updated these configs to use multi-pathing for iSCSI and to remove EtherChannel configurations, since those were mostly needed on standard vSwitch configurations. I would also recommend starting to move everything over to a vNetwork Distributed Switch configuration because it is the easiest way to standardize across all of your hosts. vSphere 5 also implemented a better HA and failed-host policy, so the use of a hybrid solution is fading as well. If you are using a standard vSwitch, please make adjustments appropriately.
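
For reference, iSCSI multi-pathing on vSphere 5 is handled with software iSCSI port binding rather than EtherChannel. A minimal sketch from the ESXi shell, assuming a hypothetical software iSCSI adapter vmhba33 and two VMkernel ports vmk1 and vmk2 (each backed by a single active uplink):

    # bind each iSCSI VMkernel port to the software iSCSI adapter (names are examples)
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

    # verify the bindings
    esxcli iscsi networkportal list --adapter=vmhba33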

 

The key to any design is discovering requirements. I shouldn't have to say it because that's really the first rule of design, so I don't think I need to go into it any deeper. Once you have your design requirements, you need to start thinking about the rest of these components. The goal of any good design is finding the right balance of redundancy and performance while keeping budgetary constraints in mind.

 

Read more: VMware vSphere 5 Host NIC Network Design Layout and vSwitch Configuration [Major Update]

List of VMware Default Usernames and Passwords

Here is a comprehensive list of default usernames and passwords for most of the VMware products. If you're like me, you tend to get a lot of these confused. If I left any off, please let me know in the comments.

 

Horizon Application Manager

http://IPorDNS/SAAS/login/0

http://IPorDNS

 

Horizon Connector

https://IPorDNS:8443/

 

vCenter Appliance Configuration

https://IPorDNS_of_Server:5480

username: root

password: vmware

Read more: List of VMware Default Usernames and Passwords

Top 9 Critical Design Concepts of vCloud Director

Here was a VMworld session that didn't get picked, so here we go...

 

The biggest difference between vSphere and vCloud is that whatever you are trying to accomplish in vCloud Director depends entirely on the design. With vSphere, there are pretty standard practices for designing a typical layout. There are options in vSphere design to meet certain criteria, but you ultimately can't design for vCloud Director until you have a strong understanding of the effects of vSphere design.

 

After designing a few vCloud Director environments, I wanted to create a list that anyone can reference to nail down the top design criteria. Without further ado, in no particular order...

 

1. Is vCloud Director actually necessary?

I'm not going to lie, there is a bunch of hype out there about vCloud Director. Since the inception of Project Redwood, it has been touted as the next generation of VMware's cloud offering. Every vendor is working on vCloud integration for its products; partners, contractors, and vendors are pushing for its adoption; and VMware itself sees vCloud as the next vSphere. But what does that mean for you? I would imagine at least 90% of IT shops today have some sort of virtualized environment, and that's the stepping stone. If you are thinking of adopting vCloud, you have to ask yourself, "what am I really trying to accomplish?"

 

The answer to this question is going to be unique for everyone. Are you a service provider, an enterprise customer, or an SMB user? Are you just looking for a portal with a self-service catalog? Are you trying to create multi-tenant networks?

 

Read more: Top 9 Critical Design Concepts of vCloud Director

vSphere and vCloud Host 10Gb NIC Design with UCS & More

I've done vSphere 5 NIC designs using 6 NICs and 10 NICs, but this one is going to be a bit different. I'm only going to focus on 10GbE NIC designs as well as Cisco UCS. Let's be honest with ourselves: 10GbE is what everyone is moving to, and if you are implementing vCloud Director, it's probably going to be in a 10GbE environment anyway.

 

I've always considered that a good vCloud design is based on a good vSphere design, and that still holds true for the most part. In a few recent engagements, though, I've seen architects want to use 4 NICs for their vCloud hosts… and here's why

 

When you are designing a vCloud environment, most people tend to use VCDNI (vCloud Director Network Isolation… soon to be VXLAN), which adds a little more complexity to the design. During the deployment of a VCDNI network, a new port group is created on the vNetwork Distributed Switch (vDS). This port group is automatically created by vCloud Director and therefore inherits the following settings (which are strongly recommended to NEVER change):

  • The default NIC behavior is to always choose dvUplink1 on the vSphere Distributed Switch
  • dvUplink1 is set as the Active NIC while all other NICs attached to the vDS are set to Standby
  • The port group is set to "route based on originating port ID"
  • The security settings are the original defaults:
    • Promiscuous Mode: Reject
    • MAC Address Changes: Accept
    • Forged Transmits: Accept

 

So what does this potentially mean for your design and NIC considerations? It usually means that the NIC assigned to dvUplink1 will be constantly utilized. I honestly don't know whether the VCDNI/VXLAN port groups will ever choose a dvUplink other than dvUplink1, but in all of my testing only dvUplink1 has been chosen. You need to take this into account, so here are a few designs I have created using 4x10GbE, 2x10GbE and 2x1GbE, and 2x10GbE with UCS.

 

Read more: vSphere and vCloud Host 10Gb NIC Design with UCS & More

vCenter and vCloud Management Design - Management Separation

While at a customer site this past week, I was confronted with a situation. But before I get to that, let's talk about vCenter and vCloud design.

 

First things first, you should be at least vaguely familiar with the vCloud Architecture Toolkit (vCAT). One important topic it discusses is the placement and use of vCenter when it comes to vCloud Director. It's a recommended practice to have 2 vCenter servers in a vCloud environment: one vCenter server for hosting the Datacenters/Clusters/VMs that are relevant to vSphere and the vCloud infrastructure components, and another vCenter server for hosting the vCloud resources. Why is this?

  1. Separation of management domains. It's important to know that vSphere and vCloud are different animals. Just because you are a vSphere admin doesn't make you a vCloud admin. By separating the two environments, you are letting vSphere admins access VMs that are outside the cloud and manage VMs that are considered vCloud infrastructure.
  2. vCenter becomes abstracted. ESXi abstracts the hardware layer, and vCenter is the central management point. vCloud Director abstracts the resources that belong to vCenter and presents them to vCloud as Provider Virtual Datacenters.
  3. Saves vSphere admins from themselves. Have you ever watched what happens when you add a vCenter server to vCloud Director? vCloud Director takes charge. It does its own thing by creating folders, resource pools, port groups, appliances, etc. Everything created by vCloud gets a string of characters appended to it as a unique identifier. If a vSphere admin has access to a Distributed Virtual Switch and notices some random port group ending with HFE2342-FEF2123NJE-234, he is probably tempted to delete it. If a user goes crazy and starts deleting objects directly from vCenter without vCloud's knowledge, it's havoc.
  4. Relieves stress on vCenter. As Duncan pointed out below in the comments, if a tenant of the cloud is issuing a bunch of requests, it could possibly render the vCenter server unusable. By separating the workload between the 2 vCenter functions, you will not impact the vCenter server responsible for management functions.

 

Read more: vCenter and vCloud Management Design - Management Separation

The VCE Certification Matrix - Ensuring Integration

Other than some cool technology, VCE brings many more benefits to the table. Today, I'm going to dive into the certification matrix. Before we actually talk about the matrix, let's look at what comes before it. Since VCE is built from its parent companies' products (VMware, Cisco, EMC, and Intel), we have alpha access not only to new software releases but also to new hardware. VCE's Platform Engineering team is tasked with taking existing hardware as well as new hardware and coming up with a physical and logical architecture design that meets requirements for performance, environmentals, and ease of repeatability. After Platform Engineering has created a validated design, they need to keep the architecture up to date with new hardware released from the parent companies, which may or may not require swapping components in new Vblocks. Have you ever calculated how much time it takes someone like yourself to completely design a virtualized infrastructure such as this? There are literally thousands of options across the main hardware components, types of cables and interconnects, power options, and placement of equipment into racks, which dictates power draw, BTUs, and cable lengths. On average, it's close to 300 pieces that are aggregated together.


Now Quality Assurance steps in. QA is responsible for the integration and testing of the design. This integration testing is performed so we know that all components are able to communicate with one another and the Vblock works as a complete system. This is the first part of the validation process. After all components are validated, their firmware and software versions are noted and then the regression testing begins. This base regression test verifies the existing components. Afterwards, new functionality tests are performed so new features are fully tested and considered working throughout the stack. Lastly, interoperability tests are run by additional E-Lab teams. Examples of these tests include installation and configuration, recovery, stability, load/performance, patch and maintenance testing, inbound and outbound interface testing, negative test strategies, provisioning with and without cloud products like UIM, usability, and physical testing of fault-tolerant systems, visual indicators, and thermals. In fact, check out these numbers from January 2012: over 2000 hours of testing are put into Vblock Platforms during every major release cycle.

Read more: The VCE Certification Matrix - Ensuring Integration

Synology DSM 4.0 Supports VAAI in vSphere 5 For Home Labs

Home Labbers can rejoice! Have you ever wanted a home lab where you had VAAI capabilities with iSCSI on vSphere? Synology has released a new software version for their NAS boxes, DSM 4.0. They don't really point it out anywhere on their main site, but if you read the release notes, it's there. Here are the release notes for my current home NAS from my Green Machine Home Lab build - Synology DiskStation DS411+ Release Notes.

 

There are all kinds of new features in DSM 4.0, but when I read the following I had to do a double take:

20. VMware vSphere 5.0 VAAI Support:
Atomic Test & Set (ATS) supported to eliminate LUN-level locking based on SCSI reservations and improve data access performance.

 

Even though the VAAI support is only for ATS on block-level iSCSI datastores, it's a start. So what's ATS? According to the VMware Storage Blog:

"ATS is used for locking on VMFS datastores, and is far superior to the SCSI Reservation locking technique. There is now more use made of ATS on block storage. One example is that in previous versions of VAAI ATS, when there was a mid-air collision of ATS (i.e. two ESXi trying to lock the same file), the VMkernel would revert to using a SCSI reservation. We no longer do this, but use an ATS retry mechanism."

Read more: Synology DSM 4.0 Supports VAAI in vSphere 5 For Home Labs

kendrickcoleman.com Found A New Home

After 96 hours of downtime, kendrickcoleman.com is alive once again. This is my post-mortem of the experience and a little piece of my mind.

 

I woke up around 8am EST Saturday May 5th to see a tweet from @BasRaayman that said "Kenny, you might want to check with your hoster, your blog is stating "This Account Has Been Suspended"?". Great, what a fantastic way to start my Derby day. I look at my email and I see "WEB HOSTING ACCOUNT DEACTIVATED" in all caps.

 

Here are some snippets from the email:


Dear Kendrick:

It has come to our attention that your site is using an extreme amount of resources on our servers and network; consequently, your site has outgrown the Just Host shared hosting package.  It has become increasingly more difficult to keep your site online, and doing so now degrades the experience for the other users on our servers. Others who have experienced such growth with their site(s) and hosting needs have chosen to move their site(s) to a Dedicated Server or Virtual Private Server (VPS). Just Host does not currently offer Dedicated or VPS hosting. These alternative hosting services will include more resources and site stability, as well as better backup and server redundancy.

Read more: kendrickcoleman.com Found A New Home

Free VCE Training for all VCE Certified Partners

I got an email today that talks about some of the Partner News going on in the VCE Community. The big news was a new series on the Vblock 700LX, which uses the VMAXe as part of the platform. VCE is continuing to expand its options for customers, using everything we can out of VMware, Cisco, and EMC to build the best solutions possible. I navigated to the homepage where the courses are listed and saw some really cool stuff.

 

So if you are a VCE Certified Partner, EMC Employee, or VCE Employee, go ahead and login to EMC's Powerlink site and go to https://education.emc.com/part/campaign/vce.aspx.

 

Read more: Free VCE Training for all VCE Certified Partners
