Citrix announced the release of Citrix Essentials for XenServer and Hyper-V. Yes – Essentials supports both XenServer and Hyper-V. Prior to the announcement, Citrix gave me the opportunity to evaluate Citrix Essentials in my lab, which I naturally accepted. For background, Citrix Essentials is offered as Enterprise or Platinum editions for Hyper-V and XenServer. Table 1 lists each edition’s features.

Hyper-V Enterprise
- Citrix StorageLink technology
- Dynamic provisioning for virtual machines

Hyper-V Platinum
- Lab automation and VM lifecycle management
- Citrix StorageLink technology
- Dynamic provisioning for physical and virtual machines

XenServer Enterprise
- Citrix StorageLink technology
- High availability
- Dynamic workload balancing (Q2'09)

XenServer Platinum
- Lab automation and VM lifecycle management
- Citrix StorageLink technology
- Dynamic provisioning for physical and virtual machines
- High availability
- Dynamic workload balancing (Q2'09)

Table 1: Citrix Essentials editions and related features

Storage management is provided by a new product – Citrix Virtual Storage Manager (CVSM) and its underlying StorageLink technology (more on that in a minute). Dynamic VM and physical system provisioning is provided by Citrix Provisioning Server. Lab automation and VM lifecycle management are provided by OEM versions of VMLogix Lab Manager and Lifecycle Manager.

Citrix has the lead today with dynamic provisioning since VMware has no tool for automatically provisioning bare metal servers. However, that will be short-lived in my opinion; I think a stateless ESX server that loads on the bare metal (in physical RAM) is an inevitable part of VMware’s future. Still, our world is not yet entirely virtual, and Citrix Provisioning Server can provision any server role (not just hypervisors).

Now I would like to spend a few moments on my initial impressions of CVSM and StorageLink, which I had the opportunity to evaluate for a little more than a week in my lab. CVSM (shown in Figure 1) solves the major Hyper-V storage management hurdles by streamlining LUN provisioning and VM deployment.


Figure 1: CVSM user interface

Consider StorageLink to be the glue that connects physical storage assets and virtual infrastructure management. StorageLink consists of the following components:

  • StorageLink Gateway - enables automated discovery and one-click access to native storage services (e.g., DAS, NAS, and FC or iSCSI SAN).
  • StorageLink Resource Manager - makes common storage array administrative actions (e.g., partitioning, snapshots, replication, and backup) visible from within the virtualization management environment.
  • StorageLink Image Manager - manages a centralized library of VM images that can be instantly converted between XenServer and Hyper-V, and rapidly cloned and delivered to any number of target hosts using array-level features.
  • StorageLink Connect - provides a set of open APIs that integrate XenServer and Hyper-V environments with third party backup solutions and enterprise management frameworks.

Tighter storage integration is a big deal, as LUN provisioning can easily drive up virtualization TCO, and avoiding it is often cited by VMware as one of the benefits of VMFS. In fact, about a month ago, one of my friends who manages a virtual infrastructure of nearly 1,000 VMs scoffed at the idea of provisioning a LUN per VM with the thought “Who on earth would want that headache?” Still, the LUN-per-VM model does have its merits, including the performance gained through I/O caching at the array (when dozens of VMs share a LUN, it’s hard for the array to make sense of the I/O traffic and as a result offer any caching benefit). Note that LUN-per-VM (either via a dedicated VMFS volume or as a mapped raw LUN) is also available with VMware ESX infrastructures.

StorageLink currently integrates with arrays via SMI-S and offers custom integration for NetApp, Dell EqualLogic, and HP storage. Note that at this point, SMI-S array-level integration is more of a checkbox than a working feature; Citrix could not offer any examples of an array that can be fully managed via SMI-S. But really… who can?
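
For those unfamiliar with SMI-S, it’s a CIM/WBEM-based management standard, and talking to an array’s SMI-S provider looks roughly like the sketch below. This is purely illustrative and is not CVSM or StorageLink code; the provider URL, credentials, and namespace are hypothetical placeholders (I’m using the open-source pywbem client), and the namespace and class coverage vary by vendor.

# Illustrative only: querying a hypothetical array's SMI-S (CIM/WBEM) provider
# with the open-source pywbem client. This is not CVSM or StorageLink code.
import pywbem

# Connect to the provider; the URL, credentials, and namespace are placeholders.
conn = pywbem.WBEMConnection(
    "https://array-smis.example.com:5989",
    ("admin", "password"),
    default_namespace="root/cimv2",   # vendor providers often use their own namespace
    no_verification=True)             # lab convenience; don't skip TLS checks in production

# Enumerate the storage volumes (LUNs) the provider exposes.
for vol in conn.EnumerateInstances("CIM_StorageVolume"):
    print(vol["ElementName"], vol["BlockSize"] * vol["NumberOfBlocks"], "bytes")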

Array-level integration is significant. In my evaluation, I was able to leverage the NetApp thin provisioning and snapshot features, resulting in considerable storage savings for my Hyper-V VMs. Snapshot integration allows you to quickly provision VMs by taking a snapshot of a source LUN. The result is that newly deployed VMs can leverage the source snapshot image; this means that storage is only required for new writes to the newly provisioned VM. Here are the major steps for configuring CVSM to rapidly provision new VMs:

  1. Using the Hyper-V management tool or System Center Virtual Machine Manager (SCVMM), create a new VM without any hard disks. In my case, I created a new VM named “XP-tmplt.”
  2. Install the CVSM service and CVSM management tool.
  3. Open the CVSM management tool, connect to the CVSM service, and then add connections to the Hyper-V physical hosts you would like to manage.
  4. Create a connection to the storage array you would like to manage using CVSM.
  5. Create a CVSM storage repository (see Figure 2). The storage repository includes a defined amount of capacity, capabilities, and LUNs for use by CVSM. In my test, I created a RAID 4 storage repository that utilized the array’s thin provisioning and deduplication features.
  6. Use CVSM to allocate a storage volume from the newly created storage repository. The storage volume will act as a passthrough disk (i.e., a raw LUN) for the baseline template VM.
  7. Use CVSM to associate the storage volume created in step 6 with the VM created in step 1.
  8. Use the Hyper-V or SCVMM console to boot the VM (e.g. XP-tmplt) and install the base OS image, along with any common applications. In my case, I installed Windows XP SP3, created a sysprep answer file (sysprep.inf), and then sysprepped the VM (which automatically shut down the VM).
  9. Use CVSM to create a new storage profile that references the volume created in step 6 (see Figure 3). Note that the volume now contains the sysprepped Windows XP image.
  10. Use CVSM to create a new VM template (Figure 4). The VM template sets the baseline configuration (e.g. memory, CPU, guest OS) for a VM and associates the VM with a storage profile. The VM template is what is used to quickly provision new VMs.
  11. Use CVSM to deploy new VMs from the VM template created in step 10 (see Figure 5) using the array’s snapshot features. You can use CVSM to deploy as many VMs as you like. Each deployed VM receives a numerical suffix, starting with 00. In my test, I deployed five VMs, and CVSM named them BG-RIC.00 to BG-RIC.04. The five VMs were created and ready to power on in SCVMM in less than 30 seconds! Figure 6 shows the configured VMs in the SCVMM UI.


Figure 2: Creating a new storage repository


Figure 3: Creating a new storage profile


Figure 4: Creating a new VM template


Figure 5: Deploying new VMs from a VM template


Figure 6: The deployed VMs will automatically appear in the SCVMM UI
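
The economics of this snapshot-based approach are easy to see with some rough arithmetic. The sketch below is just back-of-the-envelope math with made-up example numbers, not anything CVSM or the array actually reports:

# Back-of-the-envelope math (made-up numbers): why snapshot-based clones are cheap.
# Clones share the golden image's blocks, so only post-clone writes consume space.

def capacity_gb(base_gb, clones, changed_fraction):
    full_copies = base_gb * clones                                   # traditional full clones
    snapshot_clones = base_gb + base_gb * changed_fraction * clones  # base image + per-clone deltas
    return full_copies, snapshot_clones

full, thin = capacity_gb(base_gb=10, clones=5, changed_fraction=0.10)
print(f"full copies: {full} GB vs. snapshot clones: ~{thin} GB")
# full copies: 50 GB vs. snapshot clones: ~15.0 GB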

As you can see, this is a massive improvement for Hyper-V VM provisioning on networked storage. My experience with the software was basically what I’ve come to expect from beta software and included one hurdle to overcome – CVSM provisioned storage and created VMs without a problem, but it did not create the VM passthrough disks or associate the correct LUNs with each VM. I had to do that step manually using the Hyper-V manager tool. Other beta testers identified the same problem, and it has since been fixed. Also, only passthrough (raw) disks are supported today; virtual hard disk files are not. I’m hoping that virtual disks will be supported for Windows Server 2008 R2 cluster shared volumes once they’re available.

Still, after doing the initial work to create the storage repository and VM template, VM deployment was a piece of cake. And what’s not to like about spinning up a bunch of new VMs in seconds? Sure, VMware ESX has had this kind of integration with NetApp for a while now, but that’s not the point. Even if CVSM is a little rough around the edges, it’s a product that is on the right track. We expect serious Hyper-V adoption to start after the Windows Server 2008 R2 release, so Citrix has some time to smooth out CVSM and add more supported arrays to the mix. This type of technology will be welcomed by our enterprise clients, who have previously resorted to internal scripting in their efforts to automate LUN provisioning. Naturally, VMware could add a similar feature to vCenter.

Of course, there’s more to provisioning storage than simply allocating a LUN on an array. In Fibre Channel SANs, you need to worry about zoning and LUN masking too. I asked Citrix’s Pete Benoit about switch- and array-level integration with regard to LUN provisioning, and this was his reply:

FC zoning is in the V1.0 product.  It requires “credentials” being added for the fabric SMI-S component. We use Brocade’s SMI-S provider (Add those in the GUI and it will discover the fabrics).  For V1.0 if the fabric credentials are not set then the user is expected to have done zoning. As you are aware many enterprises only do FC zoning at well defined change management points. We support both auto-zoning and manual zoning.

The product does the LUN masking in that context. There are also CLI tools to map storage to an HBA/Initiator, if the HBA has not been zoned the action will be performed at that time. If the fabric credentials exist (set by the user) then zoning is done automatically (including multi-pathing) by detecting multiple fabrics that the HBA & Target are visible.
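
To make the logic in Pete’s description concrete, here is how I read it, reduced to a plain Python sketch. The Fabric class, WWPN strings, and print statements are all hypothetical illustration; the actual zoning happens through Brocade’s SMI-S provider and CVSM’s own tooling, not code like this.

# Hypothetical illustration of the zoning/masking behavior described above;
# none of this is the product's actual API or CLI.
from dataclasses import dataclass, field

@dataclass
class Fabric:
    name: str
    members: set = field(default_factory=set)   # WWPNs visible on this fabric

    def create_zone(self, initiator, target):
        print(f"[{self.name}] zoning {initiator} <-> {target}")

def provision(lun, hba_wwpn, target_wwpn, fabrics, fabric_credentials=None):
    if fabric_credentials:
        # Credentials present: auto-zone every fabric where both the HBA and the
        # target are visible (multipathing falls out of zoning multiple fabrics).
        for fabric in fabrics:
            if hba_wwpn in fabric.members and target_wwpn in fabric.members:
                fabric.create_zone(hba_wwpn, target_wwpn)
    else:
        # No credentials: the administrator is expected to have zoned already,
        # typically at a defined change-management point.
        print("skipping zoning (manual zoning assumed)")
    print(f"masking {lun} to initiator {hba_wwpn}")

fabrics = [Fabric("fab-A", {"hba-1", "tgt-1"}), Fabric("fab-B", {"hba-1", "tgt-1"})]
provision("lun-7", "hba-1", "tgt-1", fabrics, fabric_credentials=("user", "secret"))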

While CVSM features will improve, what I would really like to see is better integration with SCVMM. Ideally, CVSM could be added to SCVMM as a plug-in, but the SCVMM management interface is not extensible - something I’m hoping Microsoft will change by the next SCVMM release.

VMworld is off to a good start. The Citrix Essentials announcement is the first of what is shaping up to be many exciting announcements this week.

http://www.chriswolf.com/?p=268