Machine Creation Services IO Acceleration

06 July 2016 | By Tye Barker

With the release of XenDesktop 7.9, Citrix has released a new optimised Machine Creation Services (MCS) cache model to rival Citrix Provisioning Services (PVS) ‘Write Cache in RAM, Overflow to Disk’. In fact, the way this cache operates is very similar to the way PVS achieves its great IOPS performance benefits, and initial testing puts it only slightly behind in terms of IOPS (specifically for read IO). Today’s post takes a look at this new feature and the performance benefits it introduces to MCS.

Prior to the creation of the Machine Catalogs, we need to configure the storage types connected to the hypervisor via the Citrix Studio console. The configuration options include:

  • OS Storage. This is where all the read IO will originate from, typically the Gold image storage or snapshots. If we are utilising the new cache features for MCS, the write IO to this storage volume should be minimal (except when performing image updates). Knowing the limited write IO profile, a storage volume optimised for read IO may be considered, offering fast read transactions and slower write transactions.
  • Temporary Storage. The storage defined in this section holds the cache disks attached to each client VM. If the cache is configured to overflow from RAM to disk (or to write directly to disk), this is the storage repository that will be written to. Knowing that this storage is potentially write intensive (depending on the RAM cache configuration), we should look at optimising this volume for random write IO; a rough sizing sketch follows this list.
  • PvD Storage. Utilised to store user-specific applications and data. Not in scope for this post.
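
As a rough planning aid for the Temporary Storage volume, the capacity required scales with the number of VMs placed on it and the per-VM disk cache allotment. The sketch below is a minimal Python illustration; the VM count, cache size, and headroom figure are assumptions for the example, not Citrix sizing guidance.

```python
def temp_storage_gb(vm_count, disk_cache_gb, headroom=0.2):
    """Rough estimate of the temporary (write cache) volume size.

    vm_count      -- number of VMs provisioned against this storage
    disk_cache_gb -- per-VM disk cache allotment from the Machine Catalog
    headroom      -- extra capacity for growth/spill (20% is an assumption)
    """
    return vm_count * disk_cache_gb * (1 + headroom)

# Example: 100 pooled desktops, each with a 10 GB disk cache allotment
print(f"Temporary Storage required: ~{temp_storage_gb(100, 10):.0f} GB")  # ~1200 GB
```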

At initial release, both desktop and server operating systems are supported. For desktop Machine Catalogs, however, only Pooled-Random or Pooled-Static VMs can take advantage of MCS IO Acceleration; catalogs utilising Pooled with Personal vDisk or Dedicated machines are presently not supported.

When creating a new MCS Machine Catalog, similar to PVS, we are presented with the client-side caching options: we can elect a size for both memory and disk, or choose not to specify any values, in which case MCS will continue to provision machines traditionally, without caching data. When configuring the cache values, keep in mind that caching to RAM only (disk cache deselected) is effectively the equivalent of the PVS ‘Cache in RAM’ option, and it introduces a potential risk of the VM blue-screening when the cache in RAM fills up and there is no further space to write to.
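
To make the behaviour of those choices concrete, here is a small toy model in Python of the two cache tiers. It is purely illustrative: the class, the sizes, and the MemoryError standing in for a blue screen are all assumptions for the sketch, not Citrix's implementation.

```python
class WriteCache:
    """Toy model of the MCS client-side write cache tiers (illustrative only)."""

    def __init__(self, ram_mb, disk_mb=None):
        self.ram_mb, self.disk_mb = ram_mb, disk_mb
        self.ram_used = self.disk_used = 0

    def write(self, mb):
        fits_in_ram = min(mb, self.ram_mb - self.ram_used)
        self.ram_used += fits_in_ram
        overflow = mb - fits_in_ram
        if overflow and self.disk_mb is None:
            # RAM-only configuration: nowhere left to write (the blue-screen case)
            raise MemoryError("RAM cache full and no disk cache configured")
        self.disk_used += overflow

cache = WriteCache(ram_mb=1024, disk_mb=10240)   # 1 GB RAM cache, 10 GB disk cache
cache.write(2662)                                # ~2.6 GB of writes
print(cache.ram_used, "MB held in RAM,", cache.disk_used, "MB overflowed to disk")
# A RAM-only cache, WriteCache(ram_mb=1024).write(2662), would instead raise MemoryError
```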

[Image: cache configuration options in the Machine Catalog setup wizard]

Ideally, selecting both check boxes allows a fallback: in the event the cache in RAM is full, writes overflow to the local VM disk, the same behaviour as PVS’s ‘Write Cache in RAM, Overflow to Disk’. In fact, if we watch the Perfmon counter ‘Pool Nonpaged Bytes’ (under the Memory object), we can observe that as the cache in RAM fills and reaches its hard limit (as specified in the Machine Catalog), writes then proceed to the local client disk (the Temporary Storage configured on the XenDesktop host connection).
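
A quick way to watch this on a provisioned Windows VM is to sample that counter while a large file copy is running. Below is a minimal Python sketch using the built-in typeperf tool; the sample interval and count are arbitrary choices for the example.

```python
import subprocess

# Standard Windows Memory-object counter; run this on the provisioned VM.
COUNTER = r"\Memory\Pool Nonpaged Bytes"

def watch_nonpaged_pool(samples=30, interval=2):
    """Print non-paged pool usage every few seconds while the cache in RAM fills."""
    cmd = ["typeperf", COUNTER, "-si", str(interval), "-sc", str(samples)]
    result = subprocess.run(cmd, capture_output=True, text=True)
    for line in result.stdout.splitlines():
        if line.startswith('"'):   # typeperf emits quoted CSV header and data rows
            print(line)

if __name__ == "__main__":
    watch_nonpaged_pool()
```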

New to XenDesktop 7.9, Citrix has introduced a raft of new Perfmon counters specifically for MCS that give us granular detail on how much data is cached and the transfer speeds (read and write) for memory, disk, and system.

We can now observe write cache utilisation statistics through these Citrix Perfmon counters, allowing us to adequately adjust and validate cache sizes without the need for digging into WMI calls.
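
To validate a cache size over a full test run, those counters can be logged to CSV with typeperf and reviewed afterwards. The sketch below uses hypothetical counter paths for the Citrix MCS counter set; check Performance Monitor on a provisioned VM for the exact names exposed in your deployment.

```python
import subprocess

COUNTERS = [
    r"\Citrix MCS Storage Driver\Cache memory used",   # placeholder name, verify in Perfmon
    r"\Citrix MCS Storage Driver\Cache disk used",     # placeholder name, verify in Perfmon
    r"\Memory\Pool Nonpaged Bytes",                    # standard Windows counter
]

def log_cache_counters(outfile="mcs_cache.csv", interval=5):
    """Log cache utilisation counters to a CSV file until interrupted (Ctrl+C)."""
    cmd = ["typeperf", *COUNTERS, "-si", str(interval), "-f", "CSV", "-o", outfile, "-y"]
    subprocess.run(cmd)

if __name__ == "__main__":
    log_cache_counters()
```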

In our test, the Machine Catalog was configured with a 1GB cache in RAM allotment, and we copied a 2.6GB ISO file from a network CIFS share locally to the VM. Once the cache in RAM allotment had been reached (the red line in the Perfmon graph), we could see growth in the disk cache size (the blue line). We can also see the significant increase in read and write performance when caching to memory, and the substantial drop as it switches over to the disk cache. Again, these metrics come from the new Perfmon counters Citrix provides, as illustrated below.
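
As a quick sanity check on those figures (assuming the cache in RAM was essentially empty before the copy, and ignoring other concurrent writes that would also consume part of the allotment):

```python
iso_gb = 2.6          # size of the ISO copied to the VM
ram_cache_gb = 1.0    # cache in RAM allotment from the Machine Catalog
overflow_gb = iso_gb - ram_cache_gb
print(f"~{overflow_gb:.1f} GB expected to overflow to the disk cache")   # ~1.6 GB
```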

When the ISO file is deleted from the VM, we can then see the cache in RAM drop, just as the non-paged pool memory would with PVS.

Things start to get interesting with the disk cache at this stage, as we can also observe that the disk cache is cleared. This is a major difference from how the PVS write cache behaves, as PVS does not reclaim this space: with PVS, the write cache file (.vhdx) that resides on the file system will continue to grow, irrespective of whether a file is deleted, a user profile is removed at logoff, and so on. This essentially means that with MCS you have more space available on your cache drive over time, and you may be able to extend your machine reboot cycles if required.

The new cache options in MCS certainly go a long way towards levelling the playing field when compared to PVS. PVS, however, still maintains a major advantage when it comes specifically to Gold image read IO, as the PVS servers’ Windows System Cache may be leveraged to cache vDisk contents in RAM. With MCS, we now have the ability to place the Gold image on storage optimised for read IO, but this may still be a potential bottleneck during boot/login storms. While this may be mitigated by using Hyper-V Cluster Shared Volume (CSV) Cache or VMware Virtual Flash Read Cache to cache read IO at the hypervisor host level (in RAM or on SSD), doing so may increase complexity or reduce the overall scalability of the solution.

Get in touch with us today to learn how we can help design, deploy, and support your Citrix environment.
