Why Have VVOLs Taken So Long to Deliver?


Another VMworld has come and gone and still we haven’t seen the production deployment of VVOLs.  Just to recap, VVOLs are an evolution in the way virtual machines are packaged on storage, replacing the current model in which VMs reside in shared datastores.  The main benefit is the ability to apply storage performance and availability policies to an individual VM object rather than to an entire datastore, as we do today.  Although we’ve seen demonstrations of VVOLs for some time, VVOL code has not yet made it into a GA vSphere release.  Presumably this will change with the release of vSphere 6.0 sometime next year.  The challenges around what seems like a trivial change in storage deployment are pretty immense and require work on both the hypervisor and storage sides.
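
To make that distinction concrete, here is a minimal Python sketch (purely illustrative; these are not vSphere or SPBM APIs, and every name and number below is invented) contrasting a policy inherited from a datastore with a policy attached directly to a VM object.

```python
# Illustrative sketch only; these are not vSphere/SPBM APIs. It contrasts
# today's model (one policy per datastore, inherited by every VM placed in it)
# with the VVOL model (a policy attached to each VM object individually).
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    name: str
    iops_limit: int        # hypothetical performance setting
    replicated: bool       # hypothetical availability setting

# Today: every VM in a datastore inherits that datastore's single policy.
datastore_policies = {"datastore01": StoragePolicy("gold", 10_000, True)}

# With VVOLs: each VM object can carry its own policy.
vm_policies = {
    "web-vm01": StoragePolicy("silver", 2_000, False),
    "sql-vm01": StoragePolicy("gold", 10_000, True),
}

def effective_policy(vm: str, datastore: str) -> StoragePolicy:
    """Per-VM policy wins if present; otherwise fall back to the datastore."""
    return vm_policies.get(vm, datastore_policies[datastore])

print(effective_policy("web-vm01", "datastore01"))   # per-VM 'silver' policy
print(effective_policy("file-vm02", "datastore01"))  # falls back to 'gold'
```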

Storage Changes

From the perspective of the storage array, a VVOL looks like little more than a small LUN (at least on block-based storage).  Many of the references to VVOLs talk about a protocol endpoint (which is still presumably presented as a LUN) and a number of objects behind it that represent the VVOLs themselves.  For the storage vendor this means amending their array design and potentially supporting significantly more storage objects than before, where each of these objects could also be a snapshot or replica.  Rather than thinking in thousands of LUNs, systems could have to support hundreds of thousands of VVOL objects, which represents a big overhead in DRAM/cache requirements and in the in-memory structures used to track them.  That alone could have a significant impact on array performance.
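
As a rough back-of-envelope illustration of that scaling problem (the per-object figure below is an assumption made purely for the arithmetic, not a vendor number), the metadata footprint grows linearly with the object count:

```python
# Back-of-envelope sketch of the metadata scaling problem described above.
# The 4 KB of in-memory state per object is a pure assumption for the sake of
# arithmetic, not a figure from any vendor.
PER_OBJECT_STATE_BYTES = 4 * 1024

def metadata_footprint_gb(object_count: int) -> float:
    """In-memory metadata footprint (GB) for a given number of objects."""
    return object_count * PER_OBJECT_STATE_BYTES / 1024**3

print(f"  2,000 LUNs  -> {metadata_footprint_gb(2_000):.3f} GB")
print(f"200,000 VVOLs -> {metadata_footprint_gb(200_000):.3f} GB")
```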

Then we have to think about other system functions such as replication, (array-based) snapshots and VAAI offload, all of which need to operate at this finer level of granularity.  Again this can represent a significant design problem for array vendors.  Think also about array connectivity (how each object will be addressed over Fibre Channel, FCoE, iSCSI and NFS) and how those objects will be queued on a connected port.  Finally there’s the key benefit of VVOLs: the ability to apply independent QoS (Quality of Service) to each object.  Many arrays today don’t even offer this for traditional LUNs.
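
To show what per-object QoS implies, here is a hedged sketch using one token bucket per VVOL rather than per LUN; the class, object names and limits are invented for illustration and do not reflect how any particular array implements rate limiting.

```python
# Hedged sketch of per-object QoS: one token bucket per VVOL rather than per
# LUN. Names and limits are invented and do not reflect any real array.
import time

class TokenBucket:
    def __init__(self, iops_limit: int):
        self.rate = iops_limit           # IOs allowed per second
        self.tokens = float(iops_limit)  # start with a full bucket
        self.last = time.monotonic()

    def allow_io(self) -> bool:
        """Refill based on elapsed time, then spend one token per IO."""
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # the array would queue or throttle this IO

# One bucket per VVOL object instead of one per LUN; at hundreds of thousands
# of objects, even this per-object state adds up.
qos = {
    "vvol-web-vm01-disk0": TokenBucket(2_000),
    "vvol-sql-vm01-disk0": TokenBucket(10_000),
}
print(qos["vvol-web-vm01-disk0"].allow_io())  # True while tokens remain
```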

Hypervisor Changes

Naturally there are many changes on the hypervisor side too.  The most obvious is the addressability of VVOLs through existing storage protocols (the other end of the issue for the storage array), but the interaction between hypervisor and array also needs to be strengthened to ensure both are working in harmony rather than against each other.  As a case in point, imagine running Storage DRS in vSphere while the array attempts to balance performance and latency at its end.  If policies on the two components are misaligned, sDRS could attempt to move data around to fix a throughput issue that was deliberately introduced by the array, resulting in even more performance problems.
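
A toy example of that feedback loop, with entirely hypothetical logic standing in for both sDRS and the array:

```python
# Toy illustration of the misalignment described above: the array is
# deliberately throttling a noisy VM, Storage DRS only sees the resulting
# latency and migrates the VM, and the problem follows the VM to the new
# datastore. The logic is entirely hypothetical, not sDRS internals.
LATENCY_THRESHOLD_MS = 15.0

def observed_latency_ms(vm: str) -> float:
    # High latency here is the intended result of the array's own QoS policy.
    return 40.0 if vm == "noisy-vm" else 5.0

def sdrs_recommendation(vm: str, current_ds: str, other_ds: str) -> str:
    """sDRS reacts to latency it cannot attribute to array-side throttling."""
    if observed_latency_ms(vm) > LATENCY_THRESHOLD_MS:
        return other_ds  # migration changes nothing: the throttle follows the VM
    return current_ds

print(sdrs_recommendation("noisy-vm", "datastore01", "datastore02"))
```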

Features such as VASA will need both to provide additional data and to allow the hypervisor to specify QoS requirements at the VVOL level, with the array feeding back when policy settings can’t be achieved.
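
A minimal sketch of that kind of exchange, under assumed names (this is not the VASA specification, just the shape of the negotiation: the hypervisor requests a per-VVOL service level and the array reports whether it can comply):

```python
# Sketch of a policy-compliance check with invented names; not the VASA spec.
ARRAY_CAPABILITIES = {"max_iops_per_object": 5_000, "replication": True}

def check_vvol_policy(requested: dict) -> dict:
    """Return a compliance report rather than silently accepting the request."""
    unmet = []
    if requested.get("iops_limit", 0) > ARRAY_CAPABILITIES["max_iops_per_object"]:
        unmet.append("iops_limit")
    if requested.get("replication") and not ARRAY_CAPABILITIES["replication"]:
        unmet.append("replication")
    return {"compliant": not unmet, "unmet": unmet}

print(check_vvol_policy({"iops_limit": 10_000, "replication": True}))
# -> {'compliant': False, 'unmet': ['iops_limit']}
```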

The Architect’s View

VVOLs seem simple on the surface; however, what’s simple in concept is rarely simple in execution.  VVOLs have taken time to arrive and even then, initial functionality may be limited to VM addressability rather than full QoS.  I imagine we will see a league table of vendors able to support full VVOL capabilities, and that will provide a good indicator of today’s advanced versus legacy storage architectures.

Related Links

Comments are always welcome; please indicate if you work for a vendor as it’s only fair.  If you have any related links of interest, please feel free to add them as a comment for consideration.


Copyright (c) 2009-2014 – Chris M Evans, first published on http://blog.architecting.it, do not reproduce without permission.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • calvinz (http://www.hp.com/storage/blog)

    Hey Chris,
    I can only speak to this from an HP Storage perspective, but we made a dramatic increase to the number of objects an HP 3PAR can manage because we knew this would be needed for vVols. This change was made in 3PAR OS version 3.1.3, and the maximum supported objects/volumes was increased to 64K.

    I think other storage vendors using a traditional 2-controller active/passive array (like VNX) will struggle to support vVols, because they dramatically increase the number of LUNs per VM. I do think your statement of “hundreds of thousands of VVOL objects” is the extreme case, though. We think a VM today will probably require an average of 8 objects with vVols, though that could go up with lots of snapshots.

    If you didn’t know, 3PAR is the exclusive FC development platform that VMware is using for vVols – 3PAR will be there on day 1 with vVols. I’ll be shocked to see VMware’s parent company in that position.

    Lastly, I have a post on my blog that has audio and slides from our HP and VMware vVol session from VMworld – definitely worth checking out. http://hpstorage.me/vVolsAtVMworld2014

  • mimmus

    For disaster recovery purposes, it is currently simple to put all of the VMs involved into a few datastores and activate storage replication between them.
    Having to configure “hundreds of thousands” of replicas sounds like a nightmare.
