Review: Zadara Storage Virtual Private Storage Array


Over the last week I’ve been working with and reviewing the Virtual Private Storage Array (VPSA) offering from Zadara Storage.  VPSA provides dedicated storage resources for cloud computing environments within Cloud Service Provider data centres, such as those of Amazon Web Services.  It seems somewhat counter-intuitive to describe dedicated storage resources as “software defined”, but that’s exactly how the company is pitching the product.  So what’s on offer, and how exactly is this an SDS play?

Cloud Storage Resources

One of the obvious benefits of cloud computing is the abstraction of the actual hardware used to deliver features such as compute, networking and storage.  These resources should be service-defined rather than hardware-defined and, in the case of storage, cover metrics such as capacity, performance and reliability/availability.  Looking at AWS as an example, customers can choose either EBS (Elastic Block Store) or S3 (an object store) for their data.  EBS comes with a few restrictions: volumes are limited to 1TB in size; volumes can only be connected to a single host (so no clustering); and, most importantly, volumes only deliver around 100 IOPS of throughput, although they can burst higher.  AWS does offer “Provisioned IOPS Volumes”, which provide higher throughput, although the achievable performance figures are tied to a ratio of the capacity of the volume.  S3, in comparison, is an object store where the entire object must be either retrieved or stored in a single operation.  This isn’t flexible or efficient for large files where only part of a file may change.  VPSA looks to fill the gap by resolving some of the issues of using standard cloud-based storage volumes.
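The capacity-to-performance coupling of Provisioned IOPS volumes can be sketched as a couple of lines of arithmetic.  Note the 10:1 IOPS-per-GB ratio below is an assumption reflecting the limits as I understand them at the time of writing, not a figure from AWS documentation; check the current EBS documentation before relying on it.

```python
# Sketch of the capacity-to-performance coupling described above.
# ASSUMPTION: the 10:1 IOPS-per-GB ratio is the author's working
# assumption, not an AWS-documented constant; verify before use.

PIOPS_PER_GB = 10          # assumed maximum ratio of provisioned IOPS to GB
STANDARD_EBS_IOPS = 100    # approximate baseline quoted in the article


def max_provisioned_iops(volume_gb: int) -> int:
    """Maximum IOPS that could be provisioned for a volume of this size."""
    return volume_gb * PIOPS_PER_GB


def min_volume_gb_for(iops: int) -> int:
    """Smallest volume size (GB) needed to provision the target IOPS."""
    return -(-iops // PIOPS_PER_GB)  # ceiling division


if __name__ == "__main__":
    # To provision 2,000 IOPS you would need at least a 200 GB volume,
    # regardless of how much capacity you actually need.
    print(max_provisioned_iops(100))   # 1000
    print(min_volume_gb_for(2000))     # 200
```

The point of the sketch is the inversion it exposes: under a ratio-based scheme, your performance requirement can dictate how much capacity you must buy.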

How It Works

VPSA is deployed on physical hardware either in or co-located with the cloud provider’s data centre.  In the case of AWS (which I used for this trial) this is implemented using Direct Connect, which requires some additional configuration that I will get onto later.  The VPSA hardware consists of a number of servers with dedicated storage media, both spinning and solid state.  Typical drive options are 300GB 15K SAS, 100GB SSD and 3TB SATA, although the specific options available currently vary by provider and location.  When a customer creates a VPSA, physical disk resources are assigned to the virtual array, along with a slice of CPU and memory, depending on the size of VPSA instance chosen.  Currently there are four levels, from Baby to Blazing, with each level doubling the resources of the previous one.  For example, Baby arrays have 1 CPU, 4GB of RAM and support up to 5 physical drives.  Blazing arrays have 8 CPUs, 32GB of RAM and support up to 40 drives.  Once created, drives and memory can be added and arrays upgraded to a more powerful model dynamically without impact or outage.  Each VPSA consists of two controllers in an active/passive configuration, providing resiliency in the event of a hardware failure.
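The sizing ladder can be reconstructed from the two data points given: each level doubles the CPU, RAM and drive count of the one below it.  A quick sketch (the two middle tiers are unnamed in the text, so they appear here only as level numbers):

```python
# Reconstructing the VPSA sizing ladder from the two data points given:
# each level doubles the CPU, RAM and drive count of the previous one.
# Only Baby (level 1) and Blazing (level 4) are named in the article.

def vpsa_tiers(levels=4, cpus=1, ram_gb=4, max_drives=5):
    """Return (level, cpus, ram_gb, max_drives) tuples, doubling each step."""
    tiers = []
    for level in range(levels):
        tiers.append((level + 1, cpus, ram_gb, max_drives))
        cpus, ram_gb, max_drives = cpus * 2, ram_gb * 2, max_drives * 2
    return tiers


if __name__ == "__main__":
    for tier in vpsa_tiers():
        print(tier)
    # The fourth tier works out to (4, 8, 32, 40), matching the Blazing
    # figures quoted in the text (8 CPUs, 32GB RAM, 40 drives).
```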

Once disks are assigned to a VPSA, they are treated in the same way as in a physical storage array and so can be built into RAID groups from which logical volumes are assigned (RAID-1, RAID-5 and RAID-6 are provided).  I used a simple Baby VPSA configuration with four drives, arranged in a mirrored RAID-1 (RAID-10) group.  From there I created both block-based iSCSI volumes and NFS/SMB share volumes.  All of the standard features of a physical array are available to the user.  These include:

  • Security of data in flight through IPsec and mutual CHAP for iSCSI traffic, and encryption at rest on disk.
  • Integration with Active Directory for SMB shares and local users for NFS.
  • Thin provisioning for block-based volumes.
  • Local snapshots for data protection.  Snapshots can be taken as frequently as every minute.
  • Remote replication (or mirroring) to another VPSA, which can be geographically distant (e.g. in another AWS availability zone).  Replications are implemented at the volume level with a minimum interval of one hour.
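Since RAID groups behave as they would in a physical array, usable capacity follows standard RAID arithmetic.  A minimal sketch, ignoring any vendor-specific overheads, using my trial configuration as the example:

```python
# Usable capacity for the RAID levels the VPSA offers, using standard
# RAID arithmetic (no vendor-specific overheads accounted for).

def usable_gb(raid_level: str, drives: int, drive_gb: float) -> float:
    """Approximate usable capacity of a single RAID group."""
    if raid_level == "RAID-1":           # mirrored pairs (RAID-1/RAID-10)
        return drives // 2 * drive_gb
    if raid_level == "RAID-5":           # one drive's worth of parity
        return (drives - 1) * drive_gb
    if raid_level == "RAID-6":           # two drives' worth of parity
        return (drives - 2) * drive_gb
    raise ValueError(f"unsupported RAID level: {raid_level}")


if __name__ == "__main__":
    # The trial configuration: four drives in a mirrored (RAID-10)
    # group yields roughly half the raw capacity.
    print(usable_gb("RAID-1", drives=4, drive_gb=3000))   # 6000.0
    print(usable_gb("RAID-6", drives=4, drive_gb=3000))   # 6000.0
```

With only four drives, RAID-10 and RAID-6 happen to cost the same capacity; the trade-off diverges as drive counts grow.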
All of the VPSA features are accessed through a web portal dashboard, which is accessible only inside the customer’s virtual private network.  As mentioned earlier, additional work is required to connect the VPSA into the user’s cloud compute environment.  In the case of AWS, that means creating a Virtual Private Cloud, a Virtual Private Gateway and assigning that to a resilient pair of Direct Connect interfaces to Zadara’s routers.  This is done on an invitation basis; the Zadara routers only permit connectivity from the user’s VPG connection.
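The AWS-side plumbing can be sketched in terms of the parameters you would hand to boto3’s EC2 client.  This is an illustration only: the CIDR block, ASN and resource IDs below are placeholders of my own, not values from the setup, and the functions build parameter dictionaries rather than making live calls (the Direct Connect side is authorised by Zadara, not by anything you run yourself).

```python
# A sketch of the AWS-side setup described above, expressed as the
# parameter dictionaries you would pass to boto3's EC2 client.
# All concrete values here are illustrative placeholders.

def vpc_params(cidr="10.0.0.0/16"):
    # ec2.create_vpc(**vpc_params()) creates the Virtual Private Cloud.
    return {"CidrBlock": cidr}


def vpn_gateway_params():
    # ec2.create_vpn_gateway(**vpn_gateway_params()) creates the
    # Virtual Private Gateway (VPG).
    return {"Type": "ipsec.1"}


def attach_params(vpc_id, vgw_id):
    # ec2.attach_vpn_gateway(**attach_params(...)) links the gateway to
    # the VPC; Zadara's routers then accept the Direct Connect link on
    # an invitation basis.
    return {"VpcId": vpc_id, "VpnGatewayId": vgw_id}


if __name__ == "__main__":
    print(vpc_params())
    print(vpn_gateway_params())
    print(attach_params("vpc-1234abcd", "vgw-5678efgh"))
```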


A solution couldn’t be correctly classed as a service if it didn’t include metering, reporting and monitoring features.  VPSA provides full monitoring/metering for each volume (see the screenshots at the end of this post) with an audit log of all activities applied to the volume itself.  Metering is also available on pools, RAID groups and physical disks, allowing each piece of the infrastructure to be analysed, a feature that is important in larger implementations.


VPSAs are charged by the hour based on two components.  There is a charge for the VPSA itself, dependent on the configuration/size chosen.  There is then an additional charge for each drive used in the configuration.  See the Codex link at the end of this post for more details.  Pricing of the VPSA rises in proportion to the configuration chosen.
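The two-part bill is simple to model.  To be clear, the rates below are invented placeholders purely to show the shape of the calculation; see Zadara’s pricing for real figures.

```python
# A toy cost model for the two-part hourly charging described above.
# ASSUMPTION: the rates are invented placeholders, not Zadara's prices.

HOURS_PER_MONTH = 730  # common billing approximation (24 * 365 / 12)


def monthly_cost(vpsa_rate_per_hr: float, drive_rate_per_hr: float,
                 drives: int) -> float:
    """VPSA engine charge plus a per-drive charge, both billed hourly."""
    hourly = vpsa_rate_per_hr + drive_rate_per_hr * drives
    return hourly * HOURS_PER_MONTH


if __name__ == "__main__":
    # e.g. a hypothetical $0.40/hr Baby VPSA with four $0.05/hr drives
    print(round(monthly_cost(0.40, 0.05, 4), 2))
```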


I did run some performance tests against the VPSA.  Without an understanding of the underlying hardware (and without doing lots of comparison tests), it’s difficult to make judgements on performance.  However, I was able to achieve consistent response times and throughput with IOMETER against a Windows Server 2008 iSCSI LUN, and a higher level of performance than I could have achieved with EBS.  One interesting consideration with an implementation of this kind is how well Zadara (and the service provider) could and would respond when performance problems are encountered.  That’s something we will have to see as deployments increase.
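For readers without IOMETER to hand, the same kind of measurement can be approximated in a few lines: time a burst of small synchronous writes and derive an IOPS figure.  This is a crude stand-in, not a substitute for a proper benchmark; it measures whatever disk backs the path you give it, so pointing it at a mounted iSCSI volume would exercise the LUN instead.

```python
# A minimal stand-in for the kind of IOMETER run described above:
# time a burst of small random writes against a file and derive IOPS.
import os
import random
import time


def measure_write_iops(path: str, io_size: int = 4096,
                       file_mb: int = 4, ops: int = 200) -> float:
    """Issue `ops` random writes of `io_size` bytes; return ops/second."""
    size = file_mb * 1024 * 1024
    buf = os.urandom(io_size)
    with open(path, "wb") as f:
        f.truncate(size)                 # pre-size the test file
        start = time.perf_counter()
        for _ in range(ops):
            f.seek(random.randrange(0, size - io_size))
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())         # force the write to the device
        elapsed = time.perf_counter() - start
    os.remove(path)
    return ops / elapsed


if __name__ == "__main__":
    print(f"{measure_write_iops('iops_test.bin'):.0f} write IOPS")
```

The fsync per operation is what makes the number meaningful; without it you would largely be measuring the page cache.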

The Architect’s View

As someone who comes from the traditional storage management world, I found the VPSA comfortable and easy to use; in many respects it resembles the smaller SMB-class arrays available today.  This is a positive thing, as it means the VPSA is easy to understand and configure.  Clearly many people will question the need to deploy physical server resources in a cloud service provider’s data centre, but VPSA provides a number of distinct benefits that AWS and other providers can’t offer, including the guarantee of physically isolated data, volume clustering, more manageable performance and the ability to replicate at the volume level between service providers.  As part of a transition to cloud, many potential customers may see the VPSA as a good way to start consuming cloud services while retaining some of their current design principles.

I see one great potential opportunity for Zadara, and that’s to package the VPSA as a VSA for deployment in private clouds on customer premises.  With replication, this could offer an easy way of getting data and applications into AWS and other cloud providers’ environments.  It also provides the opportunity to align private and public clouds in terms of storage architecture.  In terms of threats, the risk for the company is that service providers decide to implement this kind of solution themselves.  Microsoft, for instance, could use cut-down versions of Windows Server 2012 as storage servers and quite quickly implement the kind of functionality that VPSA offers.  As ever, keeping ahead of the game will be about developing new features and offerings; something I hope it will be possible to write about in the future.


Related Links

Comments are always welcome; please indicate if you work for a vendor as it’s only fair for others to judge context.  If you have any related links of interest, please feel free to add them as a comment for consideration.



Copyright (c) 2009-2014 – Chris M Evans, first published on http://blog.architecting.it, do not reproduce without permission.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
Noam Shendar:

    Chris, thank you very much for spending time with our service and taking the time to share your impressions. We’re glad you liked it!

    By the way, when we compare ourselves to Windows Server 2012 and other “Server SAN” solutions, we see some differences. We offer QoS, distribution of resources across multiple servers, per-tenant dedicated (on-demand) drives and other resources, true HA, granular control, etc. Is that a fair distinction, in your opinion?
