
Hardware Review: Promise SmartStor NS4600 – Part II


This is a series of posts on the Promise SmartStor NS4600 home storage server.

In the first post of this series, we discussed the basic hardware configuration.  This post will look at connectivity and RAID configurations supported by the NS4600.

A quick glance at the back of the unit provides a clue to the available connectivity (see the first image in this post).  There are 2x USB, 1x eSATA and 1x Ethernet ports.  The Gigabit Ethernet connection supports multiple host protocols, which we’ll discuss in more detail later.  The eSATA port provides connectivity to external devices, either to back up the NS4600 or to back up the external device to the NS4600.  The USB ports support the same source-and-target backup functionality, which means all host connections are effectively IP-based.  A USB port can also act as a print server for USB printers.

RAID Support

The NS4600 supports RAID levels 0, 1, 5 & 10, implemented by the onboard Promise PDC42819 SATA RAID Controller.  The required RAID level is specified at the time a volume is created; multiple logical volumes are supported on the NS4600 as long as sufficient disks are available.  This means, for example, two RAID-1 volumes could be created, or a RAID-5 volume could be established alongside a RAID-0 volume.  RAID settings are configured from the “RAID Management” option in the web GUI (PASM).  A number of example screenshots showing various RAID configurations are displayed in the gallery at the end of this post.
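As a rough guide to the capacity trade-offs between these levels, the usable space for each can be sketched as below.  This is a generic illustration of standard RAID arithmetic, not figures taken from the PASM interface; the function name and the four-drive configuration are my own assumptions.

```python
# Rough usable-capacity calculator for the RAID levels the NS4600 supports.
# Generic RAID arithmetic for illustration only; not vendor-specific figures.

def usable_capacity(level: str, drives: list[float]) -> float:
    """Return usable TB for a set of drives at a given RAID level."""
    n = len(drives)
    smallest = min(drives)           # arrays are limited by the smallest drive
    if level == "RAID-0":
        return n * smallest          # striping: all capacity, no protection
    if level == "RAID-1":
        if n != 2:
            raise ValueError("RAID-1 here assumes a two-drive mirror")
        return smallest              # mirroring: half the raw capacity
    if level == "RAID-5":
        if n < 3:
            raise ValueError("RAID-5 needs at least three drives")
        return (n - 1) * smallest    # one drive's worth of space goes to parity
    if level == "RAID-10":
        if n < 4 or n % 2:
            raise ValueError("RAID-10 needs an even number (>= 4) of drives")
        return (n // 2) * smallest   # striped mirrors: half the raw capacity
    raise ValueError(f"unsupported level: {level}")

drives = [2.0, 2.0, 2.0, 2.0]  # four 2TB drives, as in my test unit
for level in ("RAID-0", "RAID-5", "RAID-10"):
    print(level, usable_capacity(level, drives), "TB")
```

So with four 2TB drives the choice is essentially 8TB unprotected, 6TB with single-parity protection, or 4TB mirrored.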

Volumes form the foundation of how data is presented from the NS4600.  They provide the option of using RAID to manage the trade-off between capacity, performance and protection.  For example, a single RAID-0 volume could be used for backups while the main data sits on a RAID-1 volume, with a final drive kept as a spare.  Drives are hot-pluggable, so not all slots need to be populated initially; drives can be added to a RAID group to increase capacity over time.  For instance, the NS4600 could be populated with two drives in the first instance, then expanded dynamically by changing the RAID level or adding capacity.

Although RAID is implemented in hardware, the options available are reasonably flexible, offering multiple dynamically expandable configurations.  However, I’d question whether traditional RAID implementations are the way forward in home storage devices.  Bear in mind that 2TB drives are becoming the norm, which means that within 18 months to two years, 3TB, 4TB and even 5TB drives will be commonplace.  At much larger capacities, unrecoverable read errors become a real issue, so the ability to recover individual chunks of data is preferable to recovering an entire disk.  This is the methodology Data Robotics have implemented in their BeyondRAID technology.  Promise are playing the trade-off between rock-solid RAID-in-silicon and RAID-in-software.  At the moment my money goes with software RAID and the enhanced flexibility it brings.

RAID Rebuild on NS4600

Now I like a bit of fun with RAID systems, and one of my favourite tricks is to exchange RAID drives within an array.  So, on one of the NS4600s, I powered the unit down, removed all the drives and powered it back up.  Fortunately, the device configuration isn’t stored on the removed disks, and the NS4600 remained accessible, although it protested that no drives were present.  After adding the drives back (in a random order), the NS4600 detected them and recovered the RAID sets, and I was back in business.  I wouldn’t advise removing all the drives in normal practice; however, if a chassis completely fails, then presumably the data can be recovered in another unit (although I haven’t tested this).

Volumes

NS4600 - Free Disks - Unconfigured Volumes

Volumes are the logical entities created when establishing the RAID configuration of the NS4600.  The first screenshot shows the NS4600 before any volumes have been created, with four disks in the free pool.  The second screenshot shows two volumes created from the four available drives in my test NS4600.  Volumes and the RAID set on which they are stored have a 1:1 relationship; a volume may not span RAID sets and a RAID set may not contain more than one volume.  This may seem a little restrictive; however, as we’ll see later, file systems and iSCSI LUNs are contained within a volume, and therefore volumes should be thought of as providing a specific level of RAID availability.

NS4600 Multiple Volumes

Volumes can be expanded; the screenshot shows a volume in the process of being expanded, in this case being converted from RAID-1 to RAID-5.  RAID-0 volumes can be expanded into larger RAID-0 volumes or migrated to RAID-1 or RAID-5 configurations.  In short, volumes can be increased in size or moved to a higher level of RAID protection, but not down.

NS4600 Disk Formatting

The NS4600 has the concept of spares as opposed to free (unused) disks.  A hot spare can be used to dynamically rebuild a failed RAID group.  The screenshot shows a rebuild in progress for a failed drive: I’d left drive 4 as the hot spare and pulled drive 1 to simulate a failure.  The NS4600 automatically kicks off the rebuild and the spare drive becomes part of the RAID group.  Here we see the second issue with traditional RAID systems.  I simulated this failure on RAID groups containing no data, yet the rebuild took hours because it is a physical drive recovery.  Contrast this with more progressive RAID systems, where only the active data is copied, significantly reducing recovery times.  The trade-off, of course, is whether a home system would be rebuilding on a regular basis.  Chances are it wouldn’t, but if a failure did occur, you would want the rebuild to complete as quickly as possible.  For the record, the full rebuild of a 2TB drive in a mirrored RAID-1 pair took 12 hours to complete with no other workload on the device.

NS4600 Raid Rebuild
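A back-of-envelope calculation puts that 12-hour figure in context.  The effective throughput below is derived from the quoted numbers, not measured directly, and the 10% utilisation for the data-aware comparison is a purely hypothetical figure:

```python
# Back-of-envelope check on the rebuild figure quoted above: a full 2TB
# mirror rebuild in 12 hours, versus a hypothetical data-aware rebuild
# that copies only the blocks actually in use.

DRIVE_TB = 2.0        # capacity of the rebuilt drive (decimal TB)
REBUILD_HOURS = 12.0  # observed full-disk rebuild time

tb_per_hour = DRIVE_TB / REBUILD_HOURS
mb_per_sec = tb_per_hour * 1_000_000 / 3600  # decimal TB -> MB, hours -> seconds

print(f"effective rebuild rate: ~{mb_per_sec:.0f} MB/s")

# A data-aware rebuild at the same rate, with (say) 10% of the volume in use:
used_fraction = 0.10
print(f"hypothetical data-aware rebuild: ~{REBUILD_HOURS * used_fraction:.1f} hours")
```

At roughly 46 MB/s the controller is nowhere near the drive’s sequential limit, and a rebuild that copied only active data on a lightly used home volume would finish in an hour or two rather than half a day.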

In the next post I’ll look at the logical level of file systems and iSCSI LUNs.  Comments always welcome as usual.

Disclaimer: Promise have provided me with two NS4600 devices for this review.  These devices will be returned at the end of the review period.  This is an independent review and has not been sponsored or paid for by Promise.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.