
Enterprise Computing: The Benefits of Wide Striping – Avoiding A Long Tail


[Chart: IOPS per RAID group, ordered from most to least]


The chart shows that in some array designs (typically the older enterprise arrays), I/O distribution was not evenly balanced, so not all drives were being used to their full capability.  This was mitigated with tools that moved LUNs or sub-LUNs around; alternatively, concatenated devices such as metas and LUSEs were used to spread the load.

The only real solution to the I/O balancing problem is genuine wide striping.  Manual or even automated rebalancing, and the use of metas, are just workarounds.  Once wide striping is in place, either more work can be performed, or the number of spindles or their “quality” can be reduced; i.e. you can build an entirely SATA array, as XIV does.
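To make the imbalance concrete, here is a toy model (all figures hypothetical) comparing the busiest drive under a skewed, per-RAID-group workload with the same total workload wide-striped across every spindle:

```python
DRIVES = 64            # spindles in the array (hypothetical)
WORKLOAD_IOPS = 6000   # total front-end I/O demand (hypothetical)

def busiest_drive_load(per_group_iops, drives_per_group=8):
    """IOPS seen by the busiest single drive when each RAID group
    only spreads its own workload across its own members."""
    return max(iops / drives_per_group for iops in per_group_iops)

# A skewed distribution typical of an unbalanced array: a few hot groups.
narrow_groups = [3000, 1500, 800, 400, 200, 60, 30, 10]  # sums to 6000

narrow_hot = busiest_drive_load(narrow_groups)  # 3000 / 8  = 375 IOPS
wide_even = WORKLOAD_IOPS / DRIVES              # 6000 / 64 = 93.75 IOPS

print(f"busiest drive, narrow RAID groups: {narrow_hot:.0f} IOPS")
print(f"every drive, wide striping:        {wide_even:.2f} IOPS")
```

With narrow groups the hottest drive runs at 375 IOPS while the cold groups sit nearly idle; wide striping levels the same 6,000 IOPS down to under 94 per drive, which is exactly the headroom the chart above illustrates.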

There are of course disadvantages to having your data more widely spread.  The most obvious is the increased risk of data loss when the RAID system fails – i.e. a double disk failure.  The wider the striping, the wider the impact.  The tradeoff is the benefit of increased performance.  You have to choose what level of risk/impact you consider acceptable versus the potential gains.

If you’re not doing wide striping today then you should seriously be considering it.  After all, you’re only harnessing performance capacity within the array that you’ve already paid for.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • http://www.storagegumbo.com John Dias

    Imagine if you could wide stripe using a mix of drive types instead of just SATA, as XIV does. Now imagine that your array could make sure that writes used only the fastest disks (15K FC or SSD) in the stripe, and moved older, infrequently accessed data (reads) to the slower, more cost-efficient SATA drives?

    You’d be a Compellent Storage Center customer! 🙂

  • Ravi


    Can you comment on Sun Open Storage's use of wide stripes?

    From the whitepaper:

    “Triple parity RAID, wide stripes — a RAID configuration where each stripe has three disks for parity, and where wide stripes are configured to maximize capacity. This configuration will yield high capacity, and high availability as data will remain available even after sustaining three disk failures. The availability and capacity are delivered at the expense of performance as this mode requires more calculations than double parity RAID. Also, while bandwidth will be acceptable in this wide stripe configuration, the number of I/O operations that the entire system can perform will be diminished. As with other RAID configurations, […]”

  • http://www.davenportgroup.com Paul Clifford

    It's simple math: a 15K drive will generate sustained IOPS of about 170, a SATA drive about 70. If you create a “pool” of these drives and increase the number of spindles working on each I/O, the aggregate IOPS increase with each drive used. We have been building these powerful performance pools for customers for over five years.

    Fast, Simple, and Efficient – that is how a Compellent system was designed.

    Paul Clifford
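The pool arithmetic in the comment above can be sketched as follows, taking the per-drive figures Paul quotes (170 IOPS for 15K, 70 for SATA); the pool sizes are hypothetical:

```python
IOPS_15K = 170   # sustained IOPS per 15K drive (figure from the comment)
IOPS_SATA = 70   # sustained IOPS per SATA drive (figure from the comment)

def pool_iops(n_15k=0, n_sata=0):
    """Aggregate sustained IOPS for a pool in which every I/O is
    wide-striped across all member drives."""
    return n_15k * IOPS_15K + n_sata * IOPS_SATA

print(pool_iops(n_15k=24))             # 24 x 170 = 4080 IOPS
print(pool_iops(n_15k=16, n_sata=32))  # 2720 + 2240 = 4960 IOPS
```

Every drive added to the pool raises the aggregate figure, which is the “more spindles working on each I/O” effect described above.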


  • Yves Pelster

    @ Paul:
    Paul, how do you come to the conclusion that a SATA drive will do only 70 IOPS?
    As discussed some time ago in this blog space, I think an FC or SAS drive can be estimated at roughly 185-200 IOPS, and a SATA drive at about 100…

    Apart from that: do you Compellent guys do anything other than sail in XIV's wake in blogs and forums? 🙂

    (Don’t get me wrong – I think both technologies are well worth a closer look…)

  • http://storage.firsttech.com joel


    Yves, maybe I can help out. You're right that technically you can run the disks to a much higher IOPS level. However, you'll begin to see a substantial increase in latency. This trade-off of squeezing very high IOPS out of drives can wreak havoc on database/application systems.

    I agree with Paul: the IOPS numbers to use for production configuration design are 70-80 for SATA and 170-180 for FC.

    My clients love their Compellent SANs because they deliver without surprises.

    Joel Carlson
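Joel's latency point can be illustrated with a simple M/M/1 queueing sketch (an illustrative model only, not a vendor figure; the 200 IOPS service rate is an assumption):

```python
SERVICE_RATE = 200.0  # assumed maximum IOPS the drive can service

def response_time_ms(offered_iops, service_rate=SERVICE_RATE):
    """Mean response time in milliseconds for an M/M/1 queue:
    T = 1 / (mu - lambda), converted from seconds to ms."""
    if offered_iops >= service_rate:
        return float("inf")  # at or past saturation the queue grows without bound
    return 1000.0 / (service_rate - offered_iops)

for load in (100, 150, 180, 195):
    print(f"{load:3d} IOPS offered -> {response_time_ms(load):6.1f} ms")
```

At half the service rate the mean response time is 10 ms; at 195 of 200 IOPS it is 200 ms. That non-linear blow-up near saturation is why production design figures stay well below a drive's burst maximum.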


  • Pingback: Storage Arrays Do A Few Things Very Well – @SFoskett – Stephen Foskett, Pack Rat()

  • Florian Heigl

    Sun/Oracle and others have already had one round of chassis with small-format flash, and it didn't go so well. I think that's one of the reasons why this isn't “moving” yet.
    The other issue is heat. I have done a few test runs with the 950 Pro NVMe drives.
    As things stand it's not really workable for any serious load; you'd need to remove the Samsung stickers and replace them with heatsinks. I am not joking.
