
Enterprise Computing: Run My Storage At 60%? No Way!


Hu Yoshida has an interesting view on his recent post discussing storage utilisation rates.  His concluding remark suggests running at a maximum of 60% utilisation – even with Dynamic Provisioning.  Hu, you must be joking, right?

Point 1: I’ve paid for 100% of my storage and I’m going to use it.  I don’t remember any vendor suggesting I pay only 60% of their invoice and call it quits.  Granted, spending an inordinate amount of time to reach the 80%+ goal isn’t necessarily cost-effective; however, it can be achieved.
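To put a rough number on that first point, here is a back-of-envelope sketch of what unused capacity costs.  The $1,000/TB price is an assumed figure purely for illustration, not anyone's actual list price:

```python
# Effective cost per usable TB at a given utilisation rate.
# PRICE_PER_TB is an assumed illustrative figure, not a real quote.
PRICE_PER_TB = 1000.0

def cost_per_usable_tb(utilisation: float) -> float:
    """Cost of each TB actually holding data, given the fraction in use."""
    return PRICE_PER_TB / utilisation

for u in (0.6, 0.8):
    print(f"{u:.0%} utilisation -> ${cost_per_usable_tb(u):,.2f} per usable TB")
# 60% utilisation -> $1,666.67 per usable TB
# 80% utilisation -> $1,250.00 per usable TB
```

In other words, capping utilisation at 60% inflates the effective price of every stored terabyte by a third compared with running at 80%.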

Point 2: It’s perfectly possible to run at 80% utilisation.  It just takes some thought and planning.  As Hu rightly points out, Dynamic Provisioning helps significantly towards this.  The features of wide striping and thin provisioning mean storage can be more easily provisioned and requires less manual balancing.

Point 3: Achieving high utilisation isn’t all about technology.  It’s also about process.  That means Demand Planning, Capacity Planning, efficient processes for deployment of new hardware and for provisioning of customer requests.  All this is possible without incurring additional expense.

So, don’t think 80%+ isn’t an achievable target.  After all, it’s your money you’re wasting if you don’t try!

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • http://blog.fosketts.net Stephen Foskett

    I hear some cloud storage providers run way over 60% utilization. Must be something about the homogenous architecture and object protocols…

  • http://www.cmg.org Storage Scrutiny


    What’s more important and more expensive to provision, storage capacity or storage performance?

    Clearly it’s the latter. Capacity problems are relatively cheap to solve — rearchitecting to solve storage performance problems costs orders of magnitude more.

    Ever since the 1980s we’ve had zone-based disk formatting — because of constant linear bit density, roughly two-thirds of HDD capacity lives on the outer 50% of the disk stroke.

    Using that last 33% of disk capacity entails nearly doubling the average seek time — a huge performance penalty.

    IOPS are more valuable than TBytes. Cramming to 80% or above is penny-wise and performance foolish.
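The geometry behind that comment can be checked with a quick sketch.  With constant linear bit density a track’s capacity is proportional to its radius, so the capacity between two radii goes as the difference of squares.  The radii below are illustrative (inner radius 20% of the outer); the exact fraction depends on the drive’s inner/outer ratio:

```python
# Back-of-envelope check of the zone-bit-recording claim: capacity per
# track is proportional to radius, so capacity between radii r_lo..r_hi
# is proportional to r_hi^2 - r_lo^2.  Radii are illustrative values.
r_inner, r_outer = 0.2, 1.0

def capacity_fraction(r_lo: float, r_hi: float) -> float:
    """Fraction of total platter capacity lying between r_lo and r_hi."""
    total = r_outer**2 - r_inner**2
    return (r_hi**2 - r_lo**2) / total

mid = (r_inner + r_outer) / 2  # midpoint of the seek stroke
print(f"outer half of stroke holds {capacity_fraction(mid, r_outer):.0%} of capacity")
# outer half of stroke holds 67% of capacity
```

With these assumed radii the outer half of the stroke holds about two-thirds of the capacity, matching the figure quoted in the comment above.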

  • http://www.brookend.com Chris Evans

    I think you’re missing a number of things here. Firstly, we don’t read and write directly to disk, so physical disk performance is masked/improved by cache and all the other components in place. We’re not still using 3380s, you know.

    Second, 100% of data isn’t always active. In fact, I showed in a recent post a typical mixed-workload array serving 75% of its I/O from just 25% of its data. This means the majority of data is static, so why shouldn’t we therefore utilise all of that capacity in place of performance?

    There will be specific instances where performance is an absolute requirement, and that can be designed for. However, for general-use storage arrays, I see no reason not to use 80% or more of the available capacity.
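That 75/25 skew can be made concrete with some assumed array totals (100 TB and 10,000 IOPS here, purely illustrative), showing how much quieter the cold capacity is than the hot:

```python
# Illustrative numbers for the 75/25 skew: 25% of the data serves 75%
# of the I/O.  Array totals below are assumptions, not measurements.
total_tb, total_iops = 100.0, 10_000.0

hot_tb, hot_iops = 0.25 * total_tb, 0.75 * total_iops
cold_tb, cold_iops = 0.75 * total_tb, 0.25 * total_iops

print(f"hot  data: {hot_iops / hot_tb:.0f} IOPS/TB over {hot_tb:.0f} TB")
print(f"cold data: {cold_iops / cold_tb:.0f} IOPS/TB over {cold_tb:.0f} TB")
# hot  data: 300 IOPS/TB over 25 TB
# cold data: 33 IOPS/TB over 75 TB
```

Under these assumptions the hot quarter of the data is roughly nine times busier per terabyte than the rest, which is why filling the array with largely static data costs so little in performance.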


  • Yves Pelster

    I keep wondering why everybody is talking about performance all the time. Most customers I have had the pleasure of measuring have massively over-estimated their IOPS requirements.
    At the same time, performance issues have regularly been due to too small a number of spindles being allocated to the affected application.

    My answer to that is simple and well-known: wide striping!
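The intuition behind wide striping can be sketched with a simple model: aggregate random IOPS scale roughly linearly with the number of spindles serving a volume.  The 180 IOPS/spindle figure is an assumed ballpark for a 15k RPM drive, and the model ignores contention:

```python
# Rough model of wide striping: random IOPS scale with spindle count.
# IOPS_PER_SPINDLE is an assumed ballpark for a 15k RPM drive.
IOPS_PER_SPINDLE = 180

def aggregate_iops(spindles: int) -> int:
    """Ideal aggregate random IOPS across a wide stripe (no contention)."""
    return spindles * IOPS_PER_SPINDLE

for n in (8, 32, 128):
    print(f"{n:3d} spindles -> ~{aggregate_iops(n):,} random IOPS")
# 8 spindles -> ~1,440; 32 -> ~5,760; 128 -> ~23,040
```

So a volume striped across 128 spindles sees an order of magnitude more random I/O headroom than one confined to a small RAID group, independent of how full the array is.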
