
Enterprise Computing 4 Pillars – Billing for Tiered Storage


This post is part of a series covering Storage Management.

With relatively simple storage arrays, the process of charging for utilisation is straightforward.  In an array containing a single disk type, for instance, usage can be charged in proportion to the storage consumed.  Imagine a 100TB array with an effective cost of £100,000; that equates to £1,000/TB, and any user can be charged on that basis.  As tiering is introduced, things get more complex; an array may consist of 20% tier 1 storage and 80% tier 2.  In this instance the cost of the individual tiers is required in order to charge accurately for the tier 1 and tier 2 components.  That demands a more detailed understanding of the array cost, separating the disk costs from the chassis and other components.  The cost of delivering each tier can then be based on the tier's disk cost plus a share of the other component costs.
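As a minimal sketch of that calculation (all figures invented for illustration, not taken from any real array), the shared chassis cost can be apportioned to each tier by its share of capacity:

```python
# Illustrative cost model for a tiered array.  Chassis/common costs are
# shared across tiers in proportion to each tier's capacity; disk costs
# are attributed per tier.  All numbers below are hypothetical.

def tier_cost_per_tb(chassis_cost, tiers):
    """tiers: {name: (capacity_tb, disk_cost)} -> {name: cost per TB}."""
    total_tb = sum(cap for cap, _ in tiers.values())
    rates = {}
    for name, (cap, disk_cost) in tiers.items():
        shared = chassis_cost * (cap / total_tb)  # proportional chassis share
        rates[name] = (disk_cost + shared) / cap
    return rates

# 100TB array: 20TB tier 1, 80TB tier 2; £40,000 chassis/common costs,
# £30,000 of tier 1 disk and £30,000 of tier 2 disk (invented split).
rates = tier_cost_per_tb(40_000, {"tier1": (20, 30_000), "tier2": (80, 30_000)})
print(rates)  # tier 1 comes out much more expensive per TB than tier 2
```

With this split, tier 1 costs £1,900/TB and tier 2 £775/TB, and the per-tier charges still sum back to the £100,000 array cost.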

Obviously this is a simplistic model, and charging can be taken to a higher degree of granularity: charging for port usage, cache and so on.  This is where a billing model, rather than chargeback, becomes useful, as the price charged need not relate directly to the cost of delivering the service at the component level.

This is all pretty clear and unambiguous, but what happens when dynamic tiering is introduced?  It’s now possible to move not just whole LUNs, but blocks of data between tiers for any individual LUN.  Compellent do it; Hitachi now do it with VSP, as do other manufacturers.  Data location is no longer static, but continuously variable, and the picture of storage will be different every time a snapshot is taken.  How should billing be implemented in this scenario?

It would be possible to take a snapshot of utilisation every day and, over the course of say a month, average out the utilisation to create a charge per LUN (e.g. LUN X is 10% tier 1, 80% tier 2 and 10% tier 3), then add up all the LUN utilisation for a single user.  That seems like a large amount of work for little gain.  Instead we’re back to the concept of describing tiers as a service offering rather than as technology, and ensuring the user receives storage within their service level agreement.  The hardware used to deliver that storage can then be tweaked over time to meet those SLAs, adding more capacity or higher-performance disks as required.
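To make the scale of that bookkeeping concrete, here is a rough sketch of the snapshot-averaging approach just described.  The snapshot format and the per-tier rates are invented for illustration:

```python
# Hypothetical sketch: average daily tier-placement snapshots over a
# billing period to produce a blended per-LUN charge.  Rates are invented.

TIER_RATE_PER_TB = {"tier1": 100.0, "tier2": 40.0, "tier3": 15.0}  # £/TB/month

def monthly_lun_charge(snapshots, lun_size_tb):
    """snapshots: list of {tier: fraction-of-LUN} dicts, one per day."""
    days = len(snapshots)
    avg = {}
    for snap in snapshots:
        for tier, frac in snap.items():
            avg[tier] = avg.get(tier, 0.0) + frac / days
    # Blended rate = sum of (tier rate x average fraction in that tier)
    return lun_size_tb * sum(TIER_RATE_PER_TB[t] * f for t, f in avg.items())

# LUN X: 10% tier 1, 80% tier 2, 10% tier 3 every day of a 30-day month, 5TB
snaps = [{"tier1": 0.1, "tier2": 0.8, "tier3": 0.1}] * 30
print(monthly_lun_charge(snaps, 5))
```

And this is the per-LUN view only; a real implementation would repeat it for every LUN and aggregate per user, which is exactly the overhead the fixed service-tier model avoids.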

Creating a service definition of tiers and then deploying infrastructure to meet it has benefits in a number of ways:

  • Tier costs can be fixed, so the user doesn’t get unexpectedly large or variable bills each month.  Costs are consistent and easy to understand.
  • Technology can be added to meet SLA demand; if SLAs are likely to be breached in performance terms, more SSD could be added to an array, for instance.  The tools exist today to monitor arrays to this level.
  • Complex billing is not required.  The simple process of looking at and charging for capacity by tier can be retained.

In some ways, multi-tier LUN billing is similar to billing for thin provisioned LUNs; the absolute utilisation can be variable but the LUN size stays consistent.  There is a greater emphasis on understanding data within thin environments, to ensure growth doesn’t exceed capacity, especially where over-provisioning has been used.  Similarly these processes can be applied to dynamically tiered environments.
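The thin-provisioning analogy can be sketched briefly: bill on the constant provisioned LUN size, while separately monitoring actual utilisation against the physical pool to manage over-provisioning risk.  All figures and the alert threshold below are invented:

```python
# Sketch of the thin-provisioning analogy: charge on provisioned size
# (which stays constant), monitor real usage against physical capacity.

def pool_overcommit(provisioned_tb, used_tb, physical_tb):
    """Return (over-commit ratio, physical pool utilisation)."""
    return provisioned_tb / physical_tb, used_tb / physical_tb

# 200TB provisioned to users on a 100TB physical pool, 70TB actually written
ratio, util = pool_overcommit(provisioned_tb=200, used_tb=70, physical_tb=100)
print(f"over-commit {ratio:.1f}x, pool {util:.0%} full")
if util > 0.8:  # arbitrary alert threshold, not from the post
    print("growth approaching physical capacity - expand the pool")
```

Billing stays simple because it keys off the provisioned figure; the monitoring loop is what keeps growth from exceeding capacity.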

There are a few other points to consider:

  • Non-profit. For accounting purposes, billing may need to operate in a non-profit regime within an organisation.  In a fixed service tier model, the charging process may result in either a notional profit or loss situation at the end of the year.  As long as this notional gain or loss can be carried forward to the following year, then there should be no issue ensuring billing operates on a non-profit basis.  Hopefully a surplus will result in slightly lower bills the following year.
  • Static Deployments. Some organisations choose to deploy arrays fully populated.  This may be done to reduce the number of outages on equipment, reduce activity in the data centre or simply cope better with growth.  Dynamic tiering will present a challenge for these organisations, as less benefit will be realised from static configurations that have data added to them over time.  It will be very difficult to accurately predict the right technology mix ahead of time.
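The non-profit carry-forward point above amounts to simple arithmetic: fold any notional surplus or deficit into the following year's rates so billing nets to cost over time.  A toy illustration (the policy and numbers are assumed, not from the post):

```python
# Toy carry-forward: spread last year's notional surplus (+) or deficit (-)
# across next year's forecast demand to adjust the tier rate.

def adjusted_rate(base_rate, carried_surplus, forecast_tb):
    """Next year's £/TB rate after carrying forward this year's result."""
    return base_rate - carried_surplus / forecast_tb

# £1,000/TB base rate, £50,000 surplus carried forward, 500TB forecast demand
print(adjusted_rate(base_rate=1000.0, carried_surplus=50_000, forecast_tb=500))
```

A surplus lowers next year's rate (here to £900/TB), matching the hope expressed above that users see slightly lower bills the following year.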


Dynamic tiering presents new challenges in billing for resources.  It further emphasises the need to separate service definitions from technology, and to deploy technology to meet those requirements.  Done correctly, dynamic tiering can reduce costs by tuning more finely the mix of storage tiers required to deliver a storage service.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.

  • Travis

    Very useful summary. Would be easier to understand & apply if it included more use cases.

    • http://architecting.it Chris M Evans

      Travis, agreed. I am following up with vendors mentioned to get some more concrete examples. I’ve seen a few being mentioned, but some additional detail, I’m sure, would be useful.

