NS4600 SS#9 – RAID Rebuild

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • http://www.oortcloudcomputing.com/ Tim Wessels

    Well, the future of object storage data protection lies with erasure coding data. It is true that dispersing erasure coded data over multiple data centers in a region is problematic from a data protection and performance perspective. Erasure coding data in one data center and then replicating the erasure coded data to another data center is a better solution and one that Cloudian has implemented.

    Another approach is based on hierarchical erasure coding, which safely disperses erasure coded data over multiple data centers, but I’m not aware of anyone who actually does that in production today. Mostly I’ve read about hierarchical erasure coding in academic papers. Maybe you can let us know if any OBS vendor has actually implemented this in practice. I think IBM Cleversafe might have this capability, as they only erasure code data and they do support multiple data centers. Cleversafe also amassed hundreds of U.S. patents prior to being acquired by IBM in November 2015, so their patent filings may indicate whether they have implemented this capability.
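
The single-site erasure coding plus cross-site replication layout described in the comment above can be sketched with a toy example. The code below is purely illustrative and rests on its own simplifying assumptions (four data shards plus one XOR parity shard, i.e. a single-parity code); it is not how Cloudian or any other vendor implements erasure coding, and production systems use Reed-Solomon or similar codes that tolerate more than one lost shard.

====
# Toy sketch: erasure code an object within one data center, then replicate
# the already-coded shard set to a second data center. Illustrative only;
# the 4+1 XOR layout is an assumption for this example, not a vendor design.

def split_into_shards(obj: bytes, k: int = 4) -> list[bytes]:
    """Split an object into k equal-length data shards (zero-padded)."""
    shard_len = -(-len(obj) // k)                      # ceiling division
    padded = obj.ljust(shard_len * k, b"\0")
    return [padded[i * shard_len:(i + 1) * shard_len] for i in range(k)]

def xor_parity(shards: list[bytes]) -> bytes:
    """Compute a single XOR parity shard over the data shards."""
    parity = bytearray(len(shards[0]))
    for shard in shards:
        for i, b in enumerate(shard):
            parity[i] ^= b
    return bytes(parity)

def rebuild_missing(shards: dict[int, bytes], parity: bytes, k: int) -> dict[int, bytes]:
    """Recover at most one missing data shard by XORing parity with the survivors."""
    missing = [i for i in range(k) if i not in shards]
    if len(missing) > 1:
        raise ValueError("a single-parity code can only survive one lost shard")
    if missing:
        recovered = bytearray(parity)
        for shard in shards.values():
            for i, b in enumerate(shard):
                recovered[i] ^= b
        shards[missing[0]] = bytes(recovered)
    return shards

if __name__ == "__main__":
    obj = b"object payload that we want to protect"
    k = 4
    data_shards = split_into_shards(obj, k)
    parity = xor_parity(data_shards)

    # Site A holds the coded set; "replication" to site B is a straight copy
    # of the already-coded shards, so site B never has to re-encode anything.
    site_a = {"data": dict(enumerate(data_shards)), "parity": parity}
    site_b = {"data": dict(site_a["data"]), "parity": site_a["parity"]}

    # Simulate losing shard 2 at site A and rebuilding it locally.
    del site_a["data"][2]
    rebuilt = rebuild_missing(site_a["data"], site_a["parity"], k)
    restored = b"".join(rebuilt[i] for i in range(k)).rstrip(b"\0")
    assert restored == obj
    print("shard 2 rebuilt locally; site B's copy was never needed")
====

The point of this layout is that rebuilds stay local to the data center that lost a shard, while the second site provides a whole-copy disaster-recovery target, which is the trade-off the comment contrasts with dispersing a single coded set across sites.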

  • Glen Olsen

    Chris, great information here, although I notice that for Caringo Swarm you state that “Up to three replicas can be specified”, whereas in fact Swarm allows up to 16 replicas.

    • http://architecting.it Chris M Evans

      Thanks Glen, I’ll make the amendment. I did quite a bit of research to find out what the maximum was, but 16 never came up. Can you point me to any documentation that shows that figure? Thanks!

      • Glen Olsen

        Hi Chris, it’s documented in the Swarm Guide. You can access the guide on connect.caringo.com if you have (or request) an account; otherwise let me know and I can provide you with a PDF copy. From the Swarm Guide:

        ====
        policy.replicas

        min: 2   max: 16   default: 2   anchored

        The min, max, and default replicas allowed for objects in this cluster. SNMP name: policyReplicas
        ====

        • http://architecting.it Chris M Evans

          Just requested an account, thanks. I’ll let you know if I need a PDF (hopefully the account will be approved and I won’t).
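
As a footnote to the Swarm Guide excerpt quoted above: a min/max/default replicas policy of that shape (min: 2, max: 16, default: 2) simply bounds the replica count applied to each object in the cluster. The snippet below is a generic illustration of those semantics only; the ReplicaPolicy class and its field names are invented for this example and are not Caringo’s implementation or API.

====
# Generic illustration of min/max/default replica-policy semantics.
# Assumption: the names and behaviour here are hypothetical, not Swarm's code.

from dataclasses import dataclass

@dataclass
class ReplicaPolicy:
    min_replicas: int = 2
    max_replicas: int = 16
    default_replicas: int = 2

    def effective_replicas(self, requested: int | None = None) -> int:
        """Return the replica count actually applied to an object."""
        if requested is None:
            return self.default_replicas
        # Clamp the request into the cluster's allowed range.
        return max(self.min_replicas, min(requested, self.max_replicas))

policy = ReplicaPolicy()
print(policy.effective_replicas())      # 2  (default when nothing is requested)
print(policy.effective_replicas(5))     # 5  (within bounds)
print(policy.effective_replicas(40))    # 16 (clamped to the cluster maximum)
====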