Why The Software Defined Data Centre May Be Years Away

This week the news has been dominated by VMware and VMworld 2013.  The major theme has been “Software Defined”, in particular the move to a Software Defined Data Centre (SDDC).  The concept of SDDC is a noble one and certainly a target we should be striving towards.  However, in reality I believe it’s still some way off, for a number of reasons:

Financial

Today’s enterprise data centres have a lot of money invested in technology.  At the simplest level this means servers, storage and networking equipment.  Much of this may not be fully amortised or written down, and so needs to stay deployed for some time.  There’s a normal technology refresh cycle of three, four or five years, and in some cases longer where transformation is too expensive or impractical due to application re-writes or unsupported new hardware/software.  Remember, hardware isn’t just the obvious servers and storage.  Large enterprises have invested in things like specific fibre technology, which is one of the reasons the adoption of FCoE hasn’t been particularly successful.  Although companies like Cloud Velocity can help with migration into the cloud without application changes, there’s still a question to be answered about the cost benefit of doing so.  Is it even worth spending the time moving existing applications onto cloud-based infrastructure (either on or off premises)?
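
To make the cost-benefit question concrete, here is a rough back-of-the-envelope sketch in Python of the kind of comparison involved: seeing out the remaining refresh cycle on existing kit versus writing it off and migrating now.  The figures and the straight-line depreciation model are illustrative assumptions only.

```python
# Hypothetical comparison: see out the refresh cycle on existing kit,
# or write it off and migrate an application to cloud infrastructure now.
# All figures are illustrative assumptions, not real prices.

def remaining_book_value(purchase_cost, lifespan_years, years_elapsed):
    """Straight-line depreciation: value still to be written down."""
    remaining_years = max(lifespan_years - years_elapsed, 0)
    return purchase_cost * remaining_years / lifespan_years

def stay_vs_migrate(purchase_cost, lifespan_years, years_elapsed,
                    annual_run_cost, annual_cloud_cost, migration_cost):
    """Cost of each option over the remainder of the refresh cycle."""
    remaining_years = max(lifespan_years - years_elapsed, 0)
    stay = remaining_years * annual_run_cost
    migrate = (migration_cost
               + remaining_book_value(purchase_cost, lifespan_years, years_elapsed)
               + remaining_years * annual_cloud_cost)
    return stay, migrate

stay, migrate = stay_vs_migrate(
    purchase_cost=500_000, lifespan_years=5, years_elapsed=2,
    annual_run_cost=80_000, annual_cloud_cost=60_000, migration_cost=150_000)
print(f"Stay on existing kit for the rest of the cycle: {stay:,.0f}")
print(f"Write off and migrate now:                      {migrate:,.0f}")
```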

Technical

Many applications today simply aren’t suited to SDDC deployments, having been architected for a client-server world or, dare we even mention it, for the mainframe.  Larger organisations have dozens of legacy applications that have been retained for a variety of reasons.  I’ve worked in places where the original source code was lost, and programs and their operating systems were kept running because no-one knew how to re-write the application.  There are many other technical intricacies to think of too: server co-location (keeping the application and database layers close together for low latency), networking implications, the BC/DR process (maintaining application consistency during partial or complete infrastructure failures) and many more.

The technical picture must also reflect the attitude of vendors.  Some vendors have been slow to provide suitable licensing for, or support of, their platforms and applications on virtual infrastructure.  Even VMware’s own licensing has changed over time, creating some FUD around ongoing costs.

Social

If there’s one thing I’ve learned over the last quarter of a century in IT, it’s the inertia that change has to overcome.  When new technology comes along, 20% of people will never get it, 20% will get it immediately (the early adopters), and the remaining 60% will eventually convert over time.  People fear change because not everyone is good at continuous learning, or wants to be in a constantly evolving learning process.  The result is that the adoption of new hardware, new software and new development techniques requires training, recruitment, changes to process and, most of all, buy-in from senior management.

Risk

Buy-in brings us to the last point: risk.  “If it ain’t broke, don’t fix it” is a mantra that has worked well for many years.  Any system change introduces risk, which means careful risk measurement and then mitigation.  Many IT departments have difficulty measuring risk and so avoid change as a result.  For example, how do I measure the risk of deploying tier 1 applications on a virtual infrastructure like vSphere?  Ask that question, say, eight years ago and the answer would have been to avoid virtualisation altogether.  Ask it today, and it’s a no-brainer: all applications can go on vSphere, because the ecosystem (provisioning, backup, management, reporting) is there to support every type of application, plus there’s years of accumulated knowledge in the people and the community.
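
As a simple illustration of the kind of risk measurement involved, here is a minimal probability-times-impact scoring sketch in Python; the risks, probabilities and impact figures are hypothetical examples rather than a recommended register.

```python
# A simple probability x impact risk register for a change such as moving
# tier 1 applications onto a virtual infrastructure.  Entries are
# hypothetical examples only.

risks = [
    # (description, probability 0-1, impact in arbitrary cost units)
    ("Performance regression under peak load", 0.20, 50),
    ("Backup/restore process not yet proven",  0.10, 80),
    ("Staff unfamiliar with the new tooling",  0.40, 20),
]

def expected_exposure(register):
    """Sum of probability x impact across the register."""
    return sum(prob * impact for _, prob, impact in register)

print(f"Total expected exposure: {expected_exposure(risks):.1f} units")
for name, prob, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"  {prob * impact:5.1f}  {name}")
```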

The Architect’s View

So where does that leave us? The SDDC is a transformation of today’s IT landscape.  Applications that are simple to understand and encapsulate will be the first to move, in exactly the same way that those applications are the ones suited to cloud deployments.  IT departments will evolve their skills to understand risk and to change their processes, and as projects refresh their technology and budgets allow, we will see a gradual migration to SDDC.  CxOs should have a strategy for moving to SDDC that addresses all the issues involved in making that change, whilst being able to quantify the benefits it will bring.  Get the financial message right, and everything else will follow.

Comments are always welcome; please indicate if you work for a vendor as it’s only fair.  If you have any related links of interest, please feel free to add them as a comment for consideration.

Copyright (c) 2013 – Brookend Ltd, first published on http://architecting.it, do not reproduce without permission.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.

  • Dominic Wellington

    I agree with all your points. In point of fact, most of these apply to any New !Shiny! Thing in IT, and are not specific to the new mantra of Software Defined Anything. Enterprise IT in particular is extremely risk-averse, and for good reason. However, I would argue that the speed now required of IT is different. While even ten years ago, as with your example of the adoption of virtualization for key applications, it was possible for IT departments to Just Say No, any IT shop that tries that today finds its refusal treated as censorship and routed around by the users.

    This means we need some way to bridge the gap. Legacy IT is there, it works, it’s mostly paid off, it’s well understood and easy to hire for. So the thing to do is to look for technologies that can build on that base and provide at least some of the benefits of the Software Defined future, but on today’s infrastructure. Instead of waiting for NSX to be fully rolled out, look at network configuration management. Instead of trying to standardize on one hypervisor or public cloud, look for a CMP that can talk to all of them, and to physical infrastructure too while you’re at it – and not just on x86, either. Mainframes are still around today, so Unix boxes are going to be with us for a while yet.
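
    A minimal sketch of that CMP idea, with one provisioning call fronting several back ends; the provider classes and methods below are hypothetical placeholders rather than real vendor APIs.

    ```python
    # A sketch of the CMP/adapter idea: one provisioning call in front of
    # several back ends.  Provider classes and methods are hypothetical
    # placeholders, not real vendor APIs.

    from abc import ABC, abstractmethod

    class Provider(ABC):
        @abstractmethod
        def provision(self, name: str, cpus: int, memory_gb: int) -> str:
            """Create a workload on this back end and return an identifier."""

    class VSphereProvider(Provider):
        def provision(self, name, cpus, memory_gb):
            return f"vsphere-vm/{name}"          # would call the hypervisor API

    class PublicCloudProvider(Provider):
        def provision(self, name, cpus, memory_gb):
            return f"cloud-instance/{name}"      # would call a cloud API

    class PhysicalProvider(Provider):
        def provision(self, name, cpus, memory_gb):
            return f"bare-metal/{name}"          # would drive bare-metal tooling

    def provision_workload(provider: Provider, name: str, cpus=2, memory_gb=8):
        """The caller never needs to know which back end is in play."""
        return provider.provision(name, cpus, memory_gb)

    for backend in (VSphereProvider(), PublicCloudProvider(), PhysicalProvider()):
        print(provision_workload(backend, "app-db-01"))
    ```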

    We hear a lot about ITIL not applying in the cloud or being a road block, but that’s when ITIL is enforced by cracking the whip on humans. People who’ve tried cloud without some sort of control (which, admittedly, doesn’t have to be ITIL) have burnt their fingers quite severely. If you already have ITIL expertise, why not try to get cloud under that umbrella of control?

    The key in all of these cases is automation, but that is where the cultural change is hardest. Many IT people associate automation with loss of job security, but I would argue that if that is a factor, you’re doing it wrong. There’s a famous ThinkGeek shirt that reads “Go away or I will replace you with a very small shell script”. This is meant to be mocking (l)users; if your own job can easily be replaced that way, you should hand in your geek card.

    • Chris M Evans

      Dominic

      It’s interesting that many of your points boil down to the social angle – process, going around IT, etc. Perhaps the human aspect is more prevalent than we think and that should be given more consideration, even to the extent of being the first thing on any SDx journey?

      Cheers
      Chris

      • Dominic Wellington

        EXACTLY. Treating cloud as a purely IT project fails. Cloud projects succeed inasmuch as users adopt them. We have already seen all too many cloud projects fail despite being technically successful.

        Conversely, you unlock the value of cloud if you can take IT people out of the delivery loop. This breaks the tension between IT as the “Department of No” and users trying to get around that block, because IT is no longer the bottleneck and individual user requests no longer need to clear that high bar of justifying high-touch IT support for the duration. The IT team can focus on value-added tasks (and of course the never-ending maintenance) and users can just get on with their own jobs.

        Technology is needed to enable this model to work, but you start from the goal, not from the technology available.
