HP Discover Review: The Machine

This is one of a series of posts covering technology announced or discussed at HP Discover 2014 in Barcelona, Spain.

Disclaimer: I was personally invited to attend HP Discover, with HP covering my travel and accommodation costs.  However, I was not compensated for my time.  I am not required to blog on any content; blog posts are not edited or reviewed by HP before publication.

One of the regular features of HP Discover over recent years has been the blogger “Coffee Talks”, informal sessions with various teams from within HP, covering Big Data, Storage, Networking and so on.  There is even a Coffee Talk on printers that was apparently quite enlightening, although I didn’t attend that one.  Each year (or twice yearly for those who make it), we have a presentation from HP Labs, the section of the company focused on new innovations.  One new technology emerging from the Labs is something HP refer to as “The Machine”.  Although the name may conjure up an image of something more sinister akin to Skynet, The Machine is in fact HP’s answer to rack-scale computing, which Intel calls Rack Scale Architecture (RSA).

Three Components

The three major components of The Machine are summarised by HP’s Martin Fink in a video presentation at HP Discover in Las Vegas earlier this year.  These are compute (electrons), communicate (photons) and store (ions).  Translating this into actual technology means greater use of Moonshot-style processors for compute, photonics (light-based communication) for the data and communications layer, and memristors for the storage of data.  These three components represent the disaggregation of today’s computer architecture designs, which place compute close to system memory and spend a lot of time moving data around (in the form of electrons) between the processor, DRAM and permanent storage.

Components of The Machine

Hardware Paradigm Shift

Why do we even need to change architectures?  Well, HP will tell us that today’s computing is inefficient, a sentiment we can all agree with; processors, memory, hard drives and other components are not particularly efficient, and great efforts have to be made to cool high-performance components.  Water cooling, a standard design in the mid-1980s, is making a comeback because air flow can’t move heat away fast enough.  As we scale up to ever greater computing demands, there is a concern that globally we won’t be able to sustain the power needed to drive our future computing infrastructure.  So something needs to change.  Compute needs to become more efficient.

Moonshot Benefits

The Moonshot platform is one way HP is moving towards more efficient computing.  HP is claiming significant savings from Moonshot “special purpose cores”: entire systems on a daughterboard-style server cartridge, targeted at specific workloads (for example, media transcoding).  I didn’t see the benefit of Moonshot at the outset, and I still think more customised software is needed to make the hardware architecture truly effective; more on that subject in a moment.

Universal Memory

The arrival of NVDIMM technology over the last 18-24 months has demonstrated the need for, and benefits of, moving compute and data closer together.  In fact the aim is to keep the working set (the active data) alongside compute, and with technologies like Hadoop the focus is on moving the compute to the data rather than the other way around.  The reason for this is quite simple: latency.  Moving data between system memory and primary storage takes time that could otherwise be spent on processing.
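To put that latency argument into rough numbers, here is a small back-of-envelope sketch in Python.  The link speed, working set size and job size are assumed figures for a conventional cluster, not numbers from HP; it simply illustrates why shipping a small job to a large dataset beats shipping the dataset to the job.

```python
# Back-of-envelope illustration: moving compute to data versus data to compute.
# All figures below are assumptions for a conventional cluster, not HP numbers.

GBIT_PER_SEC = 1e9 / 8            # bytes per second in one gigabit per second

def transfer_time(payload_bytes, link_bytes_per_sec):
    """Seconds to push a payload over a link, ignoring protocol overheads."""
    return payload_bytes / link_bytes_per_sec

working_set = 100e9               # 100 GB of active data to be analysed
job_code    = 50e6                # ~50 MB of application code and dependencies
link        = 10 * GBIT_PER_SEC   # a 10 Gbit/s data-centre link

print(f"Move the data to the compute: {transfer_time(working_set, link):7.1f} s")
print(f"Move the compute to the data: {transfer_time(job_code, link):7.3f} s")
```

With these assumed figures the data transfer alone takes around 80 seconds before any processing starts, while the code ships in a few hundredths of a second; that trade-off is exactly what Hadoop-style placement exploits.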

Universal Memory

HP are promoting a concept called Universal Memory, which collapses today’s memory hierarchy.  Instead, all data would be accessible through a new architecture based on photonics, using light as the transport rather than electrons.  HP claim this technology would deliver an energy reduction of over 1,000-fold and connection speeds of around 6 terabits per second.  In discussion at HP Discover, Martin Fink described a 160PB rack of storage that would be byte-addressable in under 250 nanoseconds.  If this level of performance can be achieved, then removing the electrical constraints that force processors and memory to sit close together would open the door to physical architectures very different from those in use today.
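As a quick sanity check on those figures, the sketch below converts the quoted 6 terabits per second into the time needed to move a terabyte, and compares it with an assumed ~20 GB/s for a single conventional DDR4 memory channel (my comparison figure, not HP’s).

```python
# Sanity-check arithmetic on the quoted photonic link speed.
# 6 Tbit/s is the figure quoted by HP; the DDR4 channel rate (~20 GB/s) is an
# assumed comparison point, included purely for scale.

photonic_link = 6e12 / 8          # 6 Tbit/s expressed in bytes per second
ddr4_channel  = 20e9              # ~20 GB/s for one DDR4 channel (assumption)

one_terabyte = 1e12               # bytes

print(f"1 TB over the photonic fabric: {one_terabyte / photonic_link:5.2f} s")
print(f"1 TB over one DDR4 channel:    {one_terabyte / ddr4_channel:5.2f} s")
```

On those numbers a single photonic connection would move a terabyte in a little over a second, which is a useful way to picture what “collapsing the memory hierarchy” could mean in practice.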

Software Paradigm Shift

Of course if we’re radically changing the hardware architecture, software needs to keep up too.  We’ve spent the last 15 years creating encapsulated virtual objects representing the hardware we used to deploy.  The underlying concept of server-based computing didn’t change, we just virtualised it.  With The Machine, HP are already talking about re-designing the operating system and making all of the software they develop Open Source.  Many features of our current Operating System (like address spaces, memory management, I/O subsystems) are focused on the hardware and in many cases designed to overcome their limitations.  As some of those limitations fall away, a new approach to Operating System design will be needed.  Assuming HP can make the hardware work, the software components will be one of the most interesting pieces to watch.  Expect more on this in 2015.
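To give a flavour of what that shift might look like to an application, the sketch below maps a file into memory and updates it in place, standing in for a byte-addressable persistent store: no read()/write() I/O path, just loads and stores.  The device path and data layout are purely hypothetical, and a real memristor-backed system would also need proper flush and fence semantics.

```python
# A minimal sketch of "storage as memory": map a (stand-in) persistent region
# into the address space and update it in place, bypassing the traditional
# block I/O path.  The path and layout here are hypothetical illustrations.

import mmap
import os
import struct

PMEM_PATH = "/tmp/pmem_demo"   # stand-in for a byte-addressable persistent device
SIZE = 4096

# Create a zero-filled backing file on first run; a real device would already exist.
if not os.path.exists(PMEM_PATH):
    with open(PMEM_PATH, "wb") as f:
        f.truncate(SIZE)

fd = os.open(PMEM_PATH, os.O_RDWR)
region = mmap.mmap(fd, SIZE)

# Treat the persistent region as ordinary memory: read and bump a counter in place.
counter = struct.unpack_from("<Q", region, 0)[0]
struct.pack_into("<Q", region, 0, counter + 1)

region.flush()                 # on real persistent memory this is a cache flush/fence
region.close()
os.close(fd)
```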

The Architect’s View

The Machine represents HP’s take on the idea of Rack Scale Computing.  Unlike the proposals I’ve seen from Intel, HP have some additional potential benefits if and when they get memristors working efficiently.  So what does HP then do with this technology?  Well, in the background to all the discussions on new hardware is HP’s public cloud, Helion.  Today that is built from standard components; in the future, however, HP and all the other public cloud providers will need ways to make applications run more efficiently (and so more cheaply) in their data centres.  Getting the hardware right is the key building block onto which the software can be layered.  As and when that happens, IaaS (Infrastructure as a Service) will start to decline as we move towards PaaS and eliminate the artificial construct of the virtual machine.

Related Links

Comments are always welcome; please read our Comments Policy first.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2009-2014 – Chris M Evans, first published on http://blog.architecting.it, do not reproduce without permission.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.