
HPE Switches On 'The Machine' To Change Data Center Architecture


The big news at HPE Discover London 2016 was not that HPE has no plans to productize its architectural prototype of “The Machine”; productizing the prototype was never the plan. The news was that HPE powered on The Machine just two weeks prior to HPE Discover London. HPE’s research and development (R&D) testbed for The Machine architecture is now operational.

Why is The Machine important?

Think of The Machine as a working lab for re-inventing data center hardware and software architecture. The Machine is designed to morph over time as new data center technology is developed. The potential performance upside of the new memory and network technologies that this initial R&D testbed intends to address is not yet well characterized. New network interconnect tech, new memory tech, new storage tech, new compute tech: all of this is modularized in The Machine. The Machine’s fabric-based interconnect architecture was designed from the start to enable experimentation with the other technology vectors. HPE is being humble in admitting that it does not have all the answers to future architectural decisions.

HPE's working compute and memory sled for The Machine [photo: TIRIAS Research]

HPE is leading a shift in computing economics with The Machine. Compute resources used to be the most expensive resource in a system, so architectures were designed to move data from storage to the compute resource. We call this “compute centric” computing. Non-volatile memory and photonics-based networking will soon be economically feasible for mainstream data center use. New “data centric” architectures will implement very large memory pools that are more expensive than the processors attached to those pools. Data centric architecture must bring compute into the storage or memory matrix to minimize data movement.
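To make the distinction concrete, here is a minimal, hypothetical Python sketch (the names and the in-process "pool" are illustrative only, not HPE's software): a compute-centric approach copies data out of the memory pool before operating on it, while a data-centric approach pushes a small function to run next to the pool and moves only the answer.

```python
# Hypothetical sketch of compute-centric vs. data-centric processing.
# A plain Python list stands in for a large, fabric-attached memory pool.

memory_pool = list(range(1_000_000))    # pretend this is a huge shared pool

def compute_centric_sum(pool):
    """Compute-centric: copy the data to the processor, then operate on it."""
    local_copy = list(pool)             # costly data movement across the fabric
    return sum(local_copy)              # compute only starts after the copy

def data_centric_apply(pool, fn):
    """Data-centric: ship a small function to the data, move only the result."""
    return fn(pool)                     # compute runs "near" the memory pool

print(compute_centric_sum(memory_pool))        # moves ~1M values, then sums
print(data_centric_apply(memory_pool, sum))    # moves one number back
```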

HPE has a deep R&D collaboration with WD to develop next generation memory technologies and architecture. However, The Machine is flexible. HPE said that if Intel’s Optane memory beats HPE/WD’s new memory into mass production, and if Intel makes Optane available using standard DDR4 memory interfaces, then HPE will work with Optane in this generation of The Machine’s hardware. Optane is based on Intel’s 3D XPoint non-volatile memory technology.

HPE is also investing in its own multimode fiber optic networking R&D. HPE demonstrated vertical-cavity surface-emitting laser (VCSEL) based multimode links carrying 100Gbps over a single fiber using four modes of 25Gbps each; these links are a critical component of The Machine’s R&D testbed. HPE also demonstrated test chips for silicon photonics (SiPh)-based versions intended to further reduce the cost of optical networking in the future.

For comparison, Intel finally has SiPh-based optical cables on the market, but they use only single-mode fibers at 25Gbps. Intel must aggregate the bandwidth of four fibers to achieve 100Gbps bandwidth in each direction. Where Intel uses eight fibers for 100Gbps bandwidth in both directions, HPE’s multimode fiber technology will use only two fibers to achieve the same bandwidth.
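A quick back-of-the-envelope calculation, using only the per-fiber rates quoted above, shows where the eight-versus-two fiber counts come from:

```python
# Fiber-count comparison derived from the per-fiber rates quoted above.

TARGET_GBPS = 100          # desired bandwidth per direction
DIRECTIONS = 2             # full duplex: one path each way

# Intel single-mode SiPh: one 25 Gbps lane per fiber.
intel_gbps_per_fiber = 25 * 1
intel_fibers = DIRECTIONS * (TARGET_GBPS // intel_gbps_per_fiber)   # 2 * 4 = 8

# HPE multimode VCSEL: four 25 Gbps modes multiplexed on one fiber.
hpe_gbps_per_fiber = 25 * 4
hpe_fibers = DIRECTIONS * (TARGET_GBPS // hpe_gbps_per_fiber)       # 2 * 1 = 2

print(f"Intel single-mode: {intel_fibers} fibers for {TARGET_GBPS} Gbps each way")
print(f"HPE multimode:     {hpe_fibers} fibers for {TARGET_GBPS} Gbps each way")
```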

The Machine is an R&D testbed to improve the performance of photonics-based networking using high-bandwidth switching throughout a rack, and to potentially find different balance points of storage-class memory and compute for different workloads. HPE researchers can adjust The Machine’s mix of new components and new architecture ideas at the same time.

You can read a deep dive about the technologies HPE has designed into its now-operational R&D testbed for The Machine here.

I read HPE’s intent with The Machine as a continuation of HP Labs’ long history of developing new technologies and architectural innovations, even if it chose not to bring some of those technologies and innovations to market. With Intel backing off its Rack Scale Architecture (RSA) hardware prescriptions in favor of open sourcing Rack Scale Design (RSD) as a software rack scale management initiative, HPE is well positioned as an appealing technology partner for x86 and non-x86 processor architectures. HPE is already using an ARM-based compute node in The Machine, and there would be no technology barriers to implementing OpenPOWER compute nodes or GPU-accelerated compute nodes in the future. AMD and IBM are both Gen-Z consortium members, as is ARM.

HPE is investing in its own data center architecture R&D; it is proud of that work and wants to be recognized as a top architectural partner for advanced data center tech.

-- The author and members of the TIRIAS Research staff do not hold equity positions in any of the companies mentioned. TIRIAS Research tracks and consults for companies throughout the electronics ecosystem from semiconductors to systems and sensors to the cloud.
