IBM Signs $325 Million Supercomputing Deal With Dept. Of Energy


IBM may have its growth issues, but Big Blue can still move big iron. Today the Armonk, N.Y.-based computing giant announced a $325 million deal to supply the U.S. Department of Energy with a new kind of supercomputer that will move data far faster and more efficiently than competing hardware systems. The machines, named Sierra and Summit, will go online in 2017 and 2018 at the Lawrence Livermore and Oak Ridge National Laboratories, already home to some of the world’s fastest supercomputers. The DoE also announced a separate $100 million program today to continue developing even faster machines.

IBM worked with graphics chipmaker Nvidia and interconnect manufacturer Mellanox Technologies to connect IBM's POWER8 central processing units (CPUs) far more directly to Nvidia's nimble graphics processing units (GPUs), moving data between them in both directions at 75 gigabytes per second, the equivalent of 100 billion Facebook photos per second. Conventional supercomputers use a connection protocol that moves data at 16 gigabytes per second and requires a lot of processing and exporting of data between CPU and GPU. The new interconnect technology, called NVLink, does away with all that extra processing.
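
To put those bandwidth figures in perspective, here is a rough, illustrative calculation of how long it would take to push a hypothetical working set across each link. The 16 GB/s and 75 GB/s numbers are the ones quoted above; the 1-terabyte dataset size is made up for the example.

```python
# Illustrative only: rough transfer-time comparison using the bandwidth
# figures quoted in the article (16 GB/s for a conventional CPU-GPU link,
# 75 GB/s for NVLink). The 1 TB dataset size is a hypothetical example.

DATASET_GB = 1_000          # hypothetical 1 TB working set
CONVENTIONAL_GBPS = 16      # conventional interconnect, per the article
NVLINK_GBPS = 75            # NVLink, per the article

for name, bandwidth in [("conventional", CONVENTIONAL_GBPS), ("NVLink", NVLINK_GBPS)]:
    seconds = DATASET_GB / bandwidth
    print(f"{name:>12}: {seconds:6.1f} s to move {DATASET_GB} GB one way")

# conventional:   62.5 s
#       NVLink:   13.3 s  -- roughly 4.7x faster for the same data
```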

But in a world drowning in Big Data, the real play here is avoiding having to move that much data in the first place. Sophisticated data-crunchers such as oil and gas companies and nuclear research labs sometimes work on datasets so large, such as models of the next two years of weather, that getting an answer can take more than a year. The burden of shuttling that much data back and forth between memory, storage and the CPUs can consume up to 90% of a system's computing resources and waste a ton of energy.
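
For a sense of why that 90% figure matters, here is a back-of-the-envelope sketch, with illustrative numbers only (not from IBM or the DoE), of how a job's overall running time responds as data movement is trimmed:

```python
# Back-of-the-envelope sketch: if data movement eats 90% of a job's runtime
# (the upper bound cited above), the overall speedup depends almost entirely
# on how much of that movement you eliminate. Figures are illustrative.

MOVEMENT_SHARE = 0.90   # fraction of runtime spent moving data (article's upper bound)
COMPUTE_SHARE = 1.0 - MOVEMENT_SHARE

for reduction in (0.25, 0.50, 0.75):
    new_runtime = COMPUTE_SHARE + MOVEMENT_SHARE * (1.0 - reduction)
    print(f"cut movement by {reduction:.0%}: job finishes in "
          f"{new_runtime * 100:.1f}% of the original time "
          f"({1 / new_runtime:.1f}x speedup)")

# cut movement by 25%: job finishes in 77.5% of the original time (1.3x speedup)
# cut movement by 50%: job finishes in 55.0% of the original time (1.8x speedup)
# cut movement by 75%: job finishes in 32.5% of the original time (3.1x speedup)
```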

IBM engineers came up with something they're calling a "data-centric" approach, in which processors are physically deployed where the data resides, either in attached storage or in various levels of server memory. Operations like sorting, compression and Java acceleration can now be done in parallel all across the data center, sending back answers or solutions rather than huge reams of raw data. “Think of it as paring the flow, diverting rivers into streams or trickles,” says IBM’s Dave Turek. He estimates that the new Power Systems will reduce data movement by 50%.
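
A minimal sketch of the general compute-near-data pattern, with hypothetical node names and data, might look like the following: each storage node computes a tiny partial summary locally, and only those few numbers cross the network rather than every raw record. This illustrates the idea, not IBM's actual software stack.

```python
# Minimal sketch of the "data-centric" idea described above: run the reduction
# where the data lives and ship back only the small answer, instead of pulling
# raw records across the network to a central node. Node names, data and
# sizes are hypothetical; this is not IBM's actual software.

from statistics import mean

# Pretend each "storage node" holds a shard of sensor readings locally.
shards = {
    "node-a": [3.1, 4.7, 2.9, 5.0],
    "node-b": [4.2, 3.8, 4.9],
    "node-c": [2.5, 3.3, 4.1, 3.9, 4.4],
}

def local_summary(readings):
    """Runs on the node that owns the shard: returns (count, sum) only."""
    return len(readings), sum(readings)

# Data-centric: each node sends back two numbers instead of its whole shard.
partials = [local_summary(r) for r in shards.values()]
total_count = sum(c for c, _ in partials)
total_sum = sum(s for _, s in partials)
print(f"global mean from partial results: {total_sum / total_count:.2f}")

# Move-the-data alternative: every raw reading crosses the wire first.
all_readings = [x for r in shards.values() for x in r]
print(f"global mean from raw data:        {mean(all_readings):.2f}")
```

Both paths give the same answer; the difference is that the first sends back a handful of numbers per node while the second ships every record to one place before any work can start.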

The $325 million deal is a validation of sorts of IBM's recent strategic shift to open up its POWER computing architecture for license and use by other companies looking for alternatives to an Intel-controlled world. In IBM's case it was more about making a virtue out of a necessity, because POWER was falling behind in the computing arms race. Chip licensing has worked very well for ARM Holdings in the mobile world: ARM's chip architecture now has roughly 90% share of mobile microprocessors, with a highly profitable stream of license fees coming from each of the billions of handsets sold every year. IBM decided it was better to widen the pool of POWER users than to control a shrinking pie. IBM has already gotten out of competing with Intel in lower-end servers, selling that business to Lenovo for $2.1 billion in September, and has been unable to achieve any growth in its own higher-end POWER server business. It also unloaded its POWER chip manufacturing business to GlobalFoundries earlier this year and now simply buys its POWER chips from GF.