The Rebirth of Parallel I/O – Forbes Blog by John Webster

By John Webster, Thursday, December 10th, 2015

Categories: Analyst Blogs

Tags: blog, datacore, i/o, John Webster, parallel i/o

The transition from a dependence on rotating disk to solid-state storage is under way. And as the cost per unit of solid-state storage capacity inevitably decreases, enterprise IT is now getting the message that an investment in solid-state storage drives more revenue-generating transactions every business day.

However, improving the performance of the storage media itself, which is essentially what the current trend of replacing disk with flash does, addresses only one aspect of the I/O “stack” and of storage performance in general. Another approach streamlines the I/O path by eliminating unnecessary processing steps; the coming adoption of the NVMe standard is an example. A third, and perhaps more historically fundamental, approach is I/O parallelization, based on work done decades ago but made highly relevant now by the advance of multicore processors.

The commercial introduction of parallel processing technology in the 1980s by start-ups that included Thinking Machines, Sequent, Pyramid, Encore, MasPar and nCUBE advanced the notion of I/O parallelization. All were based on a simple computing principle: a workload can be executed faster when its computing tasks are spread across multiple CPUs (parallelized) and run simultaneously than when a single processor completes them sequentially. I/O parallelization removed a huge I/O bottleneck for these systems and kept them operating at optimum performance levels by feeding them multiple streams of data in parallel. However, their programming model was complex at best, and all of those companies had either failed or been acquired by 2001. I/O parallelization went into hibernation.
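To make that principle concrete, here is a minimal, generic sketch in Go. It is not drawn from any of the systems named above; the workload (a simple summation) and the chunking scheme are illustrative assumptions. The work is split into chunks, and each chunk runs on its own worker so the chunks execute simultaneously on separate cores rather than one after another.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// Toy illustration of the MPP-era principle: split a workload into
// independent chunks and run them simultaneously on separate cores,
// rather than sequentially on a single processor.
func main() {
	data := make([]int, 1_000_000)
	for i := range data {
		data[i] = i
	}

	workers := runtime.NumCPU()                // one chunk per available core
	chunk := (len(data) + workers - 1) / workers
	partial := make([]int64, workers)          // per-worker results, no shared state

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			start := w * chunk
			end := start + chunk
			if start > len(data) {
				start = len(data)
			}
			if end > len(data) {
				end = len(data)
			}
			var sum int64
			for _, v := range data[start:end] { // each worker sums only its own slice
				sum += int64(v)
			}
			partial[w] = sum
		}(w)
	}
	wg.Wait()

	var total int64
	for _, s := range partial {
		total += s
	}
	fmt.Println("parallel sum:", total)
}
```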

Fast forward to today. To overcome the fact that frequency scaling of a single processor can no longer deliver the significant performance gains predicted by Moore’s law, chip vendors such as AMD and Intel now offer multiple CPUs (cores) on a single chip. Parallelism in commercial, general-purpose computing has returned to keep Moore’s law essentially intact. This time, though, the programming model is easily accessible and the price is a tiny fraction of what the early MPP systems cost. But what about I/O for multicore processors? Can it be parallelized as well? And if so, what will be the impact on performance?

Parallel I/O

A company called DataCore Software, which evolved from its beginnings as Encore Computer, one of the acquired MPP start-ups, has now modernized this approach. DataCore recognizes that the shift from single-core to multicore processors creates an opportunity to improve storage performance by parallelizing I/O across the cores of multicore systems.
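By way of illustration only, and not as a description of how DataCore’s SANsymphony-V is implemented, the Go sketch below applies the same idea to I/O: one worker per core issues its own stream of read requests concurrently against a shared file (the path testdata.bin and the block counts are placeholder assumptions), instead of funneling every request through a single sequential I/O path. The point is simply that, on a multicore server, independent I/O streams can be serviced in parallel without waiting behind one another.

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"sync"
)

// Conceptual sketch only: give every core its own I/O worker so requests are
// serviced concurrently, rather than serializing all of them on one core.
func main() {
	const blockSize = 4096
	const blocksPerWorker = 256

	// "testdata.bin" is a placeholder path used purely for illustration.
	f, err := os.Open("testdata.bin")
	if err != nil {
		fmt.Fprintln(os.Stderr, "open:", err)
		return
	}
	defer f.Close()

	workers := runtime.NumCPU() // one I/O worker per core
	var wg sync.WaitGroup

	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			buf := make([]byte, blockSize)
			// Each worker reads its own disjoint range of blocks. ReadAt is
			// safe for concurrent use on a single *os.File.
			for b := 0; b < blocksPerWorker; b++ {
				offset := int64((w*blocksPerWorker + b) * blockSize)
				if _, err := f.ReadAt(buf, offset); err != nil {
					return // EOF or error: this worker is done
				}
			}
		}(w)
	}
	wg.Wait()
	fmt.Printf("issued reads from %d parallel workers\n", workers)
}
```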

On the surface, using I/O parallelization to take greater advantage of the processing power offered by today’s multicore servers may not seem to offer dramatic acceleration of storage performance. A set of SPC-1 benchmark results just released by DataCore indicates otherwise.

SPC-1 is a highly regarded standard for measuring the performance of general-purpose arrays. It is administered by the Storage Performance Council, a vendor-neutral standards body for the storage industry that audits and publishes benchmark results. DataCore has now published the results of SPC-1 benchmark runs using its SANsymphony-V software as the storage platform. But rather than running the test on a server with external storage attached via a SAN, DataCore chose to conduct the benchmark in a hyper-converged[1] environment, where server, networking, and storage are integrated into a single system.

Full results from this published, SPC-validated benchmark run can be found here and are summarized as follows:

SPC-1 IOPS™ – 459,290.87 (maximum I/O Request Throughput at 100% test load using RAID-1 mirroring)

SPC-1 Price-Performance – $0.08/SPC-1 IOPS

With these results, DataCore demonstrated the use of hyper-converged storage to set a new SPC-1 Price-Performance record, 3X better than the previous record held by an external array vendor. I believe the significance of this result cannot be overstated. The storage industry is currently fixated on delivering performance through solid-state devices, either in the form of hybrid arrays (flash plus disk) or all-flash arrays. DataCore is the first storage vendor to propose a solution that increases storage performance and overall processing efficiency simply by taking advantage of the fact that most if not all new servers are built on multicore processors, parallelizing I/O on a per-core basis to make far more efficient use of the available processing cycles.

[1] “Hyper-converged” refers to combining storage, compute, and networking components in the same server chassis. While frequently sold as turnkey appliances, hyper-converged solutions can also be assembled by the end user or an integrator from independent hardware and software components using the Open Storage Platform model.

Read the blog on Forbes.com here.
