April 14, 2022

Three Ways to Connect the Dots in a Decentralized Big Data World

There’s no shortage of data in this world. Neither is there a shortage of data-driven business plans. In fact, we are sitting on gluts of both. So why are companies still struggling to get the right data in front of the right people at the right time? One of the big challenges, sources say, is melding established data access and data management patterns with the new decentralized data paradigm. Here are three ways to do it.

1. Better Data Automation

That familiar urge to centralize data is falling by the wayside as the volumes of data continue to pile up. That represents a massive reversal of trends, according to Sean Knapp, the CEO and founder of Ascend.io.

“Five to 10 years ago, there was a very strong push to consolidate data, consolidate it into your lake, consolidate it into your warehouse,” Knapp said during yesterday’s Data Automation Summit, which continues today. “And we’re starting to see those trends change. We’re starting to see that organizations are embracing silos…embracing the fact that they cannot consolidate all of their data and there is no one platform at the data infrastructure layer to suit them all.”

While we’re moving away from data centralization, that doesn’t mean we can say goodbye to ETL. Ascend.io sells tools to automate the creation and management of data pipelines, which are proliferating at a furious clip at the moment, as data engineers seek to connect the various silos to enable data analysts and data scientists to get their data work done.

Knapp wants to improve the state of that art, and help automate the low-level muck that many data engineers are living with on a daily basis.
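Ascend.io’s actual interfaces aren’t reproduced here, but the pattern such tools automate can be sketched in a few lines of Python. Everything below is a hypothetical stand-in: transforms declared alongside their dependencies, with a tiny runner in place of the scheduling, incremental processing, and retry logic a real data automation platform would provide.

```python
# Hypothetical sketch of a declarative data pipeline -- not Ascend.io's API.
# The framework, not the engineer, would handle scheduling, incremental
# runs, and retries for each registered step.
import pandas as pd

PIPELINE = {}  # registry mapping step name -> (function, upstream deps)

def step(name, depends_on=()):
    """Register a transform and the steps it depends on."""
    def register(fn):
        PIPELINE[name] = (fn, depends_on)
        return fn
    return register

@step("raw_orders")
def raw_orders():
    # In practice this would read from a silo: a warehouse, lake, or API.
    return pd.read_csv("orders.csv")  # hypothetical source file

@step("daily_revenue", depends_on=("raw_orders",))
def daily_revenue(raw_orders):
    raw_orders["date"] = pd.to_datetime(raw_orders["ts"]).dt.date
    return raw_orders.groupby("date")["amount"].sum().reset_index()

def run(name, cache=None):
    """Resolve dependencies depth-first, running each step exactly once."""
    cache = {} if cache is None else cache
    if name not in cache:
        fn, deps = PIPELINE[name]
        cache[name] = fn(*(run(d, cache) for d in deps))
    return cache[name]

print(run("daily_revenue").head())
```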

Automation of ETL/ELT pipelines is one way to tackle the growth of big decentralized data (Agor2012/Shutterstock)

“The world of data has just grown too fast. It is like swimming upstream as we watched companies compete over the years to try and pull all of their data into one spot,” Knapp said. “There will always be multiple data technologies.”

While many companies want to use data in profitable ways, they’re having a hard time turning that desire into reality. Gerrit Kazmaier, the vice president and general manager for database, data analytics, and Looker at Google Cloud, cited a recent study that found 68% of companies say they’re not getting “lasting value” out of their data investments.

“That’s profoundly interesting,” Kazmaier said during last week’s rollout of BigLake, the company’s first formal data lakehouse offering, which is slated to go up against lakehouses from Databricks and others.

“Everyone recognizes that they’re going to compete with data,” Kazmaier said. “And on the other side, we recognize that only a few companies are actually successful with it. So the question is, what is getting in the way of these companies to transform?”

2. Centralizing on the Lakehouse

The answer, Kazmaier said, lies in three paradigm shifts that are currently taking place. First, the data is growing: the generation and storage of data continues to explode, and companies are grappling with storing a variety of data types and formats in multiple locations.

Second, the applications are expanding. Companies want to process this data with all sorts of engines and frameworks, and deliver a variety of data products and rich data experiences from it. Lastly, the users are everywhere. Data touches many personas today, including employees, customers, and partners, and the number of use cases for a given piece of data is growing.

The lakehouse concept melds data warehouses and data lakes into a unified whole (ramcreations/Shutterstock)

Even a company as large and technologically advanced as Google seems to realize that it cannot be the unifying force to bring all of its customers’ data back together. With BigLake, it’s melding the previously separate universes of the tried-and-true data warehouse, where structured data reigns supreme, and the looser-but-more-scalable data lake, where semi-structured data is stored.

In a way, the lakehouse architecture seeks to split the difference between the older approach (data warehouses) and the newer approach (data lakes), delivering a semblance of data unification and a measure of salvation from all those pesky data pipelines that keep popping up.
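BigLake’s own interfaces aren’t shown here, but the core lakehouse idea (warehouse-style SQL running directly over open file formats sitting in a lake) can be sketched with DuckDB as a stand-in engine. The file paths, table names, and columns below are all hypothetical.

```python
# Minimal sketch of the lakehouse idea: SQL directly over open-format
# files in a lake, with no load step into a proprietary warehouse.
# DuckDB stands in for the engine; paths and columns are hypothetical.
import duckdb

con = duckdb.connect()

# The "lake" side: semi-structured events landed as Parquet by an
# upstream job, queried in place.
con.sql("""
    CREATE VIEW events AS
    SELECT * FROM read_parquet('lake/events/*.parquet')
""")

# The "warehouse" side: a curated dimension table, managed by the
# same engine, joinable against the raw lake files.
con.sql("""
    CREATE TABLE dim_users AS
    SELECT * FROM read_parquet('warehouse/dim_users.parquet')
""")

# One SQL surface over both worlds.
report = con.sql("""
    SELECT u.region, e.event_type, COUNT(*) AS events
    FROM events e
    JOIN dim_users u USING (user_id)
    GROUP BY u.region, e.event_type
""")
print(report)
```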

While Google Cloud is arguably the most open of the big three cloud providers–indeed, Google Cloud says BigLake extends into the data lakes of Microsoft Azure and Amazon Web Services, enabling that data to be accessed through BigLake–not everybody is convinced that a cloud-centric approach ultimately will solve customers’ modern data problems.

3. Global Data Environment

Data automation and lakehouses undoubtedly will help some organizations solve their data problems. But there are other big data challenges that won’t be adequately addressed by either of those technologies.

Molly Presley, the senior vice president of marketing for Hammerspace, says some customers with large amounts of unstructured data–such as what is found in science, media, and advertising–may be best served by adopting what she terms a “global data environment.”

“It’s the concept of ‘I want to be able to make all my data globally available, no matter which storage silo or which storage system or which cloud region it’s sitting in,’” she says.

Being able to scale unstructured data storage broadly in a single namespace with full high availability is important, Presley said, but distributed file systems and object stores can already do that. What is really moving the needle now is simplifying how users access and manage data, no matter where it sits or what storage environment or protocol it uses, while meeting whatever performance requirements the customer has.

Hammerspace offers what it calls a global data environment, but it’s mostly for unstructured data (Blue-Planet-Studio/Shutterstock)

“Other environments are saying, ‘Okay, I have NetApp, I have DDN, and I have some object store, and I want to aggregate all of that data and make it available to my remote users who don’t have connectivity to the data centers, don’t have connectivity to the clusters, don’t know how to interact with all those different technologies,’” Presley tells Datanami.

Hammerspace provides that global data environment, functioning as a layer that sits atop other data stores, smoothing over their differences while providing common management and access for unstructured data. The key to Hammerspace’s technology, Presley says, is the metadata.

“So what we’ll do is assimilate the metadata…and now those remote users get local high-performance data access,” she says. “And they only have to interact with one thing, so IT doesn’t have to figure out how to make that user connected into all those different technologies.”
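Presley’s description suggests a simple mental model: assimilate each silo’s metadata into one global index, then resolve a logical path to wherever the bytes physically live. The sketch below illustrates only that idea; it is not Hammerspace’s implementation, and every name in it is hypothetical.

```python
# Hypothetical sketch of a metadata-driven global namespace -- an
# illustration of the concept, not Hammerspace's implementation.
from dataclasses import dataclass

@dataclass
class FileMeta:
    logical_path: str   # what users see, e.g. /projects/vfx/shot01.exr
    backend: str        # which silo holds the bytes: "netapp", "s3", ...
    physical_path: str  # location inside that silo

class GlobalNamespace:
    def __init__(self):
        self._index = {}  # logical path -> FileMeta

    def assimilate(self, backend, listing):
        """Ingest a silo's metadata (physical path -> logical path)."""
        for physical, logical in listing.items():
            self._index[logical] = FileMeta(logical, backend, physical)

    def resolve(self, logical_path):
        """One lookup tells the data layer where to fetch from."""
        return self._index[logical_path]

ns = GlobalNamespace()
ns.assimilate("netapp", {"/vol1/shot01.exr": "/projects/vfx/shot01.exr"})
ns.assimilate("s3", {"bucket/renders/shot02.exr": "/projects/vfx/shot02.exr"})

meta = ns.resolve("/projects/vfx/shot02.exr")
print(f"fetch {meta.logical_path} from {meta.backend}:{meta.physical_path}")
```

Users see only the logical namespace; which silo actually serves the bytes becomes a routing decision made behind the metadata layer.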

While the cloud vendors are solving big data storage and processing challenges with infinitely scalable object storage systems that are completely separated from compute–not to mention the data warehouses and lakehouses that offer a cornucopia of compute options–they still lack visibility into the legacy storage repositories that organizations are still running on-prem, Presley says. That’s the space Hammerspace is attacking with its global data environment.

It’s also why Microsoft is partnering with Hammerspace to help its Azure customers get access to the large amounts of unstructured data still residing in on-prem data centers. Microsoft realizes that not all data and workloads are moving to the cloud, and it tapped Hammerspace to bring that data into the cloud fold, Presley says.

“What has changed is people are remote and data is distributed or decentralized–in a cloud data center, five data centers, whatever it is–and the technologies that people are trying to use were designed for a single environment,” she says. “They’re trying to say, ‘Okay, I have all these technologies that were designed over the last 10 or 20 years for a single data center that were adapted a bit to use the cloud but weren’t adapted for multi-region simultaneously with remote users.’ And so they’re scratching their heads going ‘Crud, what am I going to do? How do I put this together?’”

We’ve mostly abandoned the idea that all data must live in a single place; the future of big data looks decidedly decentralized. To keep data from becoming a distributed quagmire, there need to be some unifying themes. There are multiple ways to get there, including data automation, data lakehouses, and global data environments. Undoubtedly, there will be more.

Related Items:

Data Automation Poised to Explode in Popularity, Ascend.io Says

Google Cloud Opens Door to the Lakehouse with BigLake

Hammerspace Hits the Market with Global Parallel File System
