Data Catalogs Emerge as Strategic Requirement for Data Lakes
If the exhibitors at last week’s Strata + Hadoop World expo are any indication of what’s happening down on the street, data cataloging is evolving from a nice-to-have into a necessity for organizations looking to capitalize on big data.
Hadoop’s so-called “junk drawer” problem has been well-documented. It stems largely from the flexible schema-on-read approach, where data is structured only when it’s finally accessed from the data lake, as opposed to the traditional ETL approach of transforming data when it’s originally loaded into the data warehouse.
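The schema-on-read trade-off can be illustrated with a short sketch (the file contents, field names, and helper functions below are invented for illustration, not drawn from any vendor's product): raw records land untyped and effortlessly, and structure is imposed only at query time, which is exactly when problems surface.

```python
import json

# Schema-on-write (warehouse/ETL style): validate and type at load time.
def etl_load(raw_lines, schema):
    rows = []
    for line in raw_lines:
        record = json.loads(line)
        # Records are typed before they are stored; bad data fails fast.
        rows.append({field: cast(record[field]) for field, cast in schema.items()})
    return rows

# Schema-on-read (data lake style): store raw, interpret only when queried.
def lake_store(raw_lines):
    return list(raw_lines)  # dumped as-is: easy in, structure deferred

def lake_read(stored, schema):
    for line in stored:
        record = json.loads(line)
        # Structure is imposed here, at access time; bad data surfaces late.
        yield {field: cast(record.get(field)) for field, cast in schema.items()}

raw = ['{"id": "1", "amount": "9.99"}', '{"id": "2", "amount": "12.50"}']
schema = {"id": int, "amount": float}
print(etl_load(raw, schema))
print(list(lake_read(lake_store(raw), schema)))
```

Both paths yield the same typed rows for clean input; the difference is that the lake accepts anything at write time, which is how the "junk drawer" fills up.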
In short, getting data into Hadoop is easy, but finding it and getting it back out again can be hard. All sorts of vendors are now looking to address this dilemma, which touches many aspects of big data analytics, including data quality and security. Having a catalog of the data stored in Hadoop seems like a good idea, and there are a number of vendors providing that.
Alex Gorelik, CEO and founder of Waterline Data, which provides data cataloging software for Hadoop and other big data systems, says data professionals are reluctant to open Hadoop to downstream users without a better accounting of the actual data.
“The data lake looks like a flea market,” Gorelik tells Datanami. “It’s all in there somewhere, but how do you find it? It’s a problem for data scientists and data stewards because they can’t give people access until they know what’s in there.”
Gorelik says that while tools like the open source Apache Atlas, which is backed by Hortonworks (NASDAQ: HDP), and Cloudera Navigator provide a good technical foundation for addressing data cataloging and master data management (MDM) challenges, they don’t go far enough to solve the problem. Waterline addresses it by using “tags” to track the lineage of every piece of data.
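As an illustration only (the class and field names below are invented, not Waterline's actual API), tag-based lineage can be modeled as tags that follow a dataset through every derivation step, so a sensitivity label applied at the source is still visible on a downstream report:

```python
# Illustrative sketch of tag-based lineage (invented names, not Waterline's API).
class Dataset:
    def __init__(self, name, tags=(), parents=()):
        self.name = name
        self.parents = list(parents)
        # A derived dataset inherits the tags of everything it came from.
        self.tags = set(tags)
        for parent in self.parents:
            self.tags |= parent.tags

    def lineage(self):
        """Walk back through parents to the original sources."""
        seen = []
        stack = list(self.parents)
        while stack:
            ds = stack.pop()
            seen.append(ds.name)
            stack.extend(ds.parents)
        return seen

raw = Dataset("crm_dump.csv", tags={"pii", "customer"})
cleaned = Dataset("customers_clean", parents=[raw])
report = Dataset("q3_churn_report", tags={"finance"}, parents=[cleaned])

print(report.tags)      # the inherited "pii" tag survives two derivation steps
print(report.lineage())
```

The design point is that inheritance is automatic: nobody has to remember that the quarterly report ultimately came from a file containing personal data.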
With Waterline, Hadoop users can continue ingesting data as they did before, while relying on the software to keep it somewhat organized. Apache Lucene sits under the covers to power searches, while an Amazon-like user interface and “shopping cart” process lets analysts check out when they’ve found their data.
It’s not a license to be messy with your data, but at least it takes the burden off of users to manually track their data. “People used to have careful directories. But these days, they can’t keep track of their directories,” Gorelik says. “You have millions of files. You should organize them as well as you can. [With Waterline software] it doesn’t matter where the file is, as long as you can find it.”
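The “it doesn’t matter where the file is, as long as you can find it” idea boils down to full-text search over file metadata. A toy version (a hand-rolled inverted index with made-up paths and tags, standing in for the Lucene engine Waterline actually uses) might look like:

```python
from collections import defaultdict

# Toy inverted index over file metadata, standing in for Lucene.
index = defaultdict(set)

def catalog(path, tags):
    """Index a file under each of its metadata tags."""
    for tag in tags:
        index[tag.lower()].add(path)

def search(*terms):
    """Return files matching every search term, wherever they live."""
    hits = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*hits) if hits else set()

catalog("/lake/raw/2016/cust_0417.csv", ["customer", "pii", "csv"])
catalog("/lake/tmp/junk_drawer/x1.parquet", ["customer", "orders"])
catalog("/lake/curated/orders.parquet", ["orders", "finance"])

print(search("customer"))            # finds both files, regardless of directory
print(search("customer", "orders"))  # narrows to the one matching both terms
```

A file buried in a junk-drawer directory turns up in the same result set as a carefully filed one, which is the whole point of the catalog approach.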
Collibra is another master data management (MDM) software vendor helping customers keep track of their Hadoop-resident data using the catalog approach. The company, which recently moved its headquarters to New York City, has an eight-year history of providing data governance solutions to customers in healthcare, financial services, and other industries.
“What we have is a technology platform that has the capability to keep track of processes around data, the metadata and organization and roles and responsibility for data,” Daniel Sholler, director of product marketing at Collibra, tells Datanami. “We keep track of all the technical connections of all the data because you need to know that stuff. But it turns out that stuff isn’t the interesting stuff.”
Instead, Collibra exposes a set of applications that make it relatively easy for end users to get access to data, if they are authorized to access it. That’s the “interesting” stuff that Sholler was referring to. Data access is one component of a collection of data governance solutions that Collibra is offering, and the scope of that offering will expand in the coming weeks.
Another vendor that’s plying the fruitful waters of data cataloging is Alation. The company originally designed its product to “learn” about data connections by observing how analysts interact with data. But just providing data cataloging wasn’t enough, the company says. So last week Alation announced that its version 4.0 update will also track the queries that run against the data, alongside the data itself.
Tracking queries and data, says Alation CTO Venky Ganti, will provide critical context that’s required for addressing the needs of data stewards and customers, including answering questions like “Where can I find data to answer my question?” “Can I trust this data?” “What are the data semantics in order to use it?” and “Who can answer my question about this data set?”
“Experts who understand certain datasets often play the stewardship role of ensuring that data consumers can make accurate and effective use of data,” Ganti says in a blog post. “More recently, data governance initiatives have started to assign formal stewardship responsibility.”
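Query-log mining of this kind can be sketched simply (a hypothetical illustration with invented names, not Alation's implementation): count which tables each analyst's queries touch, then use the counts to answer "where do people find this data?" and "who can I ask about it?"

```python
import re
from collections import Counter, defaultdict

# Hypothetical sketch of mining query logs for catalog context
# (not Alation's actual implementation).
table_popularity = Counter()
table_experts = defaultdict(Counter)

def observe(user, sql):
    """Record which tables a query touches and who ran it."""
    for table in re.findall(r"(?:from|join)\s+(\w+)", sql, re.IGNORECASE):
        table_popularity[table] += 1
        table_experts[table][user] += 1

observe("ana", "SELECT * FROM orders JOIN customers ON o.cid = c.id")
observe("ana", "SELECT count(*) FROM orders")
observe("bob", "SELECT region, sum(total) FROM orders GROUP BY region")

print(table_popularity.most_common(1))         # most-queried table
print(table_experts["orders"].most_common(1))  # likely expert to ask about it
```

Usage frequency stands in for trust, and the heaviest user of a table becomes its de facto steward, roughly the stewardship dynamic Ganti describes.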
Other companies offering data cataloging functionality include Podium Data, which announced a $9.5 million Series A round just prior to the show. Zaloni also unveiled its Bedrock Data Lake Manager (DLM) product, which uses data cataloging to help manage storage more effectively. At Strata, it launched a new version of Mica, its data preparation tool, which introduces a new “shopping cart”-like experience.
That “shopping cart” metaphor was heard often on the Strata expo floor during discussions of data catalogs and big data management. You can expect to see that show up in MDM and data quality tools more often.
Informatica, the big dog of last-gen ETL tools that’s hungering for a piece of the big data pie, also updated its data lake management product, called Data Lake Management, to include more capabilities. Specifically, the product combines data cataloging, stream data capture, Hadoop job management, security, and cloud connectors in a single unified product.
The lack of a centralized data lake management point eats up analysts’ time and hurts productivity, says Amit Walia, executive vice president and chief product officer for Informatica. “Ease of use and a delightful user experience along with robust governance and metadata capabilities are critical for getting business value out of data lakes,” he says in a statement.
According to Gartner analysts Guido De Simoni and Roxane Edjlali, enterprise metadata management, including data cataloging, has become a “required discipline.” “Failure to recognize this will lead to sustained siloed behavior and loss of business value,” they wrote earlier this year.
While data silos will inevitably be with us for a while, we don’t have to behave as if the data is trapped in a single location. As the Gartner analysts rightly point out, organizations that can get a unified view of their data will find greater business value. It’s becoming clear that data catalogs will be one way of providing that visibility.