Data professionals with plans to build lakehouses atop the Apache Iceberg table format have two new Iceberg services to choose from: one from Tabular, the company founded by an Iceberg co-creator, and another from Dremio, the query engine developer that is holding its Subsurface 2023 conference this week.
Apache Iceberg has emerged as one of the core technologies upon which to build a data lakehouse, in which the scalability and flexibility of data lakes are merged with the data governance, predictability, and proper SQL behavior associated with traditional data warehouses.
Originally created by engineers at Netflix and Apple to deal with data consistency issues in Hadoop clusters, among other problems, Iceberg is emerging as a de facto data storage standard for open data lakehouses that work with all analytics engines, including open source offerings like Trino, Presto, Dremio, Spark, and Flink, as well as commercial offerings from Snowflake, Starburst, Google Cloud, and AWS.
Ryan Blue, who co-created Iceberg while at Netflix, founded Tabular in 2021 to build a cloud storage service around the Iceberg core. Tabular has been in private beta for a while, but today the company announced that it is open for business with its Iceberg service.
According to Blue, the new Tabular service basically works as a universal table store running in AWS. “It manages Iceberg tables in a customer’s S3 bucket and allows you to connect up any of the compute engines that you want to use with that data,” he says. “It comes with the catalog you need to track what tables and metadata are there, and it comes with integrated RBAC security and access controls.”
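Blue doesn’t spell out the connection mechanics here, but the general pattern is to point whichever compute engine you’re using at a shared Iceberg catalog whose data lives in your own S3 bucket. The sketch below shows that pattern for Spark using Iceberg’s standard REST catalog support; the catalog name, endpoint, warehouse path, and runtime version are placeholders for illustration, not Tabular’s actual settings.

```python
from pyspark.sql import SparkSession

# Minimal sketch: point a Spark session at an Iceberg REST catalog whose
# data files live in an S3 bucket. All names and URIs below are placeholders.
spark = (
    SparkSession.builder
    .appName("iceberg-catalog-demo")
    # Iceberg runtime for Spark (the version here is an assumption)
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.3_2.12:1.2.0")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.catalog-impl",
            "org.apache.iceberg.rest.RESTCatalog")
    .config("spark.sql.catalog.lake.uri", "https://catalog.example.com")      # placeholder endpoint
    .config("spark.sql.catalog.lake.warehouse", "s3://my-bucket/warehouse")   # placeholder bucket
    .getOrCreate()
)

# Any engine pointed at the same catalog sees the same tables and metadata.
spark.sql("SELECT * FROM lake.analytics.events LIMIT 10").show()
```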
In addition to bulk and streaming data load options, Tabular provides automated management tasks for maintaining the lakehouse going forward, including compaction. According to Blue, Tabular’s compaction routines can shrink the size of customers’ Parquet files by up to 50%.
“Iceberg was the foundation for all of this and now we’re just building on top of that foundation,” says Blue, a Datanami 2022 Person to Watch. “It’s a matter of being able to detect that someone wrote 1,000 small files and clean them up for them if they’re using our compaction service, rather than relying on people, data engineers in particular, who are expected to not write a thousand small files into a table, or not write pipelines that are wasteful.”
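Tabular runs that compaction automatically, but the underlying operation is one that open source Iceberg already exposes. As a rough illustration of what the service is doing on a user’s behalf, here is how the same small-file cleanup can be triggered by hand with Iceberg’s rewrite_data_files procedure in Spark; the catalog name, table name, and target file size are assumptions for the example.

```python
# Merge many small Parquet files into fewer, larger ones. "lake" is the
# placeholder catalog name configured in the earlier sketch; the table name
# and target size are likewise placeholders.
spark.sql("""
    CALL lake.system.rewrite_data_files(
        table => 'analytics.events',
        options => map('target-file-size-bytes', '536870912')
    )
""")
```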
Tabular built its own metastore, sometimes called a catalog, which is necessary for tracking the metadata used by the various underlying compute engines. Tabular’s metastore is based on a distributed database engine, and scales better than the Apache Hive metastore, Blue says. “We’re also targeting a lot better features than what’s provided by the Hive metastore or wire-compatible Hive metastores like Glue,” he says.
Tabular’s service will also protect against the ramifications of accidentally dropping a table from the lakehouse. “It’s really easy to be in the wrong database, to drop a table, and then realize, uh oh, I’m going to break a production pipeline with what I just did!” Blue says. “How do I quickly go and restore that? Well, there is no way in Hive metastore to quickly restore a table that you’ve dropped. What we’ve done is we’ve built a way to just keep track of dropped tables and clean them up… That way, you can go and undrop a table.”
Blue, who spoke today during Dremio’s Subsurface event and timed the launch of Tabular to the event, describes Tabular as the bottom half of a data warehouse. Users get to decide for themselves what analytical engine or engines they use to populate the upper half of the warehouse, or lakehouse.
“We’re purposefully going after the storage side of the data warehouse rather than the compute side, because there’s a lot of great compute engines out there. There’s Trino, Snowflake, Spark, Dremio, Cloudera’s suite of tools. There’s a lot of things that are good at various pieces of this. We want all of those to be able to interoperate with one central repository of tables that make up your analytical data sets. We don’t want to provide any one of those. And we actually think it’s important that we separate the compute from the storage at the vendor level.”
Users can get started with the Tabular service for free and can keep using it until they hit the 1TB limit. Blue says that should give testers enough time to familiarize themselves with the service, see how it works with their data, and “fall in love” with the product. “Up to 1TB we’re managing for free,” he says. “Once you get there we have base, professional, and enterprise plans.”
Tabular is available only on AWS today. For more information see www.tabular.io and Blue’s blog post from today.
Dremio Discusses Arctic
Meanwhile, Dremio is also embracing Iceberg as a core component of its data stack, and today during the first day of its Subsurface 2023 conference, it discussed a new Iceberg-based offering dubbed Dremio Arctic.
Arctic is a data storage offering from Dremio that’s built atop Iceberg and available on AWS. The offering brings its own metadata catalog that can work with an array of analytic engines, including Dremio, Spark, and Presto, among others, along with automated routines for cleaning up, or “vacuuming,” Iceberg tables.
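Dremio hasn’t detailed Arctic’s maintenance interface here, but “vacuuming” an Iceberg table generally means expiring old snapshots and removing the data files they leave orphaned. Below is a minimal sketch of the equivalent manual step using Iceberg’s built-in expire_snapshots procedure in Spark, with placeholder catalog and table names and an arbitrary cutoff date.

```python
# Expire snapshots older than a cutoff while keeping the most recent five,
# which lets Iceberg delete the data files no snapshot references anymore.
# Catalog, table, and timestamp values are placeholders for illustration.
spark.sql("""
    CALL lake.system.expire_snapshots(
        table => 'analytics.events',
        older_than => TIMESTAMP '2023-02-01 00:00:00',
        retain_last => 5
    )
""")
```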
Arctic also provides fine-grained access control and data governance, according to Tomer Shiran, Dremio’s founder and chief product officer.
“You can see exactly who changed what, in what table and when, down to the level of what SQL command has changed this table in the last week,” Shiran says, “or was there a Spark job and what is the ID that changed the data. And you can see all the history of every single table in the system.”
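Arctic’s audit view is its own, but the raw material is already part of the Iceberg format: every table exposes its commit history as queryable metadata tables. A sketch of what that looks like at the plain Iceberg level, again with placeholder catalog and table names:

```python
# Each row is one commit to the table: when it happened, which snapshot it
# produced, the operation (append, overwrite, delete, ...), and a summary map
# that includes details about the writer.
spark.sql("""
    SELECT committed_at, snapshot_id, operation, summary
    FROM lake.analytics.events.snapshots
    ORDER BY committed_at DESC
""").show(truncate=False)
```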
Arctic also enables another feature that Dremio calls “data as code.” Just as Git is used to manage source code for computer programs and enable users to easily roll back to previous versions, Iceberg (via Arctic) can enable data professionals to work more easily with data.
Shiran says he’s very excited about the potential for data as code within Arctic. He says there are a variety of obvious use cases for treating data as code, including ensuring the quality of ETL pipelines by using “branching;” enabling experimentation by data scientists and analysts; delivering reproducibility for data science models; recovering from mistakes; and troubleshooting.
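The article doesn’t show Arctic’s exact commands, but the branch-and-merge pattern Shiran describes can be sketched with the open source Project Nessie Spark SQL extensions, which Dremio also develops. The branch, catalog, and table names below are placeholders, and the sketch assumes a Spark session already configured with a Nessie-backed Iceberg catalog named “nessie” and the Nessie SQL extensions enabled.

```python
# Illustrative "data as code" flow: land and validate changes on a branch,
# then merge so downstream readers only ever see main.
spark.sql("CREATE BRANCH etl IN nessie FROM main")
spark.sql("USE REFERENCE etl IN nessie")

# Run the pipeline's writes against the branch and check the results here...
spark.sql("INSERT INTO nessie.analytics.events SELECT * FROM staging_events")

# ...then publish the validated changes by merging the branch back into main.
spark.sql("MERGE BRANCH etl INTO main IN nessie")
```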
“At Dremio, in terms of our product and technology, we’ve worked very hard to make Apache Iceberg easy,” Shiran says. “You don’t really need to understand any of the technology.”
Subsurface 2023 continues on Thursday, March 2. Registration is free at www.dremio.com/subsurface/live/winter2023.
Related Items:
Open Table Formats Square Off in Lakehouse Data Smackdown
Snowflake, AWS Warm Up to Apache Iceberg
Apache Iceberg: The Hub of an Emerging Data Service Ecosystem?