

Apache Iceberg, the table format that ensures consistency and streamlines data partitioning in demanding analytic environments, is being adopted by two of the biggest data providers in the cloud, Snowflake and AWS. Customers that use big data cloud services from these vendors stand to benefit from the adoption.
Apache Iceberg emerged as an open source project in 2018 to address longstanding concerns about the correctness and consistency of data stored in Apache Hive tables. Hive was originally built as a distributed SQL store for Hadoop, but in many cases, companies continue to use Hive as a metastore even though they have stopped using it as a data warehouse.
Engineers at Netflix and Apple developed Iceberg’s table format to ensure that data stored in Parquet, ORC, and Avro formats isn’t corrupted as it’s accessed by multiple users and multiple frameworks, including Hive, Apache Spark, Dremio, Presto, Flink, and others.
The Java-based Iceberg eliminates the need for developers to build additional constructs in their applications to ensure data consistency in their transactions. Instead, the data just appears as a regular SQL table. In addition to atomic consistency, Iceberg also delivers finer-grained data partitioning and better schema evolution. The open source project won a Datanami Editor’s Choice Award last year.
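For illustration, this is roughly what Iceberg’s hidden partitioning and schema evolution look like in Spark SQL; the table and column names here are invented, and the sketch assumes a Spark catalog already configured for Iceberg:

```sql
-- Create an Iceberg table partitioned by day, derived from a timestamp
-- column. Queries simply filter on event_time; Iceberg maps those
-- filters to partitions automatically.
CREATE TABLE db.events (
    event_time TIMESTAMP,
    user_id    BIGINT,
    payload    STRING
) USING iceberg
PARTITIONED BY (days(event_time));

-- Schema evolution is a metadata-only change; no data files are rewritten.
ALTER TABLE db.events ADD COLUMNS (region STRING);
```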
At the re:Invent conference in late November, AWS announced a preview of Iceberg working in conjunction with Amazon Athena, its serverless Presto query service. The new offering, dubbed Amazon Athena ACID transactions, uses Iceberg under the covers to make the data served from Athena more reliable.
“Athena ACID transactions enables multiple concurrent users to make reliable, row-level modifications to their Amazon S3 data from Athena’s console, API, and ODBC and JDBC drivers,” AWS says in its blog. “Built on the Apache Iceberg table format, Athena ACID transactions are compatible with other services and engines such as Amazon EMR and Apache Spark that support the Iceberg table format.”
The new service simplifies life for big data users, AWS says.
“Using Athena ACID transactions, you can now make business- and regulatory-driven updates to your data using familiar SQL syntax and without requiring a custom record locking solution,” the cloud giant says. “Responding to a data erasure request is as simple as issuing a SQL DELETE operation. Making manual record corrections can be accomplished via a single UPDATE statement. And with time travel capability, you can recover data that was recently deleted using just a SELECT statement.”
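In practice, those operations are plain SQL statements. Here is a minimal sketch against a hypothetical Iceberg table in Athena; the table, columns, and values are invented, and the exact time-travel syntax has varied across Athena engine versions:

```sql
-- Honor a data-erasure request with a single DELETE:
DELETE FROM customer_events WHERE customer_id = '12345';

-- Make a manual record correction with a single UPDATE:
UPDATE customer_events SET country = 'DE' WHERE event_id = 987;

-- Recover recently deleted rows by querying an earlier table snapshot:
SELECT *
FROM customer_events
FOR TIMESTAMP AS OF TIMESTAMP '2022-01-01 00:00:00 UTC';
```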
Not to be outdone, Snowflake has also added support for Iceberg. According to a January 21 blog post by James Malone, a senior product manager with Snowflake, support for the open Iceberg table format augments Snowflake’s existing support for querying data that resides in external tables, which it added in 2019.
External tables benefit Snowflake users by allowing them to explicitly define the schema before the data is queried, as opposed to determining the data schema as it’s being read from the object store, which is how Snowflake traditionally operates. Knowing the table layout, schema, and metadata ahead of time benefits users by offering faster performance (due to better filtering or partitioning), easier schema evolution, the ability to “time travel” across the table, and ACID compliance, Malone writes.
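For context, here is a minimal sketch of defining such an external table in Snowflake SQL; the stage, table, and column names are invented:

```sql
-- Declare the schema up front over Parquet files in object storage.
-- Each column is defined as an expression over the per-row VALUE variant.
CREATE OR REPLACE EXTERNAL TABLE sales_ext (
    sale_date DATE         AS (value:sale_date::DATE),
    amount    NUMBER(10,2) AS (value:amount::NUMBER(10,2))
)
LOCATION = @my_s3_stage/sales/
FILE_FORMAT = (TYPE = PARQUET)
AUTO_REFRESH = TRUE;  -- pick up new files as they land in the stage
```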
“Snowflake was designed from the ground up to offer this functionality, so customers can already get these benefits on Snowflake tables today,” Malone continues. “Some customers, though, would prefer an open specification table format that is separable from the processing platform because their data may be in many places outside of Snowflake. Specifically, some customers have data outside of Snowflake because of hard operational constraints, such as regulatory requirements, or slowly changing technical limitations, such as use of tools that work only on files in a blob store. For these customers, projects such as Apache Iceberg can be especially helpful.”
While Snowflake maintains that customers ultimately benefit by using its internal data format, it acknowledges that there are times when the flexibility of an externally defined table will be necessary. Malone says there “isn’t a one-size-fits-all storage pattern or architecture that works for everyone,” and that flexibility should be a “key consideration when evaluating platforms.”
“In our view, Iceberg aligns with our perspectives on open formats and projects, because it provides broader choices and benefits to customers without adding complexity or unintended outcomes,” Malone continues.
Other factors that tipped the balance in favor of Iceberg include the Apache Software Foundation being “well-known” and “transparent,” and the project not being dependent on a single software vendor. Iceberg has succeeded “based on its own merits,” Malone writes.
“Likewise, Iceberg avoids complexity by not coupling itself to any specific processing framework, query engine, or file format,” he continues. “Therefore, when customers must use an open file format and ask us for advice, our recommendation is to take a look at Apache Iceberg.
“While many table formats claim to be open, we believe Iceberg is more than just ‘open code’; it is an open and inclusive project,” Malone writes. “Based on its rapid growth and merits, customers have asked for us to bring Iceberg to our platform. Based on how Iceberg aligns to our goals with choosing open wisely, we think it makes sense to incorporate Iceberg into our platform.”
This embrace of Iceberg by AWS and Snowflake is even more noteworthy considering that both vendors have a complicated history with open source. AWS has been accused of taking free open source software and building profitable services atop it, to the detriment of the open source communities that originally developed the software (an accusation leveled by backers of Elasticsearch). In its defense, AWS says it seeks to work with open source communities and to contribute its changes and bug fixes to projects.
Snowflake’s history with open source is even more complex. Last March, Snowflake unfurled an assault on open source in general, calling into question the computing community’s longstanding assumption that “open” automatically equals better. “We see table pounding demanding open and chest pounding extolling open, often without much reflection on benefits versus downsides for the customers they serve,” the Snowflake founders wrote.
In light of that history, Snowflake’s current embrace of Iceberg is even more remarkable.
Related Items:
Tabular Seeks to Remake Cloud Data Lakes in Iceberg’s Image
Apache Iceberg: The Hub of an Emerging Data Service Ecosystem?
Cloud Backlash Grows as Open Source Gets Less Open