April 27, 2015

The ASF Announces Apache Parquet as a Top-Level Project

FOREST HILL, Md., April 27 — The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today that Apache Parquet has graduated from the Apache Incubator to become a Top-Level Project (TLP), signifying that the project’s community and products have been well-governed under the ASF’s meritocratic process and principles.

“The incubation process at Apache has been fantastic and really the last step of making Parquet a community-driven standard fully integrated within the greater Hadoop ecosystem,” said Julien Le Dem, Vice President of Apache Parquet.

Apache Parquet is an Open Source columnar storage format for the Apache Hadoop ecosystem, built to work across programming languages and a wide range of tools:

  • processing frameworks (MapReduce, Apache Spark, Scalding, Cascading, Crunch, Kite)
  • data models (Apache Avro, Apache Thrift, Protocol Buffers, POJOs)
  • query engines (Apache Hive, Impala, HAWQ, Apache Drill, Apache Tajo, Apache Pig, Presto, Apache Spark SQL)

“At Twitter, Parquet has helped us scale our big data usage by in some cases reducing storage requirements by one third on large datasets as well as scan and deserialization time. This translated into hardware savings as well as reduced latency for accessing the data. Furthermore, Parquet being integrated with so many tools creates opportunities and flexibility regarding query engines,” said Chris Aniszczyk, Head of Open Source at Twitter. “Finally, it’s just fantastic to see it graduate to a top-level project and we look forward to further collaborating with the Apache Parquet community to continually improve performance.”

“Parquet’s integration with other object models, like Avro and Thrift, has been a key feature for our customers,” said Ryan Blue, Software Engineer at Cloudera. “They can take advantage of columnar storage without changing the classes they already use in their production applications.”

“At Netflix, Parquet is the primary storage format for data warehousing. More than 7 petabytes of our 10+ Petabyte warehouse is Parquet formatted data that we query across a wide range of tools including Apache Hive, Apache Pig, Apache Spark, PigPen, Presto, and native MapReduce. The performance benefit of columnar projection and statistics is a game changer for our big data platform,” said Daniel Weeks, Software Engineer at Netflix. “We look forward to working with the Apache community to advance the state of big data storage with Parquet and are excited to see the project graduate to full Apache status.”

“I was extremely happy to see Parquet arrive as an Incubator project,” said Chris Mattmann, Apache Parquet Incubator Mentor, and Chief Architect, Instrument and Science Data Systems Section at NASA Jet Propulsion Laboratory. “After talking with some in its community there was a real match with this columnar data format technology and its community with the way that we do things here at the ASF. Parquet has had an exemplar Incubation, and the project has big things ahead of it. I am encouraging my Data Science Team at NASA to evaluate it for data representation especially as it relates to our science holdings in Earth, planetary and space sciences, and astrophysics.”

Catch Apache Parquet in action at the Hadoop Summit, 9-11 June 2015 in San Jose, California. The Apache Parquet project welcomes contributions and community participation through mailing lists, face-to-face meetups, and user events. For more information, visit http://parquet.apache.org/community/.
