
A New Benchmark for Big Data
Database expert Chaitanya Baru keeps one foot in the world of high performance computing via his role at the San Diego Supercomputer Center (SDSC), and another firmly planted on enterprise big data soil.
The data-intensive systems researcher has garnered a fair amount of attention lately with his plan to build a new stable benchmark for big data—one that pulls elements from both of those worlds he inhabits.
Baru and team’s stated mission with their BigData Top 100 project is to “provide academia with a way to evaluate new techniques for big data in a realistic setting; industry with a tool to drive development; and customers with a standard way to make informed decisions about big data systems.”
However, without identifying the variables and reflecting how dynamic and frequently changing they are, the team would simply be creating another static benchmark, useful only in certain settings. This is the foundation upon which the benchmark’s value rests, Baru told us in a conversation last week.
He explained that the project will be iterative in nature, with the first benchmark serving as the basis for the next, and so on, until a constantly shifting benchmark emerges that maintains standardization in the face of change. Baru says this will be an open benchmark-development process, based on input from a steering committee that balances industry and academic perspectives.
The BigData Top 100 effort will culminate in an end-to-end, application-layer benchmark for measuring the performance of big data applications, with the recognition that the benchmark itself must evolve to meet the needs of ever-changing applications. The final result is expected this year, following critical input from the vendor, academic, and user communities attached to the project on how to capture the evolving elements and fold them into the benchmark itself.
Baru says that any new big data benchmark should factor in the addition of new feature sets, large data sizes, large-scale and evolving system configurations, shifting loads, and heterogeneous technologies of big data platforms.
According to the team, the following are critical elements of an ideal big data benchmark:
• Simplicity: Following the dictum that “Everything should be made as simple as possible, but no simpler,” the benchmark should be technically simple to implement and execute. This is challenging, given the tendency of any software project to overload the specification and functionality, often straying from the most critical and relevant aspects.

• Ease of benchmarking: The costs of benchmark implementation/execution and any audits should be kept relatively low. The benefits of executing the benchmark should justify its expense, a criterion that is often underestimated during benchmark design.

• Time to market: Benchmark versions should be released in a timely fashion in order to keep pace with the rapid market changes in the big data area. A development time of 3 to 4 years, common for industry consortia, would be unacceptable in the big data application space. The benchmark would be outdated and obsolete before it is released!

• Verifiability of results: Verification of results is important, but the verification process must not be prohibitively expensive. Thus, to ensure correctness of results while also attempting to control audit costs, the BigData Top 100 List will provide automatic verification procedures along with a peer-review process via a benchmark steering committee.
At the core, says Baru, it is the non-static, concurrent development of the benchmark that differentiates it from the slew of static application and hardware benchmarks already out there. During our chat last week, he noted that while established communities like HPC have had decades to work out their primary measurements (FLOPS, for example), big data benchmark efforts to date have been scattered, tied to specific application areas, and lacking consistent factors.
For instance, the Graph 500 has become a popular benchmark for graph problems, but the results of its algorithmic test would be meaningless when gauging performance on a key-value problem. The same is true of the TeraSort benchmark, which applies only to a specific subset of real-world applications. On that note, the standard measurements for distributed systems, such as those used to rank high performance computing installations, are themselves not a fit for the new data-intensive world. Baru said that even for a traditional supercomputer center like SDSC, the real concerns revolve around the massive wells of data generated by scientific applications, including large simulations.
Further, being able to factor in price for performance is a critical element, he argued, noting that some of the more successful big data-oriented benchmarks account for it but are too narrow in scope. “While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TPC), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems,” he claims.
The vendor angle is important here since there is something in it for them: namely, a standard for comparing performance in the context of price. With the help of a team of researchers from Greenplum, Oracle, IBM, Cisco, and elsewhere, Baru hopes to bring a new eye to price/performance metrics for large-scale data projects. The goal is to create a new standard by which “big data” vendors can begin comparing their wares along a benchmark tailored to reflect real data-intensive workloads.
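To make the price/performance idea concrete, here is a minimal sketch in Python in the spirit of TPC-style dollars-per-throughput metrics. The system names, throughput figures, and prices are invented for illustration; no actual BigData Top 100 metric had been finalized at the time of writing.

```python
# Hypothetical TPC-style price/performance comparison.
# All figures below are assumptions for the example.

systems = [
    # (name, throughput in queries per hour, total system cost in USD)
    ("Cluster A", 12_000, 250_000),
    ("Cluster B", 20_000, 600_000),
]

for name, queries_per_hour, cost_usd in systems:
    # Lower dollars per unit of throughput means better
    # price/performance, mirroring metrics such as TPC's $/tpmC.
    price_perf = cost_usd / queries_per_hour
    print(f"{name}: {queries_per_hour:,} queries/hr "
          f"at ${price_perf:,.2f} per query/hr")
```

By this (assumed) yardstick, the cheaper, slower Cluster A comes out ahead, which is exactly the kind of trade-off a price-aware benchmark is meant to surface.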
It should be stressed that while there is vendor support behind the initiative, this is still a very academically-rooted effort—and one that takes its cues from some of the highest-performing systems on the planet. It began as an NSF-funded workshop at the Center for Large-Scale Data Systems Research and the San Diego Supercomputer Center.
Baru is a key figure at the supercomputing center, and he was working in the field of “big data” before it ever had the mainstream moniker, both through his work with the Data Intensive Computing Environments (DICE) Group and at his earlier post at IBM in the early-to-mid 1990s leading large-scale database research. During his tenure there, he was one of three IBMers who led the design and development of DB2 Parallel Edition, which hit the market in 1995 and shook up the database space. As head of the large-scale, data-intensive efforts underway now at SDSC, he is seeing the triple-V (volume, velocity, variety) problems of big data firsthand at an extreme scale.
While the problems of supercomputing centers’ scientific simulations and applications might sound rather removed from the real-world concerns of enterprise shops struggling to keep up with their needs for big, fast data handling, there are some useful lessons Baru is bringing over from the big box world of supercomputing—at least in concept.
“We think that data is the real context for the FLOPS,” said Baru, noting that the traditional method of gauging the power of a system was to measure its Floating-point Operations Per Second (hence the acronym). While he says hardware optimizations are still critical, the software stack needs to continue to evolve to meet the increasingly diverse needs of the scientific community.
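For context, the FLOPS yardstick Baru references is typically a back-of-the-envelope hardware calculation. A sketch with hypothetical cluster figures might look like this:

```python
# Theoretical peak FLOPS for a hypothetical cluster.
# All hardware figures are assumptions for the example.

nodes = 1_024            # compute nodes in the cluster
cores_per_node = 16      # CPU cores per node
clock_ghz = 2.5          # core clock in GHz
flops_per_cycle = 8      # floating-point ops per core per cycle (vector units)

peak_flops = nodes * cores_per_node * clock_ghz * 1e9 * flops_per_cycle
print(f"Theoretical peak: {peak_flops / 1e12:.1f} TFLOPS")  # ~327.7 TFLOPS
```

Note that nothing in this arithmetic says anything about data: how fast the system can ingest, move, or query it. That gap is precisely what a data-centric benchmark is meant to fill.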