
Fully Hydrate Your Lake in 8 Weeks or Less, Zaloni Says

Zaloni today rolled out Data Lake in a Box, a soup-to-nuts offering for getting a fully governed Hadoop cluster up and running in eight weeks or less. The offering includes Hadoop software, data management middleware, and implementation services. “Everything but the hardware,” Zaloni’s VP of marketing says.
While Hadoop clusters are powerful data storage and processing machines, they're not easy to implement or manage. There are many configuration settings that require skill and experience to get right. And once the cluster is configured, ingesting data in a way that makes it actually usable is not a trivial matter.
It’s not uncommon to hear about six-month Hadoop deployments. In these situations, much of the time is spent building and implementing data management processes that ensure the data is governed, discoverable, and accessible to the end-users who will (eventually) be allowed access into the cluster, or at least a part of it.
Zaloni is hoping to shortcut these extended deployments by bringing together all the software and services necessary to get a general-purpose and governed Hadoop cluster up and running in about two months.
“We’re helping companies get fully hydrated in under eight weeks,” says Zaloni vice president Kelly Schupp. “We’re reducing the time and effort it takes by up to 75%, and at the same time we’re providing the kind of visibility and governance support they’re going to need, because, as that data is getting ingested, it’s being tagged and cataloged.”
Data Lake in a Box combines Bedrock, Zaloni’s data lake management offering, and Mica, its self-service user access offering, with its Ingestion Factory software and the user’s choice of Hadoop distribution, including plain vanilla Apache Hadoop or, for an extra fee, the Hadoop distributions from Cloudera or MapR.
It’s all about quickly creating a fully governed Hadoop cluster that will serve the needs of the business for many years, says Tony Fisher, Zaloni’s senior VP of strategy and business development.
While eight weeks is a big improvement over six months, it’s still not as quick as some offerings that promise to create ready-to-use Hadoop clusters in a matter of days. The key difference there is quality, Fisher says.
“There’s a big difference between creating a data lake and a data swamp,” Fisher says. “You can ingest anything into a data lake in three days. But the fact of the matter is it doesn’t have the data quality, the rigor, or the types of things you’re going to need to do productive analytics on it.”
The offering doesn’t include analytics; it’s up to the user to bring those. That’s fine, because most customers these days are developing their own analytics in Python or R using data science notebooks, or hooking up BI tools such as Excel, Tableau, or Qlik to visualize and manipulate the data.
Companies that adopt Hadoop are finding that it takes more time and effort than they expected to get good results, says Nik Rouda, an analyst with Enterprise Strategy Group.
“Operationalizing data lakes has proven much harder and taken much longer than most enterprises would want,” he states in Zaloni’s press release. “This process typically involves manually cobbling together a large number of disparate tools, and then trying to support that mess going forwards. Zaloni integrates all the essential capabilities and best practices and packages them up, delivering quality and productivity right out of the box.”
Zaloni says it’s getting traction with Bedrock and Mica, which come together in a single offering for the first time with the new Data Lake in a Box offering. The company says bookings and revenues grew by 3x from 2015 to 2016, and it’s hoping the new offering continues that momentum.
One of the Durham, North Carolina company’s customers, Emirates Integrated Telecommunications Company (also known simply as du), will be in San Jose, California this week to present at the Strata + Hadoop World show. The company will discuss its experience with Zaloni’s products. Other prominent Zaloni customers include SCL Health, CDS Global, and Pechanga Resort and Casino.
Related Items:
Dr. Elephant Steps Up to Cure Hadoop Cluster Pains
IBM Taps Zaloni to Ride Herd on Hadoop