Sears Rides Hadoop Up Retail Mountain
Falling behind Walmart and Target in retail store sales, Sears hopes to rebound by investing heavily in Hadoop. Sears' revenue fell from $50 billion in 2008 to $42 billion last year. However, smarter marketing, made possible by keeping all of its data and targeting customers individually, has produced sizable growth over the last year, with sales for the quarter ending July 28 up 163% from the same quarter in 2011.
“With Hadoop we can keep everything, which is crucial because we don’t want to archive or delete meaningful data,” said Sears Chief Technology Officer Phil Shelley. Sears has seen its big data processing, especially the evaluation of marketing campaigns, speed up since moving its data from Teradata and SAS onto Hadoop. According to Shelley, work that once took six weeks now happens within a week on Hadoop. The company's current 300-node cluster, which holds 2PB of data, lets it keep 100% of its data rather than a meager 10%, Shelley says.
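A quick back-of-envelope calculation puts those figures in perspective. The numbers below come straight from the article; everything else (ignoring HDFS replication overhead, using decimal units) is a simplifying assumption for illustration:

```python
# Back-of-envelope figures from the article; HDFS replication
# overhead is ignored and decimal (1000-based) units are assumed.
total_bytes = 2 * 1000**5          # 2 PB held on the cluster
nodes = 300                        # 300-node cluster
retained_fraction_before = 0.10    # only ~10% of data kept pre-Hadoop

per_node_tb = total_bytes / nodes / 1000**4
retained_before_tb = total_bytes * retained_fraction_before / 1000**4

print(f"~{per_node_tb:.1f} TB of data per node")
print(f"pre-Hadoop retention would have covered only ~{retained_before_tb:.0f} TB")
```

In other words, roughly 6.7TB of data sits on each node, and under the old 10% retention policy only about 200TB of that 2PB would have survived.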
Sears's view of and strategy for big data are interesting ones. Along with serving as Sears's CTO and Executive VP, Shelley runs MetaScale, a Sears subsidiary that aims to provide Hadoop services to other companies, much as Amazon does with Amazon Web Services.
To compete with Amazon, Sears would have some catching up to do. On the other hand, its big data efforts currently surpass Walmart's, which only just started running ten Hadoop test nodes for experimental e-commerce analysis. Sears did that in 2010.
It is unfair to compare Sears, which historically makes its money from physical stores, to online retailers like Amazon. It may not even be fair to compare it to Target and Walmart, as Sears has more of an appliance focus while Target and Walmart are more general.
That said, Sears, and specifically MetaScale, wants a place in the big data market, and it holds some interesting viewpoints on it. For example, Shelley sees little value in ETL in the modern era.
“ETL is an antiquated technique, and for large companies it’s inefficient and wasteful because you create multiple copies of data,” Shelley says. “Everybody used ETL because they couldn’t put everything in one place, but that has changed with Hadoop, and now we copy data, as a matter of principle, only when we absolutely have to copy.”
Shelley's principles are sound and may have contributed to the drastic reported reduction of $500,000 per year in mainframe costs. Some, like Cloudera CEO Mike Olson, warn against a complete departure from ETL. But to Shelley the move is intuitive. “If in three years you come up with a new query or analysis, it doesn’t matter because there’s no schema,” Shelley says. “You just go get the raw data and transform it into any format you need.”
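The schema-on-read approach Shelley describes can be sketched in a few lines: raw records are stored untouched, and a schema is imposed only when a new question is asked of the data. The pipe-delimited sales records and field layout below are hypothetical, purely for illustration:

```python
# Hypothetical raw sales records, kept verbatim as they would sit on HDFS;
# no schema was imposed when the data was written.
RAW_LINES = [
    "2011-07-28|store-042|SKU-9981|2|19.99",
    "2011-07-28|store-017|SKU-1204|1|549.00",
]

def apply_schema(line):
    """Schema-on-read: parse a raw record only at query time."""
    date, store, sku, qty, price = line.split("|")
    return {"date": date, "store": store, "sku": sku,
            "qty": int(qty), "revenue": int(qty) * float(price)}

# A "new query three years later": total revenue per store.
# No ETL step ever reshaped the raw lines to anticipate this question.
totals = {}
for rec in map(apply_schema, RAW_LINES):
    totals[rec["store"]] = totals.get(rec["store"], 0.0) + rec["revenue"]

print(totals)
```

The point of the sketch is that the parsing logic lives with the query, not with the data: a different analysis tomorrow would simply bring a different `apply_schema`, while the raw records stay as a single uncopied source of truth.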