Why Big Data and Data Scientists Are Overrated
What does it take to get value out of data? Many organizations assume that you need a big collection of data and a highly skilled data scientist to spin all those 1s and 0s into dollar signs. In reality, companies need neither of those things to be successful with data.
One of the biggest mistakes that organizations can make with their data analytics projects is to assume they need a data scientist at the very beginning. According to Daniel Mintz, chief data evangelist with Looker, organizations are much better off starting lower on the data analytics food chain and working their way up as they gain proficiency.
“I’ve seen cases where people hired a data scientist way before they’re ready,” Mintz tells Datanami. “They don’t actually have any data, and even if they do, it’s dirty and dispersed across a whole bunch of places. The data scientist, who doesn’t necessarily understand their business, arrives and says ‘Where’s the nicely curated data set that you want me to use to solve problems?’ And they say, ‘Oh, we didn’t know that was a prerequisite.’”
The fact is, data scientists spend about three-quarters of their time doing data janitorial work – collecting, transforming, and cleaning data – rather than building the complex predictive models that they were actually hired for. That equals frustration for data scientists who had high hopes of making an impact, and sour grapes for the people who hired them.
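To make that janitorial work concrete, here is a minimal sketch in Python with pandas. It is illustrative only: the file name and the column names (region, signup_date, revenue) are hypothetical placeholders, not drawn from anything in the article.

```python
# A minimal sketch of the "data janitorial" step, using pandas.
# File and column names here are hypothetical examples.
import pandas as pd

raw = pd.read_csv("customers.csv")

# Normalize inconsistent text values before any analysis happens.
raw["region"] = raw["region"].str.strip().str.lower()

# Parse dates that arrive in mixed formats; unparseable entries become NaT.
raw["signup_date"] = pd.to_datetime(raw["signup_date"], errors="coerce")

# Drop exact duplicates and rows missing the fields needed downstream.
clean = (
    raw.drop_duplicates()
       .dropna(subset=["signup_date", "revenue"])
)

# Write out a curated set that later modeling work can rely on.
clean.to_csv("customers_clean.csv", index=False)
```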
Organizations should start with the basics and work up from there. Instead of falling for “shiny object” syndrome and assuming you need a big Hadoop data lake or a neural network to solve a problem, seek the simplest answer.
“People make a mistake if they jump right to the most sophisticated tool, because they’re wasting a lot of time,” Mintz says. “The reality is a lot of problems are quite tractable with a simple regression. And some problems don’t even need that. You can just look at the data and see what’s happening.”
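As a hedged illustration of “try the simple thing first,” the following sketch fits an ordinary least-squares line with nothing but NumPy. The data is synthetic, and the interpretation of x and y (weeks and signups) is invented for the example.

```python
# A minimal sketch of "a lot of problems are tractable with a simple
# regression." Synthetic data; variable meanings are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(50, dtype=float)                # e.g., weeks since launch
y = 3.0 * x + 10 + rng.normal(0, 5, size=50)  # e.g., weekly signups

# Ordinary least squares: fit y = a*x + b via a design matrix.
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"trend: {a:.2f} signups/week, baseline: {b:.2f}")
```

If a fitted slope like this already answers the business question, there is no need to reach for anything heavier.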
Mintz’s personnel advice? Hire data generalists who can do the time-consuming data legwork that needs to be done before more highly skilled (and highly paid) data scientists come in to do their highly specialized thing.
“The really key skill is having somebody who can take what is fundamentally a business question and translate that into a data question,” says Mintz, who previously worked at MoveOn.org and other data-intensive operations. “That’s the key skill. When you’re not big enough to have specialists, the business people, who aren’t data people, will know what the right business questions are.”
Mintz recommends pairing a SQL-loving analyst with an ETL-loving engineer to help the business prepare to answer questions with data. As they document their data stores, define organization-specific metrics, and create workflows that transform and combine data in reliable and useful ways, they will start to see how the superpowers of real data scientists could best be used.
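Here is a minimal sketch of what that analyst/engineer pairing might produce, using Python’s built-in sqlite3 module so it runs anywhere. The orders table and the “revenue per active customer” metric are hypothetical stand-ins for whatever organization-specific definitions a real team would write down.

```python
# A minimal sketch: the "engineer" half loads and shapes data, the
# "analyst" half defines an org-specific metric in SQL. Table, column,
# and metric names are hypothetical examples.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (customer_id INTEGER, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, '2019-09-03', 120.0),
        (1, '2019-10-01',  80.0),
        (2, '2019-10-12', 200.0);
""")

# One agreed-upon definition of "monthly revenue per active customer,"
# written down once so every report computes it the same way.
metric_sql = """
    SELECT strftime('%Y-%m', order_date)             AS month,
           SUM(amount) / COUNT(DISTINCT customer_id) AS revenue_per_customer
    FROM orders
    GROUP BY month
    ORDER BY month;
"""

for month, value in con.execute(metric_sql):
    print(month, round(value, 2))
```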
“As they scale up,” Mintz says, “they realize ‘Now we have three or five analysts, now we’re ready to add a data scientist, because now we’ve got a handle on what our data means, we know where things live in our schema, we know what problems might be tractable using more sophisticated algorithms.'”
There are real benefits to be had by analyzing data, but like everything else in life, you must walk before you run. Hiring the right people to make up your data analytics team, and hiring them in the right order, is important.
“Folks are looking for a magical unicorn who can do it all, when in reality it’s a team sport,” Mintz says. “You really need to be thinking about how does the team work together, and how as a team do you cover all the bases. That starts with somebody who’s a utility player who can play all the positions, then you start to specialize.”
Follow the Data Crumb
Just as you don’t start with a data scientist, you shouldn’t start with big data, either. In fact, it’s much better to start with the right piece of data, however small that is.
For Wolf Ruzicka, the chairman of Washington D.C.-based analysis firm EastBanc Technologies, it starts with a single crumb of data.
“Just the other day I ran into a company that has accumulated 50PB of data,” Ruzicka tells Datanami. “That’s great. But when you compare them against competitors…all the metrics — profit margin, revenue, growth, size — really are the same. They were very proud of that 50PB data lake. But really when I look at it, it must have turned into a data swamp.”
When EastBanc engages a new client, there is a flurry of activity and brainstorming meetings as EastBanc analysts do their best to understand the business problem at hand, and the potential data available to solve it.
The company starts small and works quickly. The customer may have more pressing questions they want answered, but starting with the low-hanging fruit in easily explored data is a good way to get going, and to validate that the analytics effort is worthwhile. Setting a hard initial deadline of two to four weeks helps encourage fast iteration.
That first piece of useful data becomes a “data crumb” that typically leads to further success, Ruzicka says. “That’s what we call it,” he says. “One data crumb of relevant data, and we iterate from there.”
When you draw it out on a whiteboard, it looks very different than a typical big data architectural drawing. “It’s more of a data tree that you’re starting to groom,” he says. “You may end up with big data. But you don’t start with big data. You essentially turn it upside down.”
This approach is anathema to the current wave of big data thinking, which says one should throw all of one’s data into Hadoop and hope that magical algorithms can make sense of it down the line. That approach may work, but most likely through sheer luck, Ruzicka says.
Ruzicka’s advice: It’s better to start with a smaller data set that’s more reliable and useful than to start with a bigger data set of unknown value.
“Instead of being a pathological data hoarder, be someone who assembles the data and continuously goes through data spring cleaning at regular intervals,” he says. “As bad as it may be not to have any data, it’s just as bad, confusing, and expensive to have lots and lots of data and not make any use of it.
“So why not find that middle ground, where you iterate around data breadcrumbs that have correlations with each other, that bring value to each other, and then you purposefully build up that big database that you may ultimately end up with,” he continues. “Just find something of value and iterate from there, and over time you will answer the unknown unknowns that you were not even aware of in the beginning.”