May 31, 2017

Data Quality Remains Low, Report Finds


Even as corporate spending on data collection and analytics increases, confidence in the quality of that data is decreasing, as CEOs worry that messy data and questionable results will hamper future technology efforts centered on artificial intelligence and the Internet of Things, a new report warns.

The report released Wednesday (May 31) by Forbes Insights and KPMG International found that fully 84 percent of CEOs interviewed expressed concern about the quality of the data used to make strategic decisions. That lack of confidence is driven by data silos as well as the inability to integrate massive volumes of legacy data, the report found.

“Poor-quality data is a huge problem,” Bruce Rogers, chief insights officer at Forbes Media, noted in a statement releasing the report. “It leaves many companies trying to navigate the Information Age in the equivalent of a horse and buggy.”

The report draws a parallel between data and oil. “Data today is often compared with oil, as in its raw form its uses are limited. It is through refinement that oil becomes useful as kerosene, gasoline and other goods.”

Similarly, the report notes, “it is through the refinement process of cleansing, validation, de-duplication and ongoing auditing that data can become useful in the kinds of advanced analytics that are starting to shape our world.”

The costs associated with poor-quality data are growing. Market analyst Gartner Inc. estimates the average financial impact of iffy data at about $9.7 million annually.

Among the benefits of improving data quality is avoiding millions of dollars in fines for failing to comply with data governance regulations. Compliance issues have grown with evolving data privacy and governance rules, especially for global financial services firms, the report notes, placing a higher premium on data quality.

Other costs associated with poor data quality range from lost sales due to inaccurate customer data to reputational damage.

The report also found that 41 percent of large enterprises surveyed by KPMG said they are making data quality and analytics a priority. Those concerted efforts are seen as a way of improving decision-making, thereby reassuring data skeptics. “When outputs are reliable, guesswork and risk in decision-making can be mitigated,” the survey concludes.

Data discovery and inventorying are among the first steps toward improving data quality. These involve determining what types of data an enterprise owns, where they are stored and in what format. The report then recommends validating and standardizing records along with data cleansing.
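As a rough illustration of what validation, standardization and cleansing can look like in practice, here is a minimal Python sketch using pandas; the column names, formats and rules are hypothetical examples, not anything prescribed by the report.

```python
import pandas as pd

# Hypothetical customer records showing the kinds of defects the report describes:
# inconsistent formatting, missing values and invalid entries.
records = pd.DataFrame({
    "customer_id": [101, 102, 103, 104],
    "email": ["ANA@EXAMPLE.COM", "bob@example.com", None, "not-an-email"],
    "state": ["ca", "NY", " ny ", "CA"],
})

# Standardize: trim whitespace and normalize case so equivalent values match.
records["email"] = records["email"].str.strip().str.lower()
records["state"] = records["state"].str.strip().str.upper()

# Validate: flag records whose email does not match a simple pattern.
valid_email = records["email"].str.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", na=False)

# Cleanse: route invalid or incomplete rows to a review queue instead of
# feeding them into downstream analytics.
clean = records[valid_email]
needs_review = records[~valid_email]

print(clean)
print(needs_review)
```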

The final step involves benchmarking and data auditing, described as an ongoing process as enterprises shift from batch processing to real-time validation and de-duplication of repetitive data.
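In the same spirit, a simple sketch of what ongoing de-duplication and auditing might look like, again assuming hypothetical pandas DataFrames and field names rather than any method described in the report:

```python
import pandas as pd

def audit(df: pd.DataFrame) -> dict:
    """Report a few basic data quality metrics to benchmark over time."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "completeness_pct": round(100 * df.notna().mean().mean(), 1),
    }

def deduplicate(df: pd.DataFrame, key: str) -> pd.DataFrame:
    """Keep the most recently updated record for each key."""
    return (df.sort_values("updated_at")
              .drop_duplicates(subset=key, keep="last"))

customers = pd.DataFrame({
    "customer_id": [101, 101, 102],
    "email": ["ana@example.com", "ana@example.com", None],
    "updated_at": ["2017-03-20", "2017-03-20", "2017-02-11"],
})

print(audit(customers))   # before: duplicate rows and missing values present
customers = deduplicate(customers, key="customer_id")
print(audit(customers))   # after: one row per customer; re-run periodically
```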

“It’s not one and done,” noted Aaron Wallace, principal product manager for customer information management at Pitney Bowes (NYSE: PBI), which co-sponsored the data quality report.

Recent items:

Data Quality Trending Down? C’est La Vie

Big Data Fabrics Emerge to Ease Hadoop Pain
