March 25, 2020

‘Inclusive’ Approach Seen as Key to Trusted AI


Trustworthy AI for enterprise applications remains elusive, frequently due to poor-quality or siloed data. The result is little confidence in early AI deployments, a vendor survey found.

Dataiku, the enterprise AI platform specialist, used the results of its survey of about 400 data scientists, analysts and AI application users to make the case for “inclusive AI.” That approach not only democratizes data but also improves its quality and, with it, the potential for scaling AI projects and earning application users’ trust.

The survey found that 52 percent of respondents have frameworks in place to help ensure data quality in hopes of developing trusted AI applications that scale. Along with trusted data, key considerations for machine learning developers include AI explainability and ethical use of algorithms.

The problem begins there, the survey found, with 57 percent of respondents saying they were unsure how machine learning projects were being used and whether they were being applied responsibly. Thirty-five percent said they were working to ensure ethical use of enterprise AI projects.

“Trust in AI projects will continue to present significant challenges if we are still tackling fundamental issues such as data quality, as well as more complex problems associated with ethics,” said Dataiku CEO Florian Douetteau.

Dataiku and other platform vendors pitching tools designed to allow data scientists and analysts to use “dirty” data in machine learning projects assert that trusted AI depends on improving data quality. “This starts with trust in the data itself that is being used in AI systems,” Douetteau said. “Data quality is one of the most basic but most important hurdles to overcome in the path to building sustainable AI that will bring business value, not risk.”
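As a rough illustration of the kind of basic data-quality checks such frameworks might run before data reaches a model, here is a minimal sketch in Python. It assumes a pandas DataFrame and an illustrative file name; it is not drawn from Dataiku’s platform or APIs.

import pandas as pd

def basic_quality_report(df: pd.DataFrame) -> dict:
    """Summarize simple data-quality signals for a dataset before it feeds a model."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        # Share of missing values per column, worst offenders first
        "missing_share": df.isna().mean().sort_values(ascending=False).to_dict(),
    }

# Hypothetical usage: flag a dataset that looks too incomplete to trust
df = pd.read_csv("customer_records.csv")  # illustrative file name
report = basic_quality_report(df)
if any(share > 0.2 for share in report["missing_share"].values()):
    print("Warning: at least one column is more than 20% missing")

Checks like these are deliberately simple; the point is that trust in downstream AI output starts with measurable signals about the input data.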

It’s akin to frequent hand-washing in a pandemic.

For now, most organizations appear to be falling short of those requirements. The Dataiku survey found that just 11 percent of potential AI application users, defined as “non-manager, non-technical” employees, thought the technology would transform their jobs. The company said that represents a much lower percentage than among more sanguine senior-level managers.

Overcoming the pervasive AI trust gap starts with a combination of improving data quality while expanding access across organizations. That starts with greater collaboration in AI development in order to take advantage of a company’s diverse skill sets and use cases. “There are clearly some questions around trust, responsibility and inclusivity which need addressing before AI can have the optimal result,” Douetteau said.

The company’s Data Science Studio platform scored well in recent rankings of machine learning tools, cited in a Gartner survey for fostering collaboration among data engineers and scientists. Gartner also lauded the automation the platform brings to machine learning workflows, as well as the management and monitoring of models once they’re in production.

Dataiku was founded in 2013 to address what Douetteau said was “the fragmented data science ecosystem [by promoting] collaboration amongst users and to navigate the journey to successfully implementing enterprise AI.”

Recent items:

AI Bias a Real Concern in Business, Survey Says

The ‘Big Bang’ of Data Science and ML Tools
