March 16, 2020

Data Science Fails: Building AI You Can Trust

Industries from insurance and healthcare to banking and retail are aggressively working to integrate AI and machine learning models into their operations to maximize profits, reduce customer churn, operate more efficiently, and gain a significant advantage over competitors. However, before a business can make AI technology central to its success, it must first be able to trust the technology.

Regulate Your AI Bias

Businesses need to make sure that any AI solutions they implement are free from human biases and are built using best data science practices. To that end, watch for these common data science mistakes:

  • Don’t buy into hype: Make sure your data science team is not caught up in the hype of a new algorithm that is generating lots of buzz.
  • Choose the right AI model: Avoid algorithm bias by pitting a champion model against challenger models and letting measured performance decide which is the better option.
  • Leave presumption at the door: Don’t assume you know which algorithm is best for your data in advance.

Data scientists must understand that no single algorithm works best for every dataset. Too often, though, they fall back on the same small set of familiar models. A champion/challenger comparison, sketched below, lets the evidence pick the winner instead.
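As an illustration, the comparison can be as simple as scoring every candidate under an identical cross-validation protocol. This is a minimal sketch assuming a scikit-learn stack and synthetic data; the article does not prescribe any particular library or set of models.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data; swap in your own features and labels.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

candidates = {
    "champion: logistic regression": LogisticRegression(max_iter=1000),
    "challenger: random forest": RandomForestClassifier(random_state=0),
    "challenger: gradient boosting": GradientBoostingClassifier(random_state=0),
}

# Every candidate gets the same 5-fold evaluation; the data decides.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f} (std {scores.std():.3f})")
```

The point is procedural: the winner is whichever model the shared evaluation favors, not whichever one the team expected to win.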

Adopt Best Practices for AI

A Science article published in October 2019 highlights how data science fails can harm the healthcare system. The study described an algorithm that introduced bias into patient care by predicting healthcare costs rather than illness. When patients’ commercial risk scores were compared against their active chronic conditions and split by race, it became apparent that African-American patients received lower risk scores than equally sick white patients, and therefore insufficient care.
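To make the audit concrete, the pattern is to stratify risk scores by illness burden and compare them across groups. The sketch below is a hedged illustration only; the column names (race, n_chronic_conditions, risk_score) are hypothetical, not the study’s actual schema.

```python
import pandas as pd

def audit_scores_by_group(df: pd.DataFrame) -> pd.DataFrame:
    """Mean risk score per racial group, stratified by chronic-condition count."""
    return (
        df.groupby(["n_chronic_conditions", "race"])["risk_score"]
          .mean()
          .unstack("race")
    )

# Tiny synthetic example; a real audit would use the full patient file.
patients = pd.DataFrame({
    "race": ["black", "white", "black", "white"],
    "n_chronic_conditions": [3, 3, 1, 1],
    "risk_score": [0.42, 0.55, 0.20, 0.25],
})

# Systematically lower scores for one group at the same illness burden
# means the model is rationing care on predicted cost, not on need.
print(audit_scores_by_group(patients))
```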

This data science fail could have been avoided by abiding by the following practices:

  • Define your organization’s AI values: Establish the principles your organization holds most important and publish them as internal guidelines to inform the development and deployment of future AI models.
  • Build your own AIs: A black-box AI sourced from a third party might not share your company’s internal values. By building your own AIs, your organization can ensure that the decisions your models reach are explainable.
  • Be careful what you wish for: When an AI model delivers exactly the results you hoped for, treat that as a warning sign rather than a confirmation. To trust an AI, clearly define your goals, test the model’s behavior against them, and work to understand how it arrives at its decisions; one concrete probe is sketched after this list.
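One concrete way to probe how a model arrives at its decisions is permutation importance. The use of scikit-learn here is an assumption, since the article names no tooling; any comparable explainability tool would serve.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; substitute the model and data under review.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the score drop.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# Rank features by how much the model leans on them.
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance = {result.importances_mean[i]:.3f}")
```

Features whose shuffling most degrades held-out performance are the ones driving predictions; that ranking is what you check against the AI values defined above.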

Minimize Opportunities for Human Errors

Human beings build AI models, so the models inherit common human mistakes. Take the case of a 2015 study that concluded that family religious identification decreases children’s altruistic behaviors, that religiousness predicts parent-reported child sensitivity to injustice and empathy, and that children from religious households are harsher in their punitive tendencies. It turned out that a coding error was responsible for the misleading results: once the error was fixed, country of origin, not religion, emerged as the strongest predictor of the outcomes.
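To make the failure mode concrete: one classic coding error of this kind is treating a categorical variable, such as country of origin, as a continuous number. The snippet below is purely illustrative and is not the study’s actual code.

```python
import pandas as pd

df = pd.DataFrame({"country": ["USA", "Canada", "China", "Jordan"]})

# Buggy pattern: categories become arbitrary integers, so a linear model
# treats the gap between any two countries as a meaningful quantity.
df["country_as_number"] = df["country"].astype("category").cat.codes + 1

# Correct pattern: one indicator column per country, no implied ordering.
dummies = pd.get_dummies(df["country"], prefix="country")
print(df.join(dummies))
```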

  • Watch for human mistakes: Typos and coding slips creep in wherever processes are manual. Automated machine learning can reduce dependence on hand-written scripts and provide guardrails that flag possible errors, as in the sketch below.
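A few cheap, automated checks run before training can serve as such guardrails. The rules and column names below are hypothetical, chosen only to show the pattern.

```python
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality problems instead of training blindly."""
    problems = []
    if df["age"].lt(0).any() or df["age"].gt(120).any():
        problems.append("age outside plausible range 0-120")
    if df["risk_score"].isna().any():
        problems.append("missing risk_score values")
    if df.duplicated().any():
        problems.append("duplicate rows")
    return problems

# A manual-entry typo (-7) is caught before it can skew a model.
records = pd.DataFrame({"age": [34, -7, 52], "risk_score": [0.2, 0.8, None]})
for problem in validate(records):
    print("GUARDRAIL:", problem)
```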

Don’t let data science fails happen on your watch. Get your AI and machine learning ambitions right by learning what could go wrong. Download DataRobot’s whitepaper, Data Science Fails: Building AI You Can Trust, to prevent AI bias and implement trustworthy AI.

 
