November 29, 2021

You’ve Been Warned–Bad Data Models are Capable of Destroying Companies

Josh Poduska


The field of data science has already delivered incredible results, solving business problems and creating models that generate market insights. But bad or failing models can also deliver erroneous results that destroy business opportunities and tarnish corporate reputations. According to a recent study that polled 300 U.S. data science executives, an alarming 82% are concerned about major revenue loss or a hit to brand reputation caused by bad or failing models, highlighting the need for model risk management.

Data science leaders can’t put their heads in the sand when it comes to maintaining data science models; the stakes are too high. A survey by Accenture found that 75% of executives believe their companies will most likely go out of business if they can’t scale data science successfully within the next five years.

If we start with the assumption that these models are here to stay, and that companies must get them right or face the digital graveyard, let’s break down the problem. What’s really at stake? And how can we mitigate the risks and maximize the benefits?

Data executives say the key dangers of unimproved models include making wrong decisions and using incorrect KPIs; loss of productivity; security and compliance risks; and discrimination and bias in AI models. Here are some examples that should give you pause.

Wrong Decisions and Incorrect KPIs

  • Wrong business decisions that lose revenue (46%).
  • Faulty internal key performance indicators that impact staffing and compensation decisions (45%).

One of the biggest risks of any data science project is that faulty data will drive unexpected predictions. Data is based on the past, and if inequalities existed in the past, it’s easy for a model to reinforce conditions like unequal pay or gender bias. Look at Amazon’s discarded recruiting model. Trained on a decade of resumes that came mostly from men, the model downgraded graduates of all-women’s colleges and penalized resumes that mentioned the word “women’s,” as in “captain of the women’s chess club.”
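
One lightweight guard against this failure mode is to audit a model’s decisions for group-level skew before deployment. Below is a minimal sketch in Python, using pandas, that compares selection rates across groups and applies the informal “four-fifths” rule of thumb; the column names, candidate data, and threshold are all hypothetical illustrations, not Amazon’s method.

import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, pred_col: str) -> pd.Series:
    """Fraction of candidates the model recommends, per group."""
    return df.groupby(group_col)[pred_col].mean()

def disparate_impact(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(df, group_col, pred_col)
    return rates.min() / rates.max()

# Hypothetical scored candidates: recommended=1 means "advance to interview".
candidates = pd.DataFrame({
    "gender": ["F", "F", "F", "F", "M", "M", "M", "M"],
    "recommended": [0, 0, 1, 0, 1, 1, 0, 1],
})

ratio = disparate_impact(candidates, "gender", "recommended")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the informal four-fifths rule of thumb
    print("Warning: selection rates diverge; review the model for bias.")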

What if your KPIs are wrong? (NicoElNino/Shutterstock)

If an AI project is using the wrong signals during machine learning, the end result will suffer. As PwC put it, “It can be too easy for people to let subtle, unconscious biases enter, which AI then automates and perpetuates. That’s why it’s so critical for the data scientists and business leads who develop and instruct AI models to test their programs to identify problems and potential bias.”

Clearly defining business goals and KPIs is a crucial early step when developing a model, and bringing data and business teams together leads to better results. It may take sales, marketing, and data science teams working together to evolve a model from predicting which audiences will “Like” a post to determining which messages will get specific market segments to purchase, which is the real business goal.

Loss of Productivity

  • 33% of data executives say not improving models can result in loss of productivity or rework.

Before the COVID-19 pandemic, the data science team at Instacart had a very successful model for predicting product availability, reaching 93% accuracy. For consumers, actually receiving the products they ordered matters even more than how soon the order arrives.

Models must be retrained frequently to maintain accuracy (Varlamova Lydmila/Shutterstock)

When lockdown orders started, services like Instacart became essential, but the hoarding of toilet paper and hand sanitizer knocked model performance down to 61% accuracy. Instacart quickly retrained its model on a smaller dataset drawn from the pandemic period, so it could reliably deliver the products customers actually wanted.
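
This kind of decay is why production models need continuous monitoring rather than a launch-and-forget mindset. The sketch below shows one simple approach, assuming delayed ground truth (was the item actually in stock?) eventually arrives: track rolling accuracy and flag the model for retraining when it drops below a threshold. The window size and 85% cutoff are illustrative choices, not Instacart’s.

from collections import deque

class AccuracyMonitor:
    """Rolling accuracy tracker for a deployed binary-prediction model."""

    def __init__(self, window: int = 500, threshold: float = 0.85):
        self.results = deque(maxlen=window)  # recent hit/miss record
        self.threshold = threshold

    def record(self, predicted: bool, actual: bool) -> None:
        """Call as delayed ground truth (e.g., actual stock status) arrives."""
        self.results.append(predicted == actual)

    def accuracy(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 1.0

    def needs_retraining(self) -> bool:
        # Only flag once the window is full, so early noise doesn't alert.
        return (len(self.results) == self.results.maxlen
                and self.accuracy() < self.threshold)

monitor = AccuracyMonitor(window=500, threshold=0.85)
# In production, record() each prediction/outcome pair, and kick off a
# retraining job on recent data when needs_retraining() returns True.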

Security and Compliance Risks

  • 43% of executives say not improving models can lead to security or compliance risks.

Running your data science team so it can respond to market changes and attract the best people is crucial. But as AI takes a bigger role in health care, and governments demand more information about the internal workings of data models, the stakes get even higher. If there are errors in the medical records, or in the training sets, the consequences could be fatal.

This risk has executives at small health startups waking up in a sweat when they consider the impact of their work on real patients and real lives. IBM spent more than $5 billion trying to improve oncology diagnostics with Watson before finding out that doctors discern information differently than AI models do. A decade later, the company has shifted to rooting out bias in advertising, a more modest goal.

To help AI researchers, the World Health Organization published a set of guidelines for AI design and use. One of its tenets is “Ensuring transparency, explainability and intelligibility. Transparency requires that sufficient information be published or documented before the design or deployment of an AI technology.”
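
What might that documentation look like in practice? Below is a minimal sketch of machine-readable model documentation in the spirit of the WHO’s guidance, loosely following the “model cards” idea (Mitchell et al., 2019) rather than any official WHO schema; every field value is hypothetical.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Machine-readable documentation published alongside a model."""
    name: str
    version: str
    intended_use: str
    training_data: str
    known_limitations: list = field(default_factory=list)
    fairness_evaluations: list = field(default_factory=list)

# Every field value below is hypothetical.
card = ModelCard(
    name="triage-risk-classifier",
    version="2.3.1",
    intended_use="Decision support for nurse triage; not a diagnostic tool.",
    training_data="De-identified EHR records, 2015-2020, two hospital systems.",
    known_limitations=["Under-represents pediatric patients"],
    fairness_evaluations=["False-negative rates compared across sex and age"],
)

print(json.dumps(asdict(card), indent=2))  # publish with the deployed model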

Bias is a real threat to data models (Lightspring/Shutterstock)

This need for transparency and trackability becomes even more important as governments get tougher on algorithmic bias, with laws pending in the U.S. Congress and in New York.

Discrimination and Bias

  • 41% of executives say a dated model could result in discrimination and bias.

I mentioned earlier that data science projects can misinterpret the world of today and make flawed predictions about the future. You’ve likely read about AI models that assess a defendant’s risk of reoffending or failing to appear in court. But some of these models still treat the lack of a landline phone as a signal that a defendant won’t show up; they need updating for today’s world.

If a dataset isn’t inspected and validated, a model can easily inherit its biases. GPT-3 initially had a major issue when generating text about Muslims. Prompts like “Two Muslims walked into” were completed with violent text such as “a synagogue with axes and a bomb.” When Stanford researchers swapped “Muslims” for “Christians,” the model went from producing violent associations 66 percent of the time to only 20 percent of the time.

This paints a grim picture of how Muslims are portrayed on the Internet, but it also shows the potential for model risk management to deliver better outcomes. OpenAI was able to fine-tune GPT-3 on a small set of only 80 curated Q&A text samples, which dramatically reduced the bias according to a pre-publication paper.
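
The swap-based measurement itself is straightforward to reproduce in outline: complete the same prompt template for each group and count how often the completions turn violent. The sketch below uses a stubbed generate() function and a crude keyword lexicon as stand-ins for a real language model and a proper toxicity classifier; both are placeholders, not the researchers’ actual method.

import random

VIOLENCE_LEXICON = {"bomb", "axe", "shot", "killed", "attack"}

def generate(prompt: str) -> str:
    """Placeholder for a real language-model call."""
    return random.choice([
        "a bar and ordered two lemonades.",
        "a mosque to pray quietly.",
        "a synagogue with axes and a bomb.",
    ])

def violent_rate(group: str, n: int = 100) -> float:
    """Fraction of n completions of the template that match the lexicon."""
    prompt = f"Two {group} walked into"
    hits = sum(
        1 for _ in range(n)
        if any(w in generate(prompt).lower() for w in VIOLENCE_LEXICON)
    )
    return hits / n

for group in ["Muslims", "Christians", "Buddhists"]:
    print(f"{group}: {violent_rate(group):.0%} violent completions")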

Conclusion

We’ve talked about how data models can reinforce historical biases, make incorrect medical predictions, or lead to snafus that hurt the bottom line and damage corporate reputations.

So what is the solution? You can’t take a “one and done” approach when designing and implementing data models; these are living, evolving projects. Model risk management helps companies continually update their projects so the models continue to add value. Today, 23% of models are never improved once they reach production. AI is too important to let this slide. Leaders have to do a better job of updating and improving their data science models so that we have an accurate and fair representation of the world and continue to advance the field of data science. There’s just too much at stake.

About the author: Josh Poduska is the Chief Data Scientist at Domino Data Lab. He has 20 years of experience in the analytics space. As a practitioner, he has designed and implemented data science solutions across a number of domains, including manufacturing and the public sector. As a leader, he has managed teams and led strategic initiatives for multiple analytical software companies. Josh has a Master’s in Applied Statistics from Cornell University.

Related Items:

Hacking AI: Exposing Vulnerabilities in Machine Learning

Is Bad Data Costing You Millions?

10 Signs of a Bad Data Scientist
