November 29, 2022

Solving for Data Drift from Class Imbalance with Model Monitoring

Krishnaram Kenthapadi


The amount of data created within the next five years will total twice the amount ever produced, according to IDC. Not only will this data hold a wealth of knowledge and insights that businesses can leverage to enhance decision-making, but it will allow enterprises to grow their bottom lines and create value where it is needed most. Unfortunately, large datasets can be unruly and unstructured, making them hard to maintain and monitor for quality assurance. And when it comes time to identify a unique trend within that data, it can be like finding a needle in a haystack.

The good news is that advancements in artificial intelligence (AI) technology are making it easier for companies to analyze and draw inferences from patterns in these vast datasets. More specifically, machine learning (ML) systems are helping businesses forecast market trends, personalize advertisements, and approve customer financing, among countless other use cases.

But ML is not immune to errors. Models degrade over time and become less accurate, often because of data drift: a change in the distribution of the data a model sees in production relative to the data it was trained on. To address this, businesses are implementing model monitoring to check for data drift continuously and make certain that models reflect the most accurate data available.
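
To make the idea concrete, here is a minimal sketch of a drift check on a single model input, assuming a reference sample (for example, drawn from training data) and a recent production sample; the feature values and threshold are illustrative and are not taken from the article.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical feature values: the reference sample reflects what the model was
# trained on, while the production sample has shifted in mean and spread.
reference = rng.normal(loc=0.0, scale=1.0, size=10_000)
production = rng.normal(loc=0.5, scale=1.2, size=10_000)

# A two-sample Kolmogorov-Smirnov test compares the two distributions.
result = ks_2samp(reference, production)
if result.pvalue < 0.01:  # illustrative alerting threshold
    print(f"Possible data drift: KS statistic={result.statistic:.3f}, p={result.pvalue:.2e}")
```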

However, detecting data drift is more complicated in cases where there is a high imbalance in class occurrences, known as “class imbalance.” Model training data must include the ground truth for the expected classes. In use cases with class imbalance, one or more of these classes (the minority class) occurs substantially less often than the others (the majority class). So for an ML model trying to predict fraudulent transactions, of which there are typically only a handful among hundreds, this corresponds to very few cases of fraud (positive examples) in a sea of non-fraudulent cases (negative examples). Monitoring models with class imbalance like this is crucial because AI operations can only be successful if companies know that all of their models are resilient, meaning they can respond to data shifts even in these less frequently occurring classes.
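
As a rough illustration of the numbers involved, the snippet below counts positive and negative examples in a hypothetical labeled transaction set; the 0.5% fraud rate is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical labels: 1 = fraud (positive, minority), 0 = legitimate (negative, majority).
labels = rng.choice([0, 1], size=100_000, p=[0.995, 0.005])

n_fraud = int(labels.sum())
n_legit = len(labels) - n_fraud
print(f"fraud: {n_fraud}, legitimate: {n_legit}, "
      f"roughly 1 positive per {n_legit // max(n_fraud, 1)} negatives")
```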


Let’s dive into the importance of detecting data drift from class imbalance and uncover the ways ML teams can better monitor for it.

Understanding Class Imbalance

Class imbalance is fairly common and impacts companies across many industries, including finance, retail, manufacturing, and education. The term refers to a dataset with skewed class proportions, and those proportions can change over time. One challenge for MLOps teams is building models that continually adapt as incoming data streams evolve and older data becomes outdated. This is where detecting data drift comes into play.

While there are many drift detectors available, most assume they are dealing with roughly balanced classes. As a result, imbalanced data streams pose a problem: detectors become biased toward the majority classes and ignore changes happening in the minority groups. Furthermore, class imbalance can shift gradually, and classes may even change roles (e.g., majorities become minorities and vice versa). Picking up on these changes is paramount.
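
A small sketch of why this happens, using the Population Stability Index (PSI, a common drift score) on a single feature: the overall score barely moves when only the 1% minority class shifts, while the same score computed on the minority slice alone clearly flags the change. The class mix, the distributions, and the PSI helper are illustrative assumptions rather than any particular detector’s implementation.

```python
import numpy as np

def psi(reference, production, bins=10, eps=1e-6):
    """Population Stability Index between two samples of one numeric feature."""
    edges = np.histogram_bin_edges(np.concatenate([reference, production]), bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference) + eps
    prod_pct = np.histogram(production, bins=edges)[0] / len(production) + eps
    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

rng = np.random.default_rng(42)

# Reference window: 99% legitimate (feature ~ N(0, 1)) and 1% fraud (feature ~ N(2, 1)).
ref_legit, ref_fraud = rng.normal(0, 1, 99_000), rng.normal(2, 1, 1_000)

# Production window: the majority is unchanged, but the fraud pattern has shifted.
prod_legit, prod_fraud = rng.normal(0, 1, 99_000), rng.normal(4, 1, 1_000)

overall = psi(np.concatenate([ref_legit, ref_fraud]),
              np.concatenate([prod_legit, prod_fraud]))
minority_only = psi(ref_fraud, prod_fraud)

print(f"overall PSI:        {overall:.3f}")        # stays small: the majority dominates
print(f"minority-class PSI: {minority_only:.3f}")  # large: real drift in the fraud slice
```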

Models for fraud detection, for example, need to uncover drift in the minority class, since fraud events happen sporadically. Failing to identify unusual drift in minority classes can cost companies millions of dollars. Despite its lower frequency, the minority class typically has an outsized impact on the business, and even a small shift in fraud patterns can change business outcomes. Therefore, ML workflows need to account for this data imbalance across all stages of the model development lifecycle.

Monitoring for Drift in Class Imbalance

In order for ML teams to detect and solve for drift in class imbalance, they’ll need to implement one or more of the following techniques:


  • Segment based on prediction score, where the reference and production histograms are generated by segmenting on the prediction score for each class, along with other relevant model inputs.
  • Segment on ground truth, where the user defines segments based on labels and computes model drift within each segment. Segmenting on ground truth allows teams to evaluate concept drift over time.
  • Weight model predictions, where teams undo the effect of the imbalance in the reference data by weighting each event in the production data with a scale factor tied to its predicted class (see the sketch after this list). Because of its simplicity, predictability, and global visibility, weighting model predictions often yields the best results. It allows users to easily configure drift measurement that is sensitive to the parts of the distribution that matter most for a given application.
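
One possible shape for the third technique, weighting by predicted class, is sketched below: rare predicted-fraud events are up-weighted before the drift score is computed, so the minority class is no longer drowned out. The helper, the PSI formula, and the per-class scale factors are illustrative assumptions, not a vendor implementation.

```python
import numpy as np

def class_weighted_psi(ref_values, ref_pred, prod_values, prod_pred,
                       class_weights, bins=10, eps=1e-6):
    """PSI between reference and production feature histograms, with every event
    weighted by a scale factor for its predicted class so that minority-class
    events carry comparable weight to majority-class events."""
    edges = np.histogram_bin_edges(np.concatenate([ref_values, prod_values]), bins=bins)
    w_ref = np.asarray([class_weights[c] for c in ref_pred])
    w_prod = np.asarray([class_weights[c] for c in prod_pred])
    ref_hist = np.histogram(ref_values, bins=edges, weights=w_ref)[0]
    prod_hist = np.histogram(prod_values, bins=edges, weights=w_prod)[0]
    ref_pct = ref_hist / ref_hist.sum() + eps
    prod_pct = prod_hist / prod_hist.sum() + eps
    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

# Predicted fraud (class 1) is ~1% of events, so it gets a ~100x weight to undo
# the imbalance before drift is measured.
weights = {0: 1.0, 1: 100.0}

rng = np.random.default_rng(7)
ref_vals = np.concatenate([rng.normal(0, 1, 9_900), rng.normal(2, 1, 100)])
prod_vals = np.concatenate([rng.normal(0, 1, 9_900), rng.normal(4, 1, 100)])
pred_classes = np.array([0] * 9_900 + [1] * 100)  # same predicted-class layout in both windows

print(f"class-weighted PSI: "
      f"{class_weighted_psi(ref_vals, pred_classes, prod_vals, pred_classes, weights):.3f}")
```

Because the rare class is re-weighted before the histograms are compared, the shift in the fraud slice now dominates the score instead of being averaged away by the unchanged majority.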

To bring these techniques to life, organizations are turning to Model Performance Management (MPM) technologies. MPM tracks and monitors the performance of ML models through all stages of the model lifecycle. Similar to a centralized control system, MPM is at the heart of the ML workflow, tracking and examining model performance to alert ML engineers of any errors or potential issues that need to be addressed.
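
In practice, that monitoring loop amounts to recomputing a drift score for each new production window and alerting when it crosses a threshold. The sketch below shows the pattern in generic terms; the window source, feature name, and threshold are assumptions and do not reflect any particular MPM product’s API.

```python
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # illustrative alerting threshold

def check_window(reference, window, feature):
    """Compare one production window against the reference sample and alert on drift."""
    result = ks_2samp(reference, window)
    if result.pvalue < DRIFT_P_VALUE:
        # In a real system this would page an engineer or open a ticket.
        print(f"ALERT: drift detected on '{feature}' "
              f"(KS={result.statistic:.3f}, p={result.pvalue:.1e})")

# Hypothetical production windows of one model input: two stable windows and one shifted window.
rng = np.random.default_rng(3)
reference = rng.normal(0, 1, 50_000)
for shift in (0.0, 0.0, 0.8):
    check_window(reference, rng.normal(shift, 1, 5_000), feature="transaction_amount")
```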

Improving Decision-Making Clarity Through Robust Model Monitoring

Monitoring models with class imbalance is of particular importance because detecting even the smallest instances of data drift in imbalanced datasets can save organizations millions of dollars. Teams that monitor for class imbalance and nuanced model drift using the approaches described above will ultimately achieve the highest degree of accuracy and fairness, leading to better outcomes for the organization. These outcomes also serve users and customers well, ensuring that no group is unjustly elevated above the rest.

About the author: Krishnaram Kenthapadi is the Chief Scientist of Fiddler AI, an enterprise startup building a responsible AI and ML monitoring platform. Previously, he was a Principal Scientist at Amazon AWS AI, where he led the fairness, explainability, privacy, and model understanding initiatives on the Amazon AI platform. Prior to joining Amazon, he led similar efforts on the LinkedIn AI team and served as LinkedIn’s representative on Microsoft’s AI and Ethics in Engineering and Research (AETHER) Advisory Board.

Related Items:

Staying On Top of ML Model and Data Drift

Keeping Your Models on the Straight and Narrow

Keeping on Top of Data Drift
