August 5, 2013

Broken Premises: Why Models Fail and What to Do About It

Wendy Hou and Roxy Cramer

With the increasing variety, volume, and velocity of big data, business goals have become more ambitious, more complex, and larger in scope. This, in part, explains the growing movement toward scientific modeling approaches. Alongside the promise and potential of models, however, come the pitfalls of placing too much blind faith in their outputs. Models can fail suddenly and dramatically, or they may consistently underperform, leading to significant costs over time.

A recent example involves the models associated with Collateralized Debt Obligations (CDOs), or mortgage-backed securities. It is now known that these models grossly underestimated the risk of widespread default among the loans in a portfolio.

The models relied on assumptions that did not account for changing dynamics in the economy, but the continued boom in housing prices masked these inherent problems. In addition, the unstated assumption that the trend in prices would continue indefinitely was deeply flawed. When housing prices eventually corrected, the impact on the financial world and the general economy was devastating.

While no model is perfect and no amount of front-loaded testing can prevent every kind of breakdown, there are steps and safeguards that can be built into data analysis projects to provide ongoing protection against model failures. First and foremost, data analysis must be regarded as a cyclical process rather than as a linear, start-to-finish project of the kind often used to implement software applications.

The first crucial step in the process is understanding the business objectives. Business objectives provide the necessary context for the scientific models and can include:   

  • Develop market insights into the effectiveness of campaign activities;
  • Optimize product inventory levels to reduce overstocks and shortages; and
  • Reduce insurance fraud and speed up claim payments.

The next step is to identify what type of data is needed. Many projects involve data that is routinely collected, such as transactional data and customer information. For example, Amazon and other major internet retailers mine massive transactional databases, including clickstream and browsing histories, in order to personalize ads.

These customized recommendations help improve sales by predicting a customer’s preferences. Predictive analytics, a relatively mature science, is experiencing a modern boom with the surge of e-commerce and the growing abundance of data. Despite this abundance, every project has to contend with data quality and availability. For example, airlines are interested in counting how many people did not make a reservation because the ticket price was too high. Such data helps estimate demand and price elasticity in a given market.

However, since reservation channels do not generally record turndowns, airlines use demand from similar flights with lower-priced tickets as a proxy for the lost sales. Data proxies are common workarounds for unavailable data, but they must be carefully selected and monitored. For example, if a proxy flight undergoes a significant schedule change, forecasts based on that proxy may become less accurate. A routine test for large forecast errors, such as the one sketched below, would flag such problems.
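As a rough illustration, a monitoring routine of this kind might compare recent forecasts against actual demand and raise an alert when the average error grows too large. The Python sketch below uses the mean absolute percentage error with a 20 percent tolerance; the threshold, function name, and demand figures are illustrative assumptions, not part of any airline's actual system.

    import numpy as np

    def check_forecast_errors(actuals, forecasts, mape_threshold=0.20):
        """Flag a proxy-based forecast whose recent errors have grown too large.

        actuals, forecasts: observed and predicted demand over a monitoring
        window; mape_threshold is an illustrative tolerance.
        """
        actuals = np.asarray(actuals, dtype=float)
        forecasts = np.asarray(forecasts, dtype=float)
        # Mean absolute percentage error over the monitoring window.
        mape = np.mean(np.abs(actuals - forecasts) / np.abs(actuals))
        if mape > mape_threshold:
            # In practice this might send an alert or open a ticket; here
            # it simply reports that the proxy may need review.
            print(f"ALERT: MAPE {mape:.1%} exceeds {mape_threshold:.0%} -- "
                  "review the proxy flight (e.g., a recent schedule change).")
        return mape

    # Example: forecasts drift after the proxy flight's schedule changes.
    observed = [112, 98, 105, 120, 87, 60, 55]
    predicted = [110, 101, 103, 118, 110, 95, 90]
    check_forecast_errors(observed, predicted)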

After defining the business objectives and assessing the available data, the next step is to select an appropriate statistical model. Models can be very sensitive to changes in the data, so it is important to be aware of their limitations. Sources of uncertainty include bias and variance in the model estimates, as well as errors in the data or in numerical computation. Resampling methods such as the bootstrap and cross-validation can help quantify this uncertainty during model selection: the bootstrap resamples the data to obtain confidence bands for model parameters, while cross-validation repeatedly holds out portions of the data to estimate prediction error (see the sketch below).
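The sketch below illustrates both ideas on a toy linear model: the bootstrap resamples the data to produce a 95 percent confidence interval for the slope, and five-fold cross-validation estimates out-of-sample prediction error. The synthetic data and the simple model are assumptions made purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data standing in for, say, demand versus ticket price.
    x = rng.uniform(50, 150, size=200)
    y = 300 - 1.5 * x + rng.normal(0, 20, size=200)

    # --- Bootstrap: confidence band for the slope of a linear model ---
    slopes = []
    for _ in range(2000):
        idx = rng.integers(0, len(x), len(x))   # resample rows with replacement
        slope, _ = np.polyfit(x[idx], y[idx], 1)
        slopes.append(slope)
    lo, hi = np.percentile(slopes, [2.5, 97.5])
    print(f"Bootstrap 95% CI for slope: ({lo:.2f}, {hi:.2f})")

    # --- Five-fold cross-validation: out-of-sample prediction error ---
    folds = np.array_split(rng.permutation(len(x)), 5)
    errors = []
    for k, test in enumerate(folds):
        train = np.hstack([f for i, f in enumerate(folds) if i != k])
        coef = np.polyfit(x[train], y[train], 1)
        pred = np.polyval(coef, x[test])
        errors.append(np.mean((y[test] - pred) ** 2))
    print(f"Cross-validated MSE: {np.mean(errors):.1f}")

A wide confidence interval, or a cross-validated error much larger than the in-sample error, are both warnings that the model may be unstable or overfit.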

In statistical hypothesis testing, the hypotheses and level of significance are framed so that the most serious type of false inference is the one being controlled; a false negative for cancer, for example, is a more serious error than a false positive. By the same token, if serious potential model failures are identified, measures such as automated triggers for errors or outliers, like the one sketched below, can be implemented to detect problems proactively. The process is then ready for the next step: model deployment.
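An automated trigger of this kind can be as simple as a rule that flags any incoming value that deviates sharply from recent history. The sketch below uses a robust z-score based on the median and the median absolute deviation, so the check itself is not distorted by earlier outliers; the 3.5 cutoff and the claim amounts are illustrative assumptions rather than prescriptions.

    import numpy as np

    def outlier_trigger(history, new_value, z_cutoff=3.5):
        """Flag an incoming observation that deviates sharply from recent history.

        Uses a robust z-score (median and MAD); z_cutoff is an illustrative setting.
        """
        history = np.asarray(history, dtype=float)
        median = np.median(history)
        mad = np.median(np.abs(history - median))
        if mad == 0:
            return False  # no spread in the history; nothing to compare against
        robust_z = 0.6745 * (new_value - median) / mad
        return abs(robust_z) > z_cutoff

    # Example: a sudden spike in daily claim amounts trips the trigger.
    daily_claims = [1020, 980, 1055, 1010, 995, 1040, 1005]
    print(outlier_trigger(daily_claims, 1015))   # False -- within the normal range
    print(outlier_trigger(daily_claims, 4200))   # True  -- investigate before acting on the model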

Data analytics is an ongoing process: a cycle with specific stages meant to be repeated, monitored, and continually improved. The cycle begins with the business objectives, then moves through understanding and preparing the data, quantifying the model's uncertainty, weighing the errors, monitoring performance, and always validating the results against the business objectives.

With this type of process built into application development, users can better identify when models begin to fail and take corrective action, such as adjusting parameters, changing inputs, or building entirely new models with corrected assumptions. While the process may seem complicated, sophisticated, commercially available tools exist to help companies implement these steps. Had Lehman Brothers followed some, if not all, of these steps, it might have diversified before it was too late.

About the Authors:

Wendy Hou is an IMSL Numerical Libraries Product Manager at Rogue Wave. Roxy Cramer is an IMSL Numerical Libraries Product Statistician at Rogue Wave.
