August 28, 2018

Weathering the Next ‘Great’ Storm in the Market

Bill McCoy and Henri Waelbroeck


This is not an article about the joys and wonders of machine learning or big data. Instead, it is about a pernicious, unsolved problem in investment finance that we continue to ignore, and for which machine learning and big data may offer one possible solution.

Every time there is a correction in the stock market, or a recession in the economy, there are doomsayers who proclaim that the inevitable next step is a “great” recession, or perhaps even a depression. While corrections and recessions may lead to financial crises, a more certain intermediate step is the collective fear that all market participants are (or at least may be) tainted, a fear that depresses trading volumes and liquidity at precisely the moment when everyone is trying to rebalance their portfolios.

This vicious cycle could, of course, be broken if no one had to trade in that moment; however, regulations designed to stave off further pain can sometimes force trading under exactly such unfavorable conditions. This kind of forced trading in a crisis, an unfortunate side effect of well-intentioned regulations, is what we mean by (il)liquidity risk: portfolio managers must bring their holdings into a new definition of compliance just when liquidity has dried up.


Unfortunately, the current state of liquidity risk modeling is not up to the task of anticipating such events and informing decision making. The definition of the problem varies from participant to participant. The assumptions of linearity and normal probability distributions just aren’t accurate. The data itself is often hidden or unavailable.

Fortunately, this is not a call to arms to begin stockpiling supplies for some impending doomsday. Work is underway to modify analytical models to better capture joint tail dependencies, including academic proposals such as nested factor models, and enhancements to commercial risk models, including some that use non-pseudo-elliptical copula models to simulate the effect of market turbulence on fat-tail risk.
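
To make the fat-tail point concrete, here is a minimal, self-contained sketch (a toy illustration of the general idea, not any particular vendor's model) comparing a Gaussian dependence assumption with a fat-tailed Student-t alternative. At the same marginal 1% threshold, the fat-tailed model produces many more days on which both assets crash together, which is exactly the joint tail dependence that elliptical models understate.

```python
# Toy comparison of joint tail behavior under Gaussian vs. Student-t dependence.
# Illustrative only: two assets, a fixed correlation, no calibration to real data.
import numpy as np

rng = np.random.default_rng(0)
n, rho, nu = 500_000, 0.6, 3                      # samples, correlation, t degrees of freedom
chol = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

# Gaussian dependence: jointly normal shocks with correlation rho
z_gauss = rng.standard_normal((n, 2)) @ chol.T

# Student-t dependence: scale the same shocks by a shared chi-square mixing variable,
# which makes extreme moves in the two assets tend to happen together
w = nu / rng.chisquare(nu, size=(n, 1))
z_t = z_gauss * np.sqrt(w)

def joint_crash_rate(z, q=0.01):
    """Fraction of observations in which BOTH assets fall below their own q-quantile."""
    thresholds = np.quantile(z, q, axis=0)
    return np.mean((z[:, 0] < thresholds[0]) & (z[:, 1] < thresholds[1]))

print("Gaussian joint 1% crash rate: ", joint_crash_rate(z_gauss))
print("Student-t joint 1% crash rate:", joint_crash_rate(z_t))
```

Because each asset is compared against its own 1% quantile, any difference in the joint crash rate comes purely from the dependence structure, which is the part of the model the copula work above is trying to get right.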

Another part of the solution is the development of powerful machine learning (ML) methods that combine many sources of information into, for example, an estimate of the probability of future events that the market might not be accounting for. Even if improved analytical models can predict the outcome of a given scenario, that capability is of little use unless we also know which scenarios to worry about and how likely each one is.

Big data has permeated the financial industry in many areas: generating alpha models, optimizing trade execution, and estimating news relevancy, to name just a few. Thus far, however, it has not had much impact on risk models. A better understanding of risk as a predictive methodology is required before the next stampede; we will argue here that recent advances in machine learning and big data are just what is needed to accomplish this end.

Big data techniques enable systematic screening for relevant information to predict the relative likelihood of various scenarios. One problem that complicates the application of machine learning in finance is that the underlying system changes over time. This problem is called “concept drift” in the machine learning world, and it affects conventional risk models as well as ML models.
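
As a simple illustration of concept drift (our own toy example on simulated data, not a production risk model): a coefficient estimated once on early history goes stale when the underlying relationship shifts, while a rolling re-estimate tracks the change.

```python
# Toy example of concept drift: the "true" coefficient linking a signal to returns
# flips halfway through the sample, and a model fit once on early data degrades.
import numpy as np

rng = np.random.default_rng(1)
T = 2_000
x = rng.standard_normal(T)                          # some predictive signal
beta = np.where(np.arange(T) < 1_000, 0.8, -0.4)    # regime change at t = 1000
y = beta * x + 0.1 * rng.standard_normal(T)         # observed response

def fit_slope(x, y):
    """Ordinary least squares slope with no intercept."""
    return np.dot(x, y) / np.dot(x, x)

static_slope = fit_slope(x[:500], y[:500])          # fit once on early history
window = 250
for start in range(0, T, window):
    sl = slice(start, start + window)
    rolling_slope = fit_slope(x[sl], y[sl])
    static_mse = np.mean((y[sl] - static_slope * x[sl]) ** 2)
    print(f"t={start:4d}-{start + window:4d}  rolling slope={rolling_slope:+.2f}  "
          f"static-model error={static_mse:.3f}")
```

The static model's error jumps after the regime change while the rolling estimate follows the new coefficient; features whose meaning survives such shifts are the "resilient" ones discussed next.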

Fortunately, recent developments in ML applications, such as alpha profiling in trade execution, have led to techniques that help identify features that are more resilient to concept drift, leading to improved generalization power. In addition, coupling scenario probability estimation to scenario-specific coefficient estimates can yield a class of models that is effectively able to automatically “switch” between different behaviors as potentially catastrophic events unfold. Algorithm switching is now a well-established technique in institutional trade execution, for example. Absent a “theory of everything,” perhaps what is most needed is a diverse ecosystem of models together with an understanding of each model’s validation domain.
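
As a rough sketch of that switching behavior (an illustration of the general concept, not the authors' or any vendor's method), an estimated scenario probability can gate between two scenario-specific coefficient sets, so the blended model changes its behavior as the estimated crisis probability rises:

```python
# Toy "switching" model: a logistic estimate of the probability of a stress scenario
# blends calm-regime and crisis-regime coefficient sets. All numbers are made up.
import numpy as np

beta_calm   = np.array([0.9, 0.1])     # illustrative loadings (e.g. market beta, liquidity cost)
beta_crisis = np.array([1.6, 2.5])     # the same loadings under a liquidity-crisis scenario

def crisis_probability(features, w, b=-2.0):
    """Toy logistic estimate of the probability of the crisis scenario."""
    return 1.0 / (1.0 + np.exp(-(features @ w + b)))

def blended_risk(exposures, features, w):
    """Probability-weighted mix of the scenario-specific coefficient sets."""
    p = crisis_probability(features, w)
    beta = (1.0 - p) * beta_calm + p * beta_crisis
    return p, exposures @ beta

w = np.array([1.5, 2.0])                          # weights on (volatility, spread) signals
exposures = np.array([1.0, 0.5])                  # a portfolio's exposures to the two factors

for name, features in [("calm", np.array([0.2, 0.1])), ("stress", np.array([1.5, 1.2]))]:
    p, risk = blended_risk(exposures, features, w)
    print(f"{name:6s}  P(crisis)={p:.2f}  blended risk loading={risk:.2f}")
```

The point is not the particular functional form but the coupling: the same system that estimates scenario probabilities also selects which coefficients apply, so its behavior shifts smoothly as a potentially catastrophic event unfolds.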

But why should a portfolio manager focus attention on risk models, instead of simply focusing on generating alpha? If a burgeoning financial crisis forces many asset managers to adjust their portfolios in the same way at the same time, a model that anticipates such a wave can help prepare a portfolio for it and thus better navigate its effects. This has value as a defensive tactic, to avoid painful liquidations under stress, but also as a source of alpha: a manager able to anticipate a liquidity crisis can both avoid the liquidations and position her portfolios to take advantage of the mispricings that will develop during and following the crisis.

Machine learning can reveal non-stationarities in risk model coefficients, and the time derivative of risk is alpha. Thus, the first adopters of machine learning-enhanced risk models may be portfolio managers rather than risk managers. Of course, this is all speculation on the part of the authors. We will only fully be able to verify our assertions when we can ask the survivors of the next financial crisis how they navigated troubled waters, and how they prepared in the doldrums.

About the authors: Bill McCoy is a Senior Vice President in the Analytics business unit at FactSet, a provider of financial solutions. In this role, he actively works in research, client support, and sales to help the firm enhance its position as a leading provider of comprehensive valuation and risk analytics for fixed income securities and the derivatives used to hedge them. Prior to FactSet, Bill worked for other fixed income software vendors as well as in fixed income portfolio management.

Henri Waelbroeck, Ph.D., is Vice President, Director of Research for Portfolio Management & Trading solutions at FactSet. Previously, he served as the Global Head of Research for Portware, a FactSet Company. Waelbroeck leads the firm’s Alpha Pro research, applying machine learning and artificial intelligence to optimize execution management. Prior to joining Portware, he was Director of Research for Aritas Group, Inc., co-founded Adaptive Technologies Inc., and served as Research Professor at the Institute for Nuclear Sciences at UNAM, Mexico.

Related Items:

How Four Financial Giants Crunch Big Data

Graph Databases Hit Wall Street

Financial Statements Now Audited by Big Data
