There is nothing normal about risk and uncertainty

Much of risk and all uncertainty are inaccessible to mathematical tools. Alternative analytical methods are needed. Standard models in financial economics – including risk models – are under great scrutiny. This is not just a problem for theoreticians. These models have important practical consequences for risk management. Their reassessment is essential if the practice of financial and risk management is to advance.

As Morningstar Advisor magazine pointed out, after 2008 “investors discovered once again that the odds of experiencing significant losses are greater than the common models…suggest.” The common models suffer from many inadequacies. Normal distributions cannot capture major downside risks – they lack “fat tails,” assigning vanishingly small probability to extreme moves. Some distributions are analytically inappropriate for modeling financial and risk events. Models are often too complicated for their own good. They rely on too many assumptions, many of which are demonstrably unrealistic. Other problems abound.
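To make the missing “fat tails” concrete, consider a minimal sketch (this analyst’s construction, not from the Morningstar piece) comparing the probability of large moves under a normal distribution and under a fat-tailed Student’s t with three degrees of freedom, using SciPy’s distribution functions:

```python
# A minimal sketch: how much probability each distribution assigns to
# large "k-sigma" moves. The Student's t with 3 degrees of freedom
# stands in for a generic fat-tailed alternative.
from scipy.stats import norm, t

for k in (3, 5, 10):
    p_normal = 2 * norm.sf(k)       # two-sided tail: P(|X| > k)
    p_fat = 2 * t.sf(k, df=3)       # the same tail under a fat-tailed t(3)
    print(f"|move| > {k}: normal {p_normal:.1e}, t(3) {p_fat:.1e}")
```

Under the normal, a 10-unit move carries odds of about one in 10^23 – effectively impossible – while the fat-tailed t keeps it at roughly one in 500. Markets behave far more like the second column than the first.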

Field experiments in finance

The results are crippling. The Great Recession and the Crash of 1987 were both wonderful natural experiments in stress testing standard models. Most models failed. The crises’ lessons went largely unlearned. Despite the drubbing reality dishes out, many financial theorists continue to believe their models are accurate depictions of how the world works.

Empirical evidence says this is not the case. As this analyst pointed out in an earlier post, major financial theories suffer from serious empirical problems. A recent paper argues that “most claimed research findings in financial economics are likely false.” The paper concludes: “The assumption that researchers follow the rules of classical statistics…is at odds with the notion of individual incentives which, ironically, is one of the fundamental premises in economics.”

Wanted: empirical methods

Wassily Leontief came to a similar conclusion in his 1970 Presidential Address to the American Economic Association. Leontief, a 1973 Nobel laureate, lamented a “continued preoccupation with imaginary, hypothetical, rather than with observable reality….Empirical analysis…gets a lower rating than formal mathematical reasoning.”

He described a serious flaw in economic research: “the validity of these statistical tools depends itself on the acceptance of certain convenient assumptions pertaining to stochastic properties…; assumptions that can be seldom verified.” Leontief observed, “in no other field of empirical inquiry has so massive and sophisticated a statistical machinery been used with such indifferent results.”

To the degree the theories informing risk decisions “are likely false,” their impact on strategic risk management is deleterious. The management of day-to-day risks – to say nothing of Black Swans, highly unlikely events with huge negative impacts – is rendered less effective.

Concession to reality

Happily, some practitioners are getting the message. No less a bastion of free market capitalism than Goldman Sachs has recognized theory’s limitations.

In a 2014 study on too-big-to-fail banks, Goldman Sachs analyzes risk and return using different models for “normal times” and for times of “systemic financial crisis.” This tweak is designed to incorporate fat tails into risk analysis. Modest as it is, it amounts to a concession to reality and a significant admission that standard financial models have limited use. Unable to shed conventional theory entirely, Goldman Sachs states, “In normal times, the probability distribution should be akin to a log normal distribution.” A previous post addressed the problems inherent in this approach.

In an effort to reconcile theory with reality, Goldman Sachs uses “in times of systemic financial distress, (a) probability distribution (that) should provide for more extreme results, such as a log-T distribution with ‘fat tails’.” The log-t replaces the lognormal’s normal kernel with a Student’s t; the Cauchy distribution – a heavy-tailed law Benoit Mandelbrot advocated for price changes decades ago – is the Student’s t with one degree of freedom. The Cauchy’s fat tails can be seen in the sketch below.
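For a rough numerical sense of the difference, the sketch below (this analyst’s construction with made-up drift and scale parameters, not the Goldman Sachs model) exponentiates normal, Student-t, and Cauchy draws and compares how often each regime produces a deep one-period loss. Note that the Cauchy cannot be variance-matched to the others – its variance is infinite – which is exactly the point.

```python
# A rough sketch (hypothetical parameters, not Goldman Sachs' model):
# exponentiate normal, Student-t, and Cauchy draws and compare how
# often each produces a one-period loss worse than 15%.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
mu, sigma = 0.01, 0.05                        # made-up drift and scale

kernels = {
    "lognormal":  rng.standard_normal(n),                       # "normal times"
    "log-t(3)":   rng.standard_t(df=3, size=n) / np.sqrt(3.0),  # variance-matched t
    "log-Cauchy": rng.standard_cauchy(n),                       # infinite variance
}

for label, z in kernels.items():
    sample = np.exp(mu + sigma * z)           # gross one-period return
    print(f"{label:<10}: P(loss > 15%) = {np.mean(sample < 0.85):.1e}")
```

With these illustrative numbers the lognormal puts a 15-percent loss at roughly a 1-in-3,500 event, the variance-matched log-t makes it more than an order of magnitude likelier, and the log-Cauchy makes it routine.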

Conventional theories assume risk and uncertainty are quantifiable and that returns are normal or lognormal. Unverifiable assumptions, unnecessary complexity, and a reliance on mathematical gymnastics are just part of the problem. Still, these theories form the core of the risk analysis methods that guide the daily decision-making of many corporations and organizations. In his book The (Mis)Behavior of Markets, Mandelbrot attributed the persistence of these inadequate theories to “habit and convenience. The math is, at bottom, easy and can be made to look impressive….It gives a comforting impression of precision and competence.”

In Mandelbrot’s view, analyses that attempt to confine risk – or variance, or volatility, however you look at it – within a finite-variance probability distribution are misleading and dangerous. Financial or economic theory based on the normal distribution, he wrote, “is a house built on sand.” To him, the normal distribution represents “mild randomness,” a condition Mandelbrot considered abnormal in financial markets.

Mandelbrot wrote in “A Case Against the Lognormal Distribution” that the lognormal represents “slow randomness,” a condition uncharacteristic of economics and finance. He called the lognormal distribution “a wolf in sheep’s skin” that is “dangerous to use,” and concluded that in economics “predictions drawn from the lognormal are too confused to be useful.” Goldman Sachs’ clients, take note.

Mathematical limits

Mandelbrot considered financial markets examples of “wild randomness.” He described markets characterized by turbulence – volatility that is itself volatile. The idea that risk compounds fits neatly with the idea that returns compound, the basis of the time value of money.
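To see what “volatility that is itself volatile” does to a return series, here is a toy sketch (this analyst’s construction, not a model of Mandelbrot’s): two series with the same overall variance, one with fixed volatility and one whose volatility switches randomly between calm and turbulent regimes.

```python
# A toy sketch: constant volatility vs. randomly switching volatility,
# calibrated to the same average variance. The switching series
# produces far more extreme days.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

constant = 0.02 * rng.standard_normal(n)       # fixed sigma of 2%
sigma = rng.choice([0.01, 0.0265], size=n)     # calm/turbulent mix, E[sigma^2] ~ 0.02^2
switching = sigma * rng.standard_normal(n)

for r, label in ((constant, "constant vol"), (switching, "volatile vol")):
    print(f"{label}: std = {r.std():.4f}, P(|r| > 8%) = {np.mean(np.abs(r) > 0.08):.1e}")
```

Both series have a standard deviation near two percent, yet the switching series delivers four-sigma days roughly twenty times as often. Turbulence manufactures fat tails even when the average level of risk is unchanged.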

But “wild randomness” has a disturbing implication that theory has been very reluctant to accept: variance itself is infinite. Seven years before Leontief’s address, Mandelbrot wrote, “to achieve a workable description of price changes…it is necessary to use random variables that have an infinite population variance.” The idea makes sense. Does anyone suppose uncertainty, a function of time, is finite? The assumption of limited variance is not empirically valid.
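Infinite variance is easy to demonstrate numerically. In the sketch below (this analyst’s construction), the sample variance of normal draws settles near its true value as the sample grows, while the sample variance of Cauchy draws – the textbook infinite-variance case – never converges:

```python
# A minimal sketch: sample variance under finite vs. infinite
# population variance. Normal draws converge; Cauchy draws do not.
import numpy as np

rng = np.random.default_rng(7)
for n in (10_000, 1_000_000, 10_000_000):
    normal_var = rng.standard_normal(n).var()
    cauchy_var = rng.standard_cauchy(n).var()   # dominated by a few huge draws
    print(f"n={n:>10,}: normal var {normal_var:.3f}, Cauchy var {cauchy_var:.3g}")
```

Rerun it with a different seed and the Cauchy column jumps by orders of magnitude while the normal column barely moves. No amount of data pins down a variance that does not exist – which is Mandelbrot’s point, and the reason finite-variance risk models understate what markets can do.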

Back to Leontief: “It is precisely the empirical validity of these assumptions on which the usefulness of the entire exercise depends. What is really needed…is (an) assessment and verification of these assumptions in terms of observed facts.” He stated flatly, “Here mathematics cannot help.”


About Author

Steven Slezak

Steven is on the faculty at Cal Poly in San Luis Obispo, California, where he teaches finance and strategy. He taught financial management and financial mathematics at the Johns Hopkins University MBA program. He holds a degree in Foreign Service from Georgetown University and an MBA in Finance from JHU.