Research

Is VaR to Blame for the Downturn?

Author and financial derivatives specialist Nassim Nicholas Taleb was recently quoted extensively in a New York Times article titled "Risk Mismanagement". He makes some valid points about the usefulness of risk metrics during extreme market behaviour. But while VaR certainly has its laundry list of problems, Taleb takes VaR out of context by focusing on only one version: Gaussian-based parametric VaR, which, he rightly points out, is severely constrained by the dangerous assumption that asset returns follow a normal bell-shaped distribution.

In fact, he even goes so far as to say that VaR was highly responsible for the current financial crisis. This is rather disturbing, as his claims seem to have gained wide currency, thus detracting from the infinitely more important issues behind the crisis. If we look back in history, we can see quite clearly that most "blowups" were not due to poor allocation decisions based on an over-reliance on risk measurement and optimisation models, but were about leverage, unchecked greed, operational disaster and outright fraud.

While VaR is a regulatory requirement for banks, most traders and fund managers would laugh if you asked them whether they took VaR very seriously. The reality, alarmingly, is that risk managers have hardly any clout when it comes to strong-arming a trader or questioning a position's liquidity.

Risk manager warnings are often ignored or overridden as senior management tends to focus purely on profitability, not risk. This is not a risk model problem, but a corporate governance problem. Instead of bashing risk managers, we should be giving them more independence, capabilities and authority to identify and limit excessive risk-taking.

Long-Term Capital Management (LTCM) was leveraged 100x at one point and Bear Stearns’ credit hedge funds over 40x. A simple cap on gross exposure would have helped to avoid the problems they encountered with leverage. Of course, this would have interfered with a strategy that depended heavily on leverage to boost minuscule returns.

In the 1990s, Nick Leeson at Barings, the Orange County debacle, and the crises in Mexico and Korea all had excessive leverage in common. Among VaR's real problems is its inability to fully capture leverage and liquidity risk. Good risk managers are fully aware of this shortcoming and, as a result, VaR is only one of a whole repertoire of tools, both quantitative and qualitative, that risk managers use to get a sense of the risks they are taking on.

Taleb gives the impression that risk managers are only managing risk according to Gaussian principles, where probabilities are assumed to be normally distributed. There is more to the story than he lets on. Interestingly enough, Taleb seems to be a big fan of Monte Carlo simulations (a method that does not need to assume normality in asset return distributions), as seen in his book "Fooled by Randomness".

Taleb suggests Monte Carlo simulators allow us to learn from a simulated future, which is superior to learning from the past because the past carries survivorship bias and because we tend to denigrate it, claiming that misfortunes suffered by others will not happen to us. Most sophisticated risk managers use Monte Carlo in very much the same way as he does.

To Taleb's credit, the managers at LTCM were proponents of parametric VaR, which severely underestimated risk and contributed to the fund's catastrophic collapse. But even in the case of LTCM, leverage was the real culprit, just as it is in today's financial crisis. VaR or no VaR, being leveraged 100x can only lead to eventual disaster once markets move against you and liquidity dries up. No fancy risk metric is needed to arrive at this conclusion, just common sense. LTCM's problem was that it relied too heavily on VaR and was lulled into a false sense of security about its risk by not addressing VaR's inadequacies through other means, such as stress testing.

Philippe Jorion, considered the foremost authority on VaR, says VaR is like having a wobbly compass in a dense forest. It can point you in the right direction, but it will never give you the exact coordinates of where you need to get to. Instead, you will still need to be acutely aware of your environment, being mindful that unforeseen pitfalls can occur along the way. VaR will not get you there alone. It needs to be coupled with other tools. In addition, you need to pay close attention to your instincts while, at the same time, incorporating a healthy dose of scepticism.

To highlight Taleb's primary gripe with VaR (specifically Gaussian parametric VaR), we conducted a back-test in which, for each day in December 2008, we calculated 1-day parametric and Monte Carlo VaRs at both the 95% and 99% confidence levels for a portfolio with an Asian long/short focus, trading primarily in cash equities, index futures and options (puts and calls). It is worth noting that a back-test should normally cover more than one month, usually a year or so. But even with a month's worth of data, we can clearly see some significant results.
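As a rough sketch of how such a back-test can be wired up (the 60-day estimation window, function names and portfolio value below are purely illustrative and are not the methodology behind table 1), a minimal Gaussian parametric back-test in Python might look like this:

    import numpy as np
    from scipy.stats import norm

    def parametric_var(returns, confidence, value):
        """Gaussian (variance-covariance) 1-day VaR, returned as a positive loss figure."""
        sigma = np.std(returns, ddof=1)
        return norm.ppf(confidence) * sigma * value

    def count_breaches(daily_returns, value, confidence=0.99, window=60):
        """Count the days whose static P&L loss exceeded the previous day's VaR forecast."""
        breaches = 0
        for t in range(window, len(daily_returns)):
            var_forecast = parametric_var(daily_returns[t - window:t], confidence, value)
            pnl = daily_returns[t] * value
            if -pnl > var_forecast:
                breaches += 1
        return breaches

The Monte Carlo leg of the back-test runs the same way, with the forecast in each loop iteration replaced by a simulation-based VaR.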

Fat-tail risk

As seen in table 1, the results of the back-test show that parametric VaR does indeed fall short in capturing fat-tail risk, given the number of breaches that occurred when comparing the VaR forecasts with the portfolio's daily static profit and loss. Taleb would use this as a stick to beat risk managers and software vendors with. But if we also look at Monte Carlo VaR, we will see that, while not perfect, it fares much better at capturing fat-tail risk, even in these highly volatile and turbulent times.

Table 1: Back-test showing Monte Carlo VaR vs Parametric VaR



Statistically, at the 95% confidence level, we would expect one breach in every 20 trading days. Using the parametric approach, there were four breaches in just under 21 days – this is absolutely unacceptable.

Furthermore, at the 99% confidence level, we should only see one breach in every 100 trading days. Already, in just 20 days, there was one breach using the parametric approach. In this instance, a risk manager should not trust the parametric VaR figures because the results clearly show that this method has failed the back-test miserably.
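To put a number on just how unlikely this is, assume breaches are independent events with a 5% daily probability (the assumption underlying the standard coverage test, not a result from the back-test itself). The probability of seeing four or more breaches in 21 trading days is then roughly 2%:

    from scipy.stats import binom

    n_days, breach_prob = 21, 0.05                      # 95% confidence implies a 5% breach rate
    expected_breaches = n_days * breach_prob             # roughly one breach expected in the sample
    p_four_or_more = binom.sf(3, n_days, breach_prob)    # P(X >= 4) is approximately 0.02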

It is highly likely that the reason there were so many breaches using the parametric approach is that it assumes historical daily returns tend to be normally distributed, ie following a bell-shaped pattern over time – this is the supposed eureka argument Taleb has been heralding to the media.

Empirically, it has been shown that parametric VaR is good at capturing and approximating the risks of assets that exhibit linear, delta-one payoff profiles such as cash equities, swaps and futures. Even in the best of times, under less volatile market conditions, parametric VaR tends to underestimate risk when the portfolio includes assets with non-linear or asymmetric payoffs such as options (puts/calls) and other contingent claims.

Indeed, the hypothetical portfolio used in table 1 includes options (primarily risky written calls), which exhibit asymmetric return distributions with fatter left tails. Parametric VaR, which assumes symmetric (and therefore thinner) tails, would necessarily underestimate risk in this case, resulting in a high number of breaches of the VaR forecast by the static profit and loss (P&L).

However, we can also see that Monte Carlo fared much better than its parametric counterpart. This is because, unlike the parametric approach, it does not assume normality in asset returns and can therefore capture the risks associated with the non-linear payoffs inherent in option strategies. As table 1 shows, there were no breaches at the 99% confidence level and only one breach at the 95% confidence level, well within statistical expectations.

Figure 1: Parametric Normal Distribution vs Asymmetric Distribution

In figure 1 above, we show graphs of both a parametric normal distribution and an asymmetric distribution. As we can see, the asymmetric distribution has a fatter left tail, which means there is a possibility of incurring larger losses.

This asymmetric distribution could well reflect the portfolio used in table 1, which contains written call options. A written call exposes the seller to unlimited losses from upward price movements, losses that can far exceed the premiums collected, and this explains the fatter left tail. Parametric VaR would therefore underestimate risk for this portfolio because it wrongly applies the normality assumption in the calculation.
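A toy example, unrelated to the actual portfolio behind table 1 and using arbitrary Black-Scholes parameters, illustrates the point: for a written call, a linear delta-normal approximation misses the curvature that makes large up-moves disproportionately painful, so its VaR comes out lower than a full-revaluation estimate.

    import numpy as np
    from scipy.stats import norm

    def bs_call(S, K, T, r, sigma):
        """Black-Scholes price and delta of a European call."""
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2), norm.cdf(d1)

    S0, K, T, r, vol = 100.0, 100.0, 0.25, 0.02, 0.40        # illustrative inputs only
    price0, delta = bs_call(S0, K, T, r, vol)

    rng = np.random.default_rng(0)
    dS = S0 * vol * np.sqrt(1 / 252) * rng.standard_normal(100_000)   # one-day spot moves

    pnl_linear = -delta * dS                                           # delta-normal view of the short call
    pnl_full = -(bs_call(S0 + dS, K, T - 1 / 252, r, vol)[0] - price0) # full repricing of the short call

    for label, pnl in (("delta-normal", pnl_linear), ("full revaluation", pnl_full)):
        print(f"{label:>16} 99% VaR: {-np.percentile(pnl, 1):.2f}")

The full-revaluation VaR is the higher of the two because the short call's negative gamma makes losses grow faster than the delta line suggests on large up-moves, which is precisely the asymmetry that parametric VaR glosses over.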

Monte Carlo, on the other hand, can handle asymmetric return distributions and is thus better equipped to estimate fat-tail risk. Yet even if Taleb conceded that he wrongly pigeon-holed his attack on VaR to the Gaussian parametric method, he would still raise another complaint: that VaR, accurate or not, tells you nothing about how much you could potentially lose beyond a given confidence level, say 99%.

Assume we calculated that the one day Monte Carlo VaR at the 99% confidence level was equal to US$100,000 (€79,000). In layman’s terms, this means that we can be 99% certain that we will not lose more than US$100,000 over the next trading day. According to Taleb, what it does not say is how much we would be susceptible to losing for that other 1% of uncertainty.

Taleb would be absolutely correct to make this criticism of VaR. For that 1%, we are indeed uncertain about the amount we could lose, ie we could lose US$150,000 or US$500,000 or some other unspecified amount; whatever the case may be, we have no way of knowing, and our typical VaR calculation tells us nothing about that critical 1% of the distribution. This is where the concept of Conditional VaR (CVaR), also known as expected shortfall, can help fill the gap.

Conditional Value at Risk

For CVaR, we again turn to Monte Carlo simulations which, at the risk of sounding technical, use geometric Brownian motion driven by a Wiener process, with an Ornstein-Uhlenbeck mean-reversion process for volatility. The risk factors in the random simulation are equities, interest rates, currencies and volatility of volatility.

Without getting into further specifics, assume that, after identifying and mapping the main risk factors and estimating the covariances among them from historical data, we generate 1,000 simulations. Each simulation represents one of 1,000 possible paths of gains and losses the portfolio could take over the next trading day. Once the simulation is finished, these outcomes can be arranged into a histogram, with the worst losses in the left tail and the best gains in the right tail. The cut-off of the worst 1% of outcomes is interpreted as the Monte Carlo VaR at the 99% confidence level and, with 1,000 simulations, corresponds to the 11th worst loss overall.
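A stripped-down sketch of that process, reduced to a single equity risk factor for brevity (the multi-factor, covariance-mapped version described above is considerably more involved, and all parameter values here are illustrative), might look like this:

    import numpy as np

    rng = np.random.default_rng(42)
    n_sims, n_steps = 1_000, 24                  # 1,000 one-day paths, each in hourly sub-steps
    dt = 1.0 / (252 * n_steps)

    spot, position = 100.0, 10_000               # illustrative spot price and share position
    vol0, kappa, theta, xi = 0.35, 3.0, 0.30, 0.60   # start vol, mean-reversion speed, long-run vol, vol-of-vol

    pnl = np.empty(n_sims)
    for i in range(n_sims):
        S, vol = spot, vol0
        for _ in range(n_steps):
            # Ornstein-Uhlenbeck mean reversion for the volatility parameter
            vol += kappa * (theta - vol) * dt + xi * np.sqrt(dt) * rng.standard_normal()
            vol = max(vol, 1e-6)
            # geometric Brownian motion step for the underlying
            S *= np.exp(-0.5 * vol**2 * dt + vol * np.sqrt(dt) * rng.standard_normal())
        pnl[i] = (S - spot) * position           # one-day static P&L of the position

    ranked = np.sort(pnl)                        # worst (most negative) outcomes first
    var_99 = -ranked[10]                         # 11th worst loss of 1,000 = 99% Monte Carlo VaR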

Continuing with the example above, recall that we assumed a Monte Carlo simulation generated a VaR at the 99% confidence level of US$100,000. Because we have a finite set of simulations, ie 1,000 simulations, there are still 10 other outcomes in the overall histogram distribution that resulted in a loss worse than the US$100,000 reported at the 99% confidence level. Assume that these losses in the left tail of the distribution are shown in table 2.

Table 2: Measuring CVaR

Given the above values, to calculate CVaR we simply take the average of the 10 losses that exceeded the 99% VaR during the Monte Carlo simulation, ie the average of histogram observations 991 to 1,000. This gives a CVaR of US$121,400. Ultimately, CVaR provides a statistically more robust number that tells us the expected loss in the part of the left tail that lies beyond the chosen VaR confidence level.
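Continuing the single-factor sketch above (again, purely for illustration), CVaR is simply the mean of the tail beyond the VaR cut-off:

    def cvar(pnl, confidence=0.99):
        """Expected shortfall: the average loss across the worst (1 - confidence) share of outcomes."""
        n_tail = int(round(len(pnl) * (1 - confidence)))   # 10 scenarios when there are 1,000 simulations
        worst = np.sort(pnl)[:n_tail]
        return -worst.mean()

    cvar_99 = cvar(pnl)   # the average of the 10 worst simulated losses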

While Monte Carlo VaR combined with its corollary CVaR may provide greater insight into the riskiness of a portfolio, certainly more than parametric VaR, Monte Carlo is no panacea and has its own caveats:

  • Computational complexity increases with the number of risk factors and trials.
  • Random simulations are not necessarily predictive of the future. Importantly, adding more trials does not necessarily improve the simulation.
  • Monte Carlo is built on the robustness of the embedded covariance matrix (ie the matrix of covariances between risk factors). If the matrix no longer applies in the future and/or breaks down under stressed conditions (this can be back-tested), the simulation is flawed.
  • The volatility parameter is usually kept constant over the simulation horizon. While this may be less consequential for a one-day VaR forecast, it becomes more so for longer forecast horizons, say 10 days. In the markets, volatility does not usually stay constant but fluctuates, sometimes substantially, over time. There are ongoing attempts to address this difficult issue by modelling stochastic volatility using autoregressive methods such as GARCH (a minimal sketch follows this list).
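As a flavour of what such an approach involves, the sketch below projects daily volatility forward under a GARCH(1,1) model. The parameters omega, alpha and beta are illustrative and would in practice be estimated, for example by maximum likelihood, from historical returns.

    import numpy as np

    def garch_vol_forecast(last_return, last_var, horizon, omega, alpha, beta):
        """Multi-step daily volatility forecast under a GARCH(1,1) model."""
        var_t = omega + alpha * last_return**2 + beta * last_var   # one-step-ahead variance
        path = [var_t]
        for _ in range(horizon - 1):
            # beyond one step, the expected squared innovation equals the forecast variance
            var_t = omega + (alpha + beta) * var_t
            path.append(var_t)
        return np.sqrt(np.array(path))   # daily volatility forecasts for days 1 to horizon

    # illustrative, persistent-volatility parameters (alpha + beta close to 1)
    vols = garch_vol_forecast(last_return=-0.03, last_var=2.5e-4,
                              horizon=10, omega=2e-6, alpha=0.08, beta=0.90)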

To address the above weaknesses, many risk managers implement historical scenario analysis to gauge how the current portfolio would have performed through a very extreme market event. There is some truth to the saying that history repeats itself, and this is the guiding principle when using historical analysis.

Of course, there will always be new market events, and the caveat here is that the past is not always predictive.

Compounding the problem, even if a severely adverse market event occurs and shows striking similarities to a previous one, we still have no way of knowing beforehand when it is going to happen.

Although we may never know when the next big crisis will present itself, the risk manager and fund manager can still take steps to try to protect the portfolio. They are required to think seriously about unforeseen catastrophic events, no matter how remote, and about how such events could affect the portfolio if they actually occur. This is where sensitivity, or what-if, projection analysis can help.

Using what-if analysis, the risk manager can create user-defined stress tests in which he would, in effect, shock various market risk factors in stepped increments in order to unearth possible risks that were previously unknown or hidden. For instance, he may want to see the impact on his portfolio of shocking a particular yield curve in a non-parallel fashion, while concurrently creating spikes in volatility and shifting both currencies and credit spreads along a continuum of percentage intervals.
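The sketch below shows the mechanical side of such a grid of shocks. It uses crude first-order sensitivities purely for illustration (all figures are hypothetical); a real implementation would fully reprice the book under each scenario rather than scale sensitivities.

    import itertools

    # hypothetical sensitivities: $ P&L per 1% equity move, per vol point, per 1% FX move,
    # and per basis point at the short and long ends of the yield curve
    sens = {"equity": 50_000, "vega": -30_000, "fx": 20_000, "rates_short": -1_500, "rates_long": 2_500}

    equity_shocks = [-20, -10, 0, 10]              # % index moves
    vol_shocks    = [0, 10, 25]                    # vol-point spikes
    fx_shocks     = [-5, 0, 5]                     # % currency moves
    curve_twists  = [(50, -25), (-25, 50)]         # (short-end bp, long-end bp): non-parallel shifts

    results = {}
    for eq, vol, fx, (short_bp, long_bp) in itertools.product(equity_shocks, vol_shocks, fx_shocks, curve_twists):
        pnl = (sens["equity"] * eq + sens["vega"] * vol + sens["fx"] * fx
               + sens["rates_short"] * short_bp + sens["rates_long"] * long_bp)
        results[(eq, vol, fx, short_bp, long_bp)] = pnl

    worst_scenario = min(results, key=results.get)
    print("worst-case scenario:", worst_scenario, "estimated P&L:", results[worst_scenario])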

The end result will hopefully display possible P&L outcomes under hypothetical stressed situations. Coming up with meaningful what-if stress tests is perhaps one of the most difficult aspects of preventive risk management, as their effectiveness only becomes apparent after the fact.

The reader should also know that risk managers are looking at Extreme Value Theory (EVT), which specifically focuses on the fat tails of distributions (the area Taleb is most concerned about) in order to gain a better understanding of risk. In conclusion, while financial risk metrics such as VaR, used responsibly, can help one understand the risks facing a portfolio, they should not be considered a crystal ball.

Furthermore, such metrics are of no use without a good dose of qualitative scepticism, transparent and stringently adhered-to compliance measures (eg stop-loss, exposure and leverage limits, liquidity limits) and a fundamentally sound corporate governance framework that allows the risk manager to effectively monitor, report and, if necessary, limit the fund manager's risk-taking.


Angus Hung and Michael Langton are directors of Hong Kong-based risk management consultants QRMO Ltd.