
Risk Intelligence: Generating Long-Term Profit within the Catastrophe (Re)insurance Market

In considering the future economic potential of the catastrophe reinsurance market, I will examine current issues including rate adequacy, uninsurability, indirect losses, and the use of vendor models. I will discuss how new ideas, regulation, and a new breed of scientists are changing the industry, and conclude that the market will only be profitable in the long term for a select group of leading reinsurers.

One of our investors recently asked whether the catastrophe reinsurance market will be profitable in the long term. The question hinted at tail catastrophe risk, the accuracy of catastrophe models, and concerns about the ability of large events to wipe out profits for years. The short answer is yes, the market will be profitable, because: (1) the insurance market depends on catastrophe (re)insurance; (2) no other line requires similar levels of capital to stay afloat; and (3) large losses trigger capital shortages, which are followed by premium and price increases and possibly by capital influx. For these reasons, catastrophe (re)insurance in general is unlikely to follow the fate of other (re)insurance lines which have remained in deficit for years. But there is much more to this story...

In 1992, Hurricane Andrew caused less than US$25 billion in insured losses (in 2012 dollars, Insurance Information Institute) but resulted in the widespread failure of many regional insurance companies. While the catastrophe ILS market did not exist in 1992, it is likely that Andrew would have caused extensive cat bond losses had cat bonds been priced at the risk levels perceived in 1992. By contrast, in 2012, Windstorm Sandy caused US$20 to US$25 billion of insured losses, which were largely absorbed within the earnings of most (re)insurers. Total catastrophe losses of US$65 billion in 2012, however, felt 'soft' after over US$110 billion of catastrophe losses in 2011. For several catastrophe reinsurers, 2012 was an 'average' and hence highly profitable year in their enterprise risk models. 'Average annual' losses depend on many factors, of which catastrophe losses are only one, albeit an important one.

Our forecast of global expected average annual catastrophe losses is US$50 to US$60 billion. Within this figure, expected U.S. windstorm average annual losses account for 30 to 40% of the overall catastrophe losses, with Florida hurricane risk expected to make up more than half of the U.S. windstorm risk. EU all perils accounts for 15 to 20% of the expected global losses, with the rest of the risk distributed across U.S. earthquakes, global floods, windstorms, brushfires, hailstorms, and Japanese earthquakes, among others.
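As a rough, purely illustrative cross-check of how these shares combine, the short Python sketch below allocates the midpoints of the stated ranges across perils; the midpoint choices are assumptions for illustration, not figures from the article.

```python
# Illustrative allocation of the expected average annual loss (AAL)
# forecast across perils. Figures are midpoints of the stated ranges
# and are assumptions for illustration only (US$ billions).

global_aal = 55.0                    # midpoint of the US$50-60bn global forecast

us_wind_aal = global_aal * 0.35      # midpoint of the 30-40% U.S. windstorm share
florida_aal = us_wind_aal * 0.5      # "more than half" of U.S. windstorm risk
eu_aal = global_aal * 0.175          # midpoint of the 15-20% EU all-perils share
other_aal = global_aal - us_wind_aal - eu_aal  # earthquakes, floods, hail, etc.

print(f"U.S. windstorm AAL : ~US${us_wind_aal:.1f}bn")
print(f"  of which Florida : >US${florida_aal:.1f}bn")
print(f"EU all perils AAL  : ~US${eu_aal:.1f}bn")
print(f"All other perils   : ~US${other_aal:.1f}bn")
```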

This information leads to various questions: Is US$65 billion becoming the new 'mean', replacing average annual catastrophe losses of US$40 to US$45 billion in previous years and below US$10 billion in the years before Andrew? Was the insurance market wrong in its assumptions regarding risk before Andrew, and/or did the risk increase by a factor of five to six since 1992, far beyond exposure inflation and growth? Is this difference included in our models? Were we not again surprised by extensive losses in 2001, 2004, 2005, and 2011?

Rate adequacy is a key industry issue and depends largely on territory and peril. Among the worst recent historical losses are the 2011 Thai floods, which reached 10% of Thailand's GDP, with insured losses almost 15 times the country's annual P&C premium. Although these numbers might be misleading, as some of the US$15 to US$20 billion of losses were absorbed by the global market, the floods wiped out Thai P&C premium for years to come. The next highest historical catastrophe loss/premium ratios were recorded after the 1997 Poland floods, where losses reached 2 to 3 times the P&C premium in Poland. In terms of worldwide premium, global reinsurance catastrophe premium is estimated near US$20 billion, and insurance catastrophe premium, although not easy to estimate, is projected at 5 to 7% of global P&C premium (i.e. US$80 to US$100 billion worldwide). Given historical losses, are rates adequate for both expected loss and unexpected loss? How sure are we?
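To make these multiples concrete, here is a hedged back-of-envelope sketch; the Thai premium base is an assumption implied by the '~15x' figure rather than a reported number.

```python
# Back-of-envelope check of the loss-to-premium multiples cited above.
# Premium bases are rough assumptions implied by the article's figures.

def loss_to_premium(insured_loss_bn: float, annual_premium_bn: float) -> float:
    """Years of annual premium consumed by one event's insured loss."""
    return insured_loss_bn / annual_premium_bn

# 2011 Thai floods: ~US$17.5bn insured (midpoint of 15-20) against a local
# P&C premium base assumed near US$1.2bn, consistent with the ~15x figure.
print(f"Thai floods: ~{loss_to_premium(17.5, 1.2):.0f}x local P&C premium")

# Globally: US$50-60bn of expected annual loss against US$80-100bn of
# catastrophe premium implies a ~55-65% expected loss ratio before
# expenses, cost of capital, and any load for unexpected (tail) loss.
print(f"Implied global expected loss ratio: ~{55.0 / 90.0:.0%}")
```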

In terms of perils, flood has become a prime focus, triggered by a recent concentration of flood events. Early estimates of the 2011 Tohoku, Japan earthquake property loss overstated the tsunami component, driven by high fatality estimates in flood-prone areas (final shake/flood split: 80/20). In 2012, Windstorm Sandy merged with an extra-tropical system, creating a supersized, fast-moving storm which caused above-expected storm surge losses. In Australia, flood losses have become more prevalent during the last several years as insurers have started to include flood in their insurance policies. However, the global focus on flood might well change with the next large non-flood loss event.

Uninsurability is also an issue. The Japanese insurance market considers earthquake risk too large and 'uninsurable', and therefore restricts commercial cover largely to small first-loss limits (US$100 billion insured vs. trillions of potential economic damage). Losses such as those from contingent business interruption policies have long been treated as incalculable and unmanageable, and therefore uninsurable. Indirect losses such as supply chain losses have also played into large loss experience: they can make up 50% of the economic damage, yet had historically been negligible in insured losses. Indirect losses played a role in the 2011 Japanese earthquake losses and made the Thai floods the largest insured flood loss in global history. Indirect losses have thus found their way into the global insurance industry.

As a result of the above-mentioned factors, model vendors reacted by changing their models to: (1) match perceived increases in risk; and (2) cover an increasing amount of high-resolution exposure information. This created a market that followed apparent loss trends, as well as the notion that higher-resolution, more-peril, all-inclusive modeling is better. Bottom-up, model-driven risk taking was deemed more capable than top-down strategic portfolio measures. Models were criticised for being increasingly misleading because underwriters and risk managers had misused them as all-inclusive risk tools rather than as risk proxies for specific perils.

Providing shared platforms for trading complex and structured catastrophe risk has been one of the major advantages of vendor risk models, but recently, new ideas, changing regulation, and a new breed of scientists have been entering the market. The motto is no longer 'more is better' but rather that understanding risk is core, and that risk measures should be kept in-house as far as realistically possible. These scientists and risk managers are questioning model skill and stress testing model assumptions. They follow Occam's Razor by shaving off variables and eliminating higher resolution that adds complexity rather than accuracy: every variable adds uncertainty, and only those that add significant insight decrease rather than increase model risk. Indicator models and indirect loss models are entering the market along with a suite of new tools, model and platform providers. Model vendors are following the new trend and opening their platforms to more versatile risk users.
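The intuition behind this razor can be shown on synthetic data. The sketch below is not any vendor's model, just an ordinary least-squares toy in which every added no-signal variable degrades out-of-sample accuracy; all numbers are illustrative only.

```python
# A minimal sketch of the Occam's Razor point, on synthetic data: adding
# explanatory variables that carry no real signal tends to worsen
# out-of-sample prediction.
import numpy as np

rng = np.random.default_rng(7)
n_train, n_test = 40, 4000

def out_of_sample_rmse(n_features: int) -> float:
    """OLS with one real predictor plus (n_features - 1) pure-noise ones."""
    X = rng.normal(size=(n_train + n_test, n_features))
    y = 2.0 * X[:, 0] + rng.normal(size=n_train + n_test)  # only column 0 matters
    beta, *_ = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)
    resid = y[n_train:] - X[n_train:] @ beta
    return float(np.sqrt(np.mean(resid ** 2)))

for k in (1, 5, 20, 35):
    print(f"{k:2d} variables -> out-of-sample RMSE {out_of_sample_rmse(k):.2f}")
# RMSE rises with the noise variables: complexity without insight adds model risk.
```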

Forecasting algorithms are entering the market. Hurricane activity has been found to cluster in multi-year 'regimes', with higher energy release levels in the late 1990s and early 2000s alternating with lower activity rates in the 1970s and 1980s and higher rates in the 1950s and 1960s. This contradicts the earlier belief that hurricane hazard follows a steady upward trend; in fact, hurricane activity has been flat, if not slightly down, over the last 100 years. Hurricane Andrew in 1992 might have been an early harbinger of the more active years after the mid-1990s, or an extreme outlier in an otherwise inactive hurricane period ending in the mid-1990s. Longer-term activity of F2 and larger tornadoes has decreased over the last several decades, although more recent years have shown increased shorter-term activity and tornado losses. Tornado/hail risk seems to be clustered within certain seasons, and flood events or droughts seem to be due to atmospheric blocking persisting over weeks to months. Kagan and Jackson have found evidence that earthquakes cluster in time, at least regionally, rather than being periodic or random. Intra-year clustering is obvious for European winter storms, with a small number of active seasons comprising the majority of the longer-term risk. Although heavily debated among scientists, most recent apparent changes in weather activity (temperature excepted) can be explained by variability rather than trends, meaning that constant upward trends in hazard are not proven for most perils worldwide.
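To illustrate what clustering in multi-year regimes does to loss statistics, the following toy simulation compares a plain Poisson event count with a regime-switching one of the same long-term mean; the rates and regime persistence are invented for illustration, not fitted to any peril.

```python
# A hedged illustration of multi-year 'regimes': annual event counts drawn
# from a regime-switching Poisson process show far more year-to-year
# variability than a plain Poisson with the same long-term mean.
import numpy as np

rng = np.random.default_rng(42)
n_years = 10_000

# Plain Poisson: constant long-term rate of 3 events per year.
plain = rng.poisson(lam=3.0, size=n_years)

# Regime-switching: 'active' regimes (rate 4.5) and 'quiet' ones (rate 1.5),
# each persisting ~a decade on average; the long-term mean is still 3.
regime_active = True
rates = np.empty(n_years)
for year in range(n_years):
    if rng.random() < 0.1:           # ~10%/year chance the regime flips
        regime_active = not regime_active
    rates[year] = 4.5 if regime_active else 1.5
clustered = rng.poisson(lam=rates)

for name, counts in (("plain Poisson", plain), ("regime-switching", clustered)):
    print(f"{name:17s} mean {counts.mean():.2f}  variance {counts.var():.2f}")
# The clustered series is over-dispersed: its variance well exceeds its mean,
# which is what runs of active and quiet years look like in loss statistics.
```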

Hurricane risk indeed increased after Andrew, but higher hazard levels might not last forever. Is hurricane risk on a downward trend? What drives the recent record low in landfalling Florida events? These questions are heavily debated amongst scientists. With expected losses changing several-fold from one risk regime to the next, the rewards for superior forecasting skill could be staggering. The likelihood of clash and risk correlations has also been increasing, among other reasons because insurance penetration, along with foreign direct investment, is growing in emerging markets. This results in a higher number of visible insured events; the Thai floods are an example of this trend. Year-to-year variability is, however, deemed high, largely driven by the above-mentioned changing regimes and hazard activity across basins and perils. This might well result in significantly below- and above-average loss years in the near future.

How much of the variability mentioned above can be learned, and how much is random? The answer seems to be that anywhere from 30 to 90% can be learned, depending on peril and basin. What we know for certain is that the long-term average might not be relevant for the coming year(s), and models based on the long-term average alone offer minimal accuracy. Bringing in resources to address increased variability, regimes, short-term forecasting and clash is a new task for Chief Risk Officers, risk managers and underwriters. We dare say that catastrophe risk taking will be profitable long-term for those leading (re)insurers who endorse new science and forecasting skill, whilst managing portfolios with versatile top-down portfolio management strategies, considering both mainstream and non-mainstream risk measures and catastrophe products.
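As one minimal way to frame the 'learnable share', the sketch below blends a long-term average with a hypothetical regime forecast via a credibility weight standing in for the 30 to 90% figure; all loss numbers are assumptions, not forecasts from the article.

```python
# A minimal sketch of blending a long-term average with a short-term
# regime forecast via a credibility weight. The weight stands in for the
# '30-90% learnable' share; all numbers are hypothetical.

def blended_expected_loss(long_term_avg: float,
                          regime_forecast: float,
                          credibility: float) -> float:
    """Credibility-weighted expected annual loss (US$ billions)."""
    return credibility * regime_forecast + (1.0 - credibility) * long_term_avg

long_term = 55.0      # US$bn, midpoint of the long-term global AAL forecast
active_regime = 75.0  # hypothetical forecast for an active multi-year regime

for z in (0.3, 0.6, 0.9):  # low / mid / high learnable share
    print(f"credibility {z:.0%}: expected loss ~US$"
          f"{blended_expected_loss(long_term, active_regime, z):.0f}bn")
```

The more of the variability a (re)insurer can genuinely learn, the further its working view of risk should move away from the long-term average toward the regime forecast.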


This article first appeared in 'Insurance-Linked Securities For Institutional Investors 2013', published by Clear Path Analysis. For more information, please visit www.clearpathanalysis.com