Robert Muir-Wood: Saving lives and money – the second wave of catastrophe modelling


How many people die each year from earthquakes in Australia or New Zealand? How many die each year, on average, in the USA, in India or in China?

The answer? We don’t know. What we do know is that historical experience is an imperfect guide, for two reasons. First, the population of disaster casualties is very ‘fat tailed’. For the century before 2010, no one had died from an earthquake in Haiti. Then, on the afternoon of January 12th 2010, an estimated quarter of a million people died. A fat-tailed distribution has the characteristic that the mean, or long-term average, will be many times higher than the median – the number that half the samples fall above and half below. In Japan, in five decades out of the last century, the number killed in earthquakes was less than 100. Yet the mean number killed in earthquakes each decade over the past 100 years has been more than 18,000. Those who work on natural disasters often call for the collection of ‘better data’ on deaths and injuries. Perhaps this is principally frustration at the fundamental challenge of working with an extremely fat-tailed distribution – a distribution so poorly sampled that we don’t even know its actual shape. In New Zealand the average number who died in earthquakes through the 20th century was less than three per year. Then, in 2011, 185 people were killed in Christchurch.
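To see how badly a short record samples such a distribution, here is a minimal sketch in Python. It is illustrative only: the Pareto tail index, the scale factor and the sample lengths are arbitrary choices, not fitted to real casualty data.

# Illustrative only: how a fat-tailed distribution defeats short records.
import numpy as np

rng = np.random.default_rng(42)

# Heavy-tailed "annual deaths" drawn from a Lomax/Pareto-II distribution
# with a deliberately fat tail (shape 1.1); the x100 scale is arbitrary.
population = rng.pareto(1.1, size=1_000_000) * 100

print(f"median of full population: {np.median(population):,.0f}")
print(f"mean of full population:   {population.mean():,.0f}")

# What a 50-year, 100-year or 1000-year record would suggest the mean is
for years in (50, 100, 1000):
    sample = rng.choice(population, size=years, replace=False)
    print(f"{years:>5}-year sample mean:     {sample.mean():,.0f}")

Run it a few times with different seeds and the short-record means jump around wildly, while the median barely moves – exactly the problem facing anyone trying to infer earthquake mortality from a century of observations.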

Normally, when confronted with a situation where we need lots of data to understand the shape of the underlying distribution, we would simply collect more observations. Here we would likely need thousands of years of data before we could be confident we had mapped the distribution. Not only is that length of historical record unavailable to us, but – and this is the second reason – the system itself continues to change through time. The population has been rising and becoming increasingly urbanized. The buildings have been changing, along with their inherent susceptibility to collapse in earthquakes. Even fifty years of data is therefore likely to be biased. Suppose the building stock changes at 4% a year: 2% new buildings on vacant sites and 2% from demolishing and rebuilding older buildings. How much of the current building stock overlaps with the buildings that existed 50 years ago? If we had started with 100 buildings in 1964, by 2013 we would have 264 buildings, of which only 37 – between 1 in 7 and 1 in 8 – remain from 1964. So whether our buildings have become safer or more dangerous over this period, the behavior of the stock will have changed significantly.
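As a quick check on that arithmetic, here is a short Python sketch. It simply compounds the article’s own figures – 2% growth and 2% demolition-and-rebuild per year over 1964–2013 – and is not a model of any real building stock.

# Compound the worked example: 2% growth + 2% demolish-and-rebuild per year.
stock, originals = 100.0, 100.0
for year in range(1964, 2013):
    stock *= 1.02        # total stock grows by 2% a year
    originals *= 0.98    # 2% of the surviving 1964 buildings are replaced each year

print(round(stock))              # ~264 buildings by 2013
print(round(originals))          # ~37 of the original 1964 buildings remain
print(round(stock / originals))  # roughly 1 in 7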

The insurance industry faces an identical problem in pricing catastrophe insurance. Catastrophe losses are rare and form part of a fat-tailed distribution. You cannot find the fair price for catastrophe insurance simply by waiting to see how the losses turn up: first because you would have to wait thousands of years to have any hope of converging on a mean, and second because the building stock keeps changing through time. It is the same conundrum as measuring disaster casualties.

The only solution is to build a computer model and simulate the full population of all possible catastrophe events, along with their respective probabilities. For each event we generate the damage and the insurance loss. Across all the events we can then calculate the average annualized loss – the amount we would need to set aside each year to pay for all our future losses. This is the magic number insurers need to set a fair insurance rate. The idea of developing a catastrophe model for insurers emerged about twenty-five years ago, and it quickly proved so useful that it spawned large private catastrophe modeling companies to build and maintain the models. Now these same catastrophe models are being employed to measure disaster casualties. The model can finally give us an answer to how many people can be expected to be killed each year in earthquakes. Casualty catastrophe models can also be used to set targets for how much disaster casualties should be reduced over five or ten years, and then to identify the specific actions that will achieve the greatest reduction in loss of life. Which are the most dangerous schools, for example? We cannot, however, expect actual data on fatalities to tell us how much progress we are making in saving lives in earthquakes, because the catastrophes are too rare and too erratic. We can only measure how well we are doing with the model.
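To make the mechanics concrete, here is a toy Python sketch of how a stochastic event set yields an average annualized loss. The event names, annual rates and loss figures are invented for illustration; this is not RMS’s model, just the basic sum of rate times expected loss across events.

# Toy sketch: average annualized loss (AAL) from a simulated event set.
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    annual_rate: float   # expected occurrences per year
    mean_loss: float     # expected insured loss if the event occurs (USD)

# Hypothetical event set with invented numbers
event_set = [
    Event("M6.5 near city A", annual_rate=0.01,  mean_loss=2_000_000_000),
    Event("M7.2 offshore",    annual_rate=0.002, mean_loss=15_000_000_000),
    Event("M5.8 shallow",     annual_rate=0.05,  mean_loss=300_000_000),
]

# AAL = sum over events of (annual rate x expected loss given the event)
aal = sum(e.annual_rate * e.mean_loss for e in event_set)
print(f"Average annualized loss: ${aal:,.0f}")

Swap the expected insured loss for an expected casualty count per event and the same calculation gives the expected number of deaths per year – the quantity the opening questions ask for.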

Robert Muir-Wood, Risk Management Solutions (RMS).
