Risk Modelling to Risk Management

May 31, 2013 | Last updated on October 1, 2024
Karen Clark, President and CEO, Karen Clark & Company

It has been a little over 20 years since Hurricane Andrew, with its US$15.5 billion in losses, ushered in the industry-wide adoption of catastrophe models, forever changing the landscape of the property and casualty industry.

For two decades, catastrophe models have been the primary tools used by insurers and reinsurers to assess and manage catastrophe risk. Catastrophe modelling has become synonymous with catastrophe risk management, but companies require a lot more information than probable maximum losses (PMLs) and exceedance probability curves to effectively manage risk.
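These terms can be made concrete with a minimal sketch. Assuming nothing more than a set of simulated annual losses (the numbers below are purely hypothetical and not from any vendor model), a PML is simply the loss read off the exceedance probability curve at a chosen return period.

```python
import numpy as np

# Hypothetical simulated annual catastrophe losses (in $ millions).
rng = np.random.default_rng(seed=1)
annual_losses = rng.lognormal(mean=3.0, sigma=1.2, size=10_000)

def pml(losses, return_period):
    """Read the probable maximum loss off the exceedance probability curve.

    The loss exceeded with annual probability 1 / return_period is the
    (1 - 1 / return_period) quantile of the simulated annual losses.
    """
    exceedance_prob = 1.0 / return_period
    return np.quantile(losses, 1.0 - exceedance_prob)

print(f"100-year PML: ${pml(annual_losses, 100):,.1f}M")
print(f"250-year PML: ${pml(annual_losses, 250):,.1f}M")
```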

Vendor models, however, are black boxes, and that opacity is a serious limitation: it deprives users of insight into the true drivers of their loss estimates.

Model loss estimates are volatile because many model assumptions are based on little or no data. As more complexity is added to existing models, the likelihood of major mistakes and human error increases significantly. And the one-size-fits-all approach precludes customization for specific books of business.

Despite awareness of these shortcomings, and in the absence of alternatives, the industry has been relying too heavily on the vendor models. Recent events and model updates have not provided encouragement that accuracy is improving, and the uncertainty around the loss estimates is much wider than many previously understood.

Because catastrophe losses now dominate many of the property lines and are continuing to grow, the market is looking for additional tools. Insurance and reinsurance executives and board members need more insight into companies’ global exposures and risk concentrations. Rating agencies and regulators have growing expectations with respect to how well companies understand and can explain their catastrophe loss potential.

A NEW APPROACH

The model-only approach is not enough for effective risk management. While the models produce a lot of numbers, they do not necessarily provide insight into the risk. What is needed is a more advanced level of catastrophe risk understanding, as well as more intuitive and credible information for important risk management decisions.

Insurers and reinsurers need to be able to build their own proprietary views of risk. In the past, this has been done by licensing one or multiple models and then making adjustments to the model output based on underwriter judgment and other expertise.

But model-blending is an inefficient and ineffective way to build a proprietary view of risk. Because the models are black boxes, companies spend enormous amounts of time and resources trying to infer what is going on inside the models by looking at what comes out. Making blended frequency and severity assumptions is not feasible using existing models, so companies typically resort to simplistic weighting formulas.
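As an illustration of what such weighting typically amounts to, the sketch below blends the probable maximum loss estimates of two hypothetical models with fixed, judgment-based weights; the figures and weights are assumptions, not any vendor's actual output.

```python
# Hypothetical 100-year PML estimates (in $ millions) from two licensed models.
model_pmls = {"model_a": 820.0, "model_b": 1150.0}

# Judgment-based credibility weights; because the models are black boxes,
# blending often happens on the output rather than on the frequency and
# severity assumptions inside the models.
weights = {"model_a": 0.6, "model_b": 0.4}

blended_pml = sum(model_pmls[m] * weights[m] for m in model_pmls)
print(f"Blended 100-year PML: ${blended_pml:,.1f}M")
```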

Newer tools and risk management platforms are better suited to building a proprietary view of risk. Open platforms provide flexibility and enable the major assumptions to be properly peer-reviewed and customized. In addition, these new approaches offer transparent, consistent risk metrics for measuring and managing risk over time.

Consistency

Volatility in model-generated loss estimates can be highly disruptive to business strategies. Many companies have developed sophisticated marginal pricing and portfolio management strategies that are highly dependent on the model output.

Much of the model volatility is caused by changing assumptions that, in most peril regions, must be made with little or no scientific data. For example, only seven hurricanes have impacted the Northeastern United States since 1900, and the last major storm was the Great New England Hurricane of 1938. Because of this paucity of data, scientists cannot pinpoint the probability of a major storm in this region with any degree of accuracy. For other perils, such as earthquake, there is typically even less data.

Since definitive answers will never be available, a consistent set of scenarios representing the likely probabilities of events of different magnitudes in each peril region can be developed instead. These events can then be floated over portfolios of exposures to estimate the resulting losses. This is the flip side of the model output: the probabilities are based on the hazard rather than the loss.
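A minimal sketch of floating a fixed scenario over a portfolio is shown below; the footprint intensities, damage function and exposure values are all hypothetical, chosen only to illustrate the mechanics.

```python
# Hypothetical scenario footprint: hazard intensity (e.g., peak gust in km/h)
# at each exposure location.
scenario_footprint = {"loc_1": 180.0, "loc_2": 140.0, "loc_3": 95.0}

# Hypothetical portfolio: insured value (in $ millions) at each location.
portfolio = {"loc_1": 50.0, "loc_2": 120.0, "loc_3": 300.0}

def damage_ratio(intensity):
    """Illustrative damage function mapping hazard intensity to a damage ratio."""
    if intensity < 100.0:
        return 0.01
    if intensity < 160.0:
        return 0.05
    return 0.20

# Float the scenario over the portfolio: damage ratio times exposed value,
# summed across locations.
scenario_loss = sum(
    damage_ratio(scenario_footprint[loc]) * value for loc, value in portfolio.items()
)
print(f"Scenario loss: ${scenario_loss:,.1f}M")
```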

There are several advantages to this new approach. While providing probability information, it also clearly identifies exposure concentrations and offers intuitive information for decision-makers, including boards and CEOs. The scenarios stay the same from year to year, providing consistent risk metrics for measuring and monitoring risk over time. A stable set of scenarios gives an operational advantage because the event losses can be drilled down to individual policies for marginal impact analyses, pricing and portfolio management.

Transparency

Traditional vendor models are opaque, meaning users cannot see the assumptions underlying the model outputs. They do not know what mistakes or unique biases might be informing the model calculations, and they can never be certain what drove significant changes in their PMLs with the most recent model update.

Newer platforms for catastrophe risk management help address the transparency issue. All components of these platforms, including event intensity footprints and damage functions, are visible to the user.

With their truly open architecture, these tools allow insurers and reinsurers to utilize the knowledge of scientific organizations around the world. And the open environment means important assumptions can be properly peer-reviewed, thereby helping companies better comply with regulatory guidelines and directives.

Flexibility

Open platforms also afford greater flexibility because they allow companies to customize key risk management components, such as vulnerability curves, to a specific book of business based on the internal expertise of the user and other external experts. For example, companies with sufficient loss experience can conduct their own detailed claims analyses to fine-tune damage functions to more accurately reflect their unique portfolio. And users can “mix and match,” using event sets from one scientific organization and damage functions from another for a particular peril region.
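As a sketch of that kind of customization, suppose a company has location-level claims with observed hazard intensities (the data below are invented): a reference damage function can be rescaled so that, on average, it reproduces the company's own damage experience.

```python
# Hypothetical claims experience: (hazard intensity, observed damage ratio) pairs.
claims = [(120.0, 0.04), (150.0, 0.08), (175.0, 0.18), (190.0, 0.30)]

def reference_damage_ratio(intensity):
    """Reference damage function before any customization."""
    if intensity < 100.0:
        return 0.01
    if intensity < 160.0:
        return 0.05
    return 0.20

# Fit a single scaling factor so the reference curve reproduces the observed
# damage ratios on average for this book of business.
observed = sum(damage for _, damage in claims)
predicted = sum(reference_damage_ratio(intensity) for intensity, _ in claims)
scale = observed / predicted

def custom_damage_ratio(intensity):
    """Damage function fine-tuned to this portfolio's claims experience."""
    return reference_damage_ratio(intensity) * scale

print(f"Scaling factor from claims analysis: {scale:.2f}")
```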

A NEW PERSPECTIVE

Catastrophe models provided by a small number of vendors have been the dominant tools for making risk management decisions over the past 20 years. However, newer technology offers insurers and reinsurers more insight into the risk and more control over their risk management strategies, specifically by providing more consistent and transparent risk metrics in an open platform environment. Open platforms can be customized and enable companies to build proprietary views of risk more scientifically and efficiently than model-blending or adjusting model output.

These new tools can be used by companies to build their pricing, underwriting and portfolio management systems around platforms that are informed by, but not based on, existing models. A more complete toolkit enables companies to move from simply modelling the risk to better understanding and managing the risk.