The first diamond open-access peer-reviewed journal of catastrophe research

Insights into Industrial Catastrophe Risk Climate Peril Modelling, and Cross-Sector Opportunities for Progress

Volume 02, article 04
July 31, 2024
Author(s): Jayanta Guin, Suz Tolwinski-Ward, Charles Jackson, Boyko Dodov, and Roger Grenier
DOI: 10.63024/v1nc-swnf

Authors Jayanta Guin, Suz Tolwinski-Ward, Charles Jackson, Boyko Dodov, and Roger Grenier of Verisk Extreme Event Solutions, Boston, MA, introduce the catastrophe risk modelling industry, describe the history and latest developments in state-of-the-art cat models within their firm, and discuss how further collaboration across sectors could contribute to progress in the estimation of climate-driven risk.

Introduction

It is gratifying to see the recent explosion of interest in (re)insurance risk modelling within scientific communities, especially those outside our industry focused on atmospheric hazards. Increased participation from a range of perspectives offers the possibility of accelerated advancement of the field; this is especially important in the context of a rapidly changing climate and the social imperative to improve humanity’s resilience to catastrophic weather extremes.

There is a wealth of expertise within our mature industry about making science usable for risk estimation and management. In fact, catastrophe modelling has developed over time to fulfil this very need. We offer newcomers to the field of atmospheric perils risk modelling our perspective on the state of the field and areas of need within the industry that could benefit from external research. We focus this piece on the interchange of emerging science and hazard model component development, and do not address other critical components of risk modelling, like vulnerability assessment, or actuarial and financial modelling.

Getting to know the industry and our interest in climate science

Verisk Extreme Event Solutions is a catastrophe risk model developer that was founded in 1987 as Applied Insurance Research; it was the first company engaged in building commercial models for this purpose. The end-to-end “cat models” we develop are delivered as software solutions that allow users in the (re)insurance industry to estimate risk from hazards to large portfolios of built structures or individual risks. Users of our models input data detailing their exposures – that is, the properties they are interested in protecting against risk.

These portfolios typically range from a single risk to many millions of properties, each with geo-coordinates, an estimated value, and potentially many construction details and insurance policy conditions. The software outputs inherently probabilistic estimates of losses aggregated across the portfolio due to the activity of a given natural peril, expressed as an exceedance probability curve, or loss as a function of return period. In this manner, the software functions as a Rosetta stone for our clients, translating science, climatology, engineering knowledge, and insurance policy terms into estimates of probability distributions of monetary loss from extreme natural hazards of many types. These estimates allow users to design products and strategies to manage the likelihood of losses, thereby providing a fundamental tool for societal resilience.
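
To make the output concrete, here is a minimal, purely illustrative sketch (not Verisk's methodology) of how an exceedance probability curve and a return-period loss can be read from a catalogue of simulated annual portfolio losses; all numbers and distributional choices below are synthetic assumptions for demonstration only.

```python
# Hypothetical sketch: empirical exceedance probability (EP) curve from
# simulated annual losses. Synthetic data only; not a production method.
import numpy as np

rng = np.random.default_rng(42)
n_years = 10_000                                                  # simulated catalogue size
annual_loss = rng.lognormal(mean=15.0, sigma=1.2, size=n_years)   # synthetic annual losses

# Rank losses from largest to smallest and assign empirical exceedance probabilities.
sorted_loss = np.sort(annual_loss)[::-1]
exceedance_prob = np.arange(1, n_years + 1) / (n_years + 1)
return_period = 1.0 / exceedance_prob

# Interpolate the loss at a chosen return period, e.g. 1-in-250 years.
loss_250 = np.interp(250.0, return_period[::-1], sorted_loss[::-1])
print(f"1-in-250-year loss (synthetic): {loss_250:,.0f}")
```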

In between the user input and the software output sits the chassis of models the Verisk Extreme Event Solutions Research and Modeling group spends time building, supporting and improving. While we have developed models for a wide variety of risks, in this paper we focus on the subset that address natural hazard perils impacted by climate. These include the risk that tropical cyclones, floods, extratropical cyclones, wildfires, and severe thunderstorms pose to the built environment, as well as weather hazard risk to agriculture.

The exceedance probabilities provided by our solutions rest on the foundation of large Monte Carlo ensembles of physically plausible simulations, each representing a year’s worth of activity for the modelled natural hazard. The annual unit of simulated activity within Verisk’s models reflects the annual renewal cycle of our clients’ (re)insurance products. We note that the goal of the tool is not to forecast insurer losses over the course of any given year, but to fill out the probability space of losses to provide estimates of expected and tail values of risk for our users’ strategic planning.
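
The "simulated year" idea can be conveyed with a toy frequency-severity sketch. We assume here, purely for illustration, Poisson-distributed annual event counts and heavy-tailed per-event losses; the real catalogues are built from physical simulation of the hazard itself, not from these convenience distributions.

```python
# Toy sketch of assembling a catalogue of simulated years, assuming a
# Poisson event frequency and Pareto event losses (illustrative only).
import numpy as np

rng = np.random.default_rng(7)

def simulate_year(annual_rate=1.8):
    """Total synthetic loss for one simulated year of activity."""
    n_events = rng.poisson(annual_rate)                      # number of events this year
    event_losses = rng.pareto(a=2.5, size=n_events) * 1e6    # synthetic per-event losses
    return event_losses.sum()

catalogue = np.array([simulate_year() for _ in range(50_000)])
print("expected (mean) annual loss:", round(catalogue.mean()))
print("approximate 1-in-250-year loss:", round(np.quantile(catalogue, 1 - 1 / 250)))
```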

Each peril-region combination where Verisk develops and supports such a catastrophe risk model requires a large multidisciplinary effort within our Research and Modeling team. An appropriate model requires not only scientific understanding of the atmospheric phenomenon, but also understanding of the local building stock and its vulnerabilities, plus analysis and interpretation of available claims data from client partners. Local economics, insurance take-up rates and policy terms, and any local regulations on modelling or insurance policies also come into play. Understanding the limitations of available observations and robust statistical characterisation of all these facets is key.

Solid model design also requires consideration of usability within a software package. Practical run times constrain the size of underlying data and demand algorithms that can deliver risk estimates at building-level resolution for portfolios at continental scales. Effective solutions to such a multi-faceted problem therefore require a vast range of expertise. The level of sophistication of our staff may surprise some newcomers to the field. At the time of writing, the Research and Modeling Department at Verisk Extreme Event Solutions is composed of 174 people, 90% of whom hold advanced degrees and 55% of whom hold PhDs, across the physical, mathematical, and engineering sciences. The institutional and applied scientific knowledge developed over time in the context of an evolving modelling landscape is a critical element in the value we deliver to our clients.

One commonly repeated misconception among newcomers to the field is that catastrophe models are based solely on physics-free interpretation of historical data. Such a design would obviously limit the ability to represent hazards, especially in a changing climate. In fact, one fundamental value proposition of probabilistic cat modelling has always been to cover the state space of physically plausible risk outcomes more fully than can be done through statistics on historical data alone. One core aspect of our expertise is therefore in blending physics with statistical modelling. As computing power has advanced and data availability has exploded over time, we have performed this integration of mechanistic and data-driven sources of information with increasing levels of formality.

Since the late 1990s, numerical weather prediction (NWP) models have become a well-used tool for realistic simulation of a spectrum of activity in many of our models, with their first application in modelling extratropical cyclones over Europe. The wind fields of extratropical cyclones are much less structured than those of hurricanes, so the goal of capturing the messy and complex footprints accompanying this peril made it a natural first candidate for the use of physics-based modelling in lieu of cheaper parametric wind field formulations.

In the first instance, computational limitations led us to use a rudimentary physics-based model based on the 2-dimensional shallow water equations. In 2001, the evolution of accessible compute power made it feasible to replace this early 2-D physical ETC model basis with simulations using the Fifth-Generation Penn State/NCAR Mesoscale Model (MM5), a precursor of the Weather Research and Forecasting (WRF) model used in operational forecasts today. More recently, for our European and North American flood model domains, we have integrated 10,000 years of WRF runs, nested within GCM runs of the same length that provide boundary conditions, to produce stochastic simulations of precipitation. Ensemble runs of historical hurricanes using the WRF model form the calibration set for the statistical modelling of space-time hurricane precipitation fields in our US hurricane model. Our current development work moves beyond the use of NWP and toward AI-assisted bias correction and downscaling of general circulation models.
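
As a simple stand-in for the general idea of bias correction (the AI-assisted methods mentioned above are far more sophisticated and operate on full atmospheric fields), the sketch below applies classical quantile mapping to synthetic "model" and "observed" precipitation samples.

```python
# Hedged illustration: quantile-mapping bias correction on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=5.0, size=5_000)     # stand-in "observed" precipitation
model = rng.gamma(shape=2.0, scale=3.5, size=5_000)   # stand-in biased "model" precipitation

def quantile_map(x, model_ref, obs_ref, n_q=101):
    """Map model values onto the observed distribution via matched quantiles."""
    q = np.linspace(0.0, 1.0, n_q)
    return np.interp(x, np.quantile(model_ref, q), np.quantile(obs_ref, q))

corrected = quantile_map(model, model, obs)
print(f"model mean {model.mean():.2f} -> corrected mean {corrected.mean():.2f} "
      f"(obs mean {obs.mean():.2f})")
```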

Another misapprehension about contemporary catastrophe risk modelling at Verisk Extreme Event Solutions relates to the angle from which we are interested in climate change. Because our models are meant to estimate current levels of risk, it is critical that our simulations of hazard account for changes in climate that have already taken place. We refer to this task as modelling the “Near-Present Climate,” and consider it a core aspect of building reliable and credible cat models.

Representing the current climatology of any hazard requires us to focus on understanding and evaluating sources of nonstationarity in the available historical data. Physical understanding of the climate-hazard system must be used in tandem with quantification of internal process variability to ascertain which apparent changes in hazard distribution are robust. Because of this necessity, we have climate scientists embedded in the model development teams of every atmospheric peril. These teams are guided by our Director of Atmospheric Perils Modelling, a seasoned climate scientist with three decades of research experience interpreting comparisons between uncertain climate models and observations. The importance of physics-based expertise is also reflected in several active collaborations with external experts, including a Climate Advisory Panel (initially formed in 2022) whose members provide us with guidance and a valuable sounding board on physical aspects of climate change pertaining to our modelled perils. Knowledge of uncertainties and of changes in data quality and coverage over space and time plays a role too. A separate Climate Statistics team, composed of members with statistical and climate science expertise, is dedicated to working with the model development teams to analyse and quantify both the internal variability and trends in hazards.
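
One generic way to ask whether an apparent trend stands out from internal variability, offered here only as an illustrative sketch and not as the workflow of our Climate Statistics team, is to compare the observed least-squares trend against trends computed from AR(1) surrogate series fitted to the detrended data.

```python
# Illustrative sketch: is a fitted trend distinguishable from AR(1) noise?
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1970, 2024)
hazard_index = 10 + 0.02 * (years - years[0]) + rng.normal(0, 1.0, size=years.size)  # synthetic series

# Observed trend and detrended residuals.
coeffs = np.polyfit(years, hazard_index, 1)
slope_obs = coeffs[0]
resid = hazard_index - np.polyval(coeffs, years)

# Fit an AR(1) process to the residuals, then build a null distribution of trends.
phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]
sigma = resid.std(ddof=1) * np.sqrt(1 - phi**2)

def ar1_surrogate(n):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)
    return x

null_slopes = np.array(
    [np.polyfit(years, ar1_surrogate(years.size), 1)[0] for _ in range(2_000)]
)
p_value = np.mean(np.abs(null_slopes) >= abs(slope_obs))
print(f"observed trend {slope_obs:.4f}/yr, two-sided p-value ~ {p_value:.3f}")
```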

The specific approach we use to create models reflecting the current distribution of hazard frequency and intensity varies depending on the particulars of the peril, region, and available datasets. Careful consideration of both physical and statistical perspectives has nevertheless been a mainstay over time. One example is our well-known “Warm Sea Surface Temperature” (WSST) US hurricane event set, first released by Verisk in 2010. This alternative to a straight calibration against the entire length of the historical record was based on peer-reviewed analysis of the statistical significance of the positive association between increases in landfall frequency and basinwide SSTs, and supported by well-established thermodynamic arguments. This view of risk has been available to support our (re)insurance clients with their business decisions even while scientific debates over the cause of the North Atlantic warmth were ongoing.
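
The conditioning idea behind such an alternative view of risk can be conveyed with a toy example: resample seasons whose basinwide SST anomaly lies above a threshold and recompute landfall frequency. This is only a cartoon of the concept on synthetic data, not the method used to build the WSST event set.

```python
# Cartoon of conditioning a frequency estimate on warm-SST seasons (synthetic data).
import numpy as np

rng = np.random.default_rng(3)
n_years = 120
sst_anom = rng.normal(0.0, 0.3, size=n_years)                      # synthetic SST anomalies
landfalls = rng.poisson(1.5 + 0.8 * np.clip(sst_anom, 0, None))    # synthetic landfall counts

warm = sst_anom > np.median(sst_anom)   # "warm SST" subset of seasons
print("all-years mean landfall rate:", landfalls.mean())
print("warm-SST conditional rate:   ", landfalls[warm].mean())
```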

With respect to estimates of future risk, our research staff is keenly aware of the uncertainties in climate change modelling and science at small spatial scales and at time horizons beyond the next five to 10 years. Given the current state of science, we do not claim to be able to produce unbiased high-resolution future risk estimates, nor do we wish to mislead clients with false precision in order to capitalise on modelling that cannot be validated. In our more limited projects relating to future risk, we tend to aid our clients with scenario-based estimates that can be used for narrative evaluations of future risk through stress-testing for regulatory reporting and strategic planning use-cases. In these contexts, we also work to educate our clients on the scientific uncertainties and fundamental limits of both current scientific understanding and the application of our risk modelling frameworks to future-projected climates.

Understanding the industry modus operandi

Differences in approaches to seemingly closely related problems in industry and academia stem from a fundamental difference in the ultimate goal of the work being done. Research and development activities in industrial cat modelling are entirely driven by the mandate to provide models that support action. This goal stands in contrast to goals within academia and government labs, which may create models for exploratory and fundamental science. Where uncertainty or limitations exist in academic studies, these are stated and described, often with a recommendation to “interpret results with caution” in light of the unknowns. Because the very mandate of cat model developers is to provide actionable, quantitative estimates of risk under uncertainty, implementation choices must be made in spite of imperfect knowledge. Uncertainties are transparently stated, quantified, and represented within the model where possible.

In industry, once a project is complete, we are also beholden to provide continuous support for the output, tracking the impact of the limitations of results and keeping an eye on the relevance of new information. While model updates are sometimes driven by advances in science and modelling techniques, changes are just as often undertaken for other reasons. These may include regulatory requirements within the insurance industry, changes in research or software compute power, updates to datasets relevant to hazard, vulnerability, or exposure, or the availability of new claims data. In every case, model updates also mean helping our users bridge differences between versions when the next generation of a solution is made available, often long after the underlying research has concluded.

We (and our users) continuously interrogate our models for robustness as new extreme events occur. Sometimes our models hold up to that scrutiny, and other times we must react quickly to incorporate new learnings. One example is our reaction to the 1990 and 1999 European extratropical cyclone seasons, in which the region was impacted by multiple clustered storms. Following these events, we updated our models quickly to represent clustering behaviour more explicitly. More recently, we added a specific tropical cyclone flooding component to our US hurricane model after the flooding from Hurricanes Harvey in 2017 and Florence in 2018.

The imperative to provide this continuity and support of our output contrasts with the remit of academics, where researchers have complete freedom to move from one problem to the next. The means by which models become vetted and gain credibility in industry also contrasts with standard procedures for review within fundamental science. Perhaps the most obvious difference is the lack of reliance on peer-reviewed publication. At Verisk, we frequently create models which we feel advance the frontier of applied science, but the benefit of building credibility and eminence within a larger intellectual circle through peer-reviewed publication must be weighed against the risks of giving away intellectual property and competitive market strategy. When publication is deemed advantageous, the timing of submission relative to model release becomes an important consideration.

Our limited publication record, however, is not intended to make our modelling opaque. Although we do not make our modelling results reproducible to all comers as academic researchers strive to do, transparency – not to the public at large, but to licensed model users, and without giving away intellectual property – is a priority. To this end, we provide a technical document for each model that is typically hundreds of pages long, and we provide hazard module output in event databases that are part of the software to facilitate client model evaluation. Answering specific client questions covering a range of sophistication is a routine and time-intensive part of our ongoing model support activities.

Perhaps most interesting for outsiders to consider is the long-term “peer review” scrutiny that the implementation of our ideas comes under simply through regular use. Legions of users constantly slice and dice the outputs of our models in new and unanticipated ways, with an endless variety of client property portfolios considered for underwriting each renewal season. Many clients also hire sophisticated staff with PhDs in relevant physical and engineering disciplines, who test the scientific basis against their own independent datasets and expertise. In this manner, credibility is built over time as many parties successfully use the models as currency for risk transfer within the marketplace. This scrutiny is arguably more intense than a typical peer review for journal publication.

On the technical side, one mandate for industry-grade catastrophe risk models is that the probabilistic estimates they provide must be unbiased across scales ranging from local to continental. This charge is not a requirement for modelling work done for the purpose of fundamental science. In fundamental studies, understanding can be derived not only in spite of model biases, but sometimes even through careful consideration and study of where and how they arise. An obvious example is the set of biases in space, time, and frequency found within climate models, an inescapable artefact of insufficient resolution of important processes. Nevertheless, these models represent the state of the art for advancing humankind’s understanding of the earth-climate system.

Luckily, this well-known fact doesn’t grind scientific progress to a halt; climate change can still be studied in terms of percentage changes even from an imperfect baseline. In risk estimation, however, such known biases are unacceptable. It is also critical that industrial cat models explicitly capture the full range of variability of underlying processes, as proper risk management must account for years of loss above and below the expected value. Again we contrast this dictum with models used for academic climate science, where it is often too expensive to run large ensembles to estimate a full distribution of outcomes, and studies often focus on estimating changes in mean quantities.

Cat models must also converge, in the statistical sense, to reasonable values at an arbitrary combination of spatial locations. There is no ability to calibrate the model risk estimates against historical data “on the fly” for any possible set of locations our clients might input into our tools at run time. This stands in contrast to some academic hazard studies, which can be specified and then calibrated per spatial use-case of interest. The need for statistical convergence in the tails and at such a wide range of spatial scales requires enormous sample sizes, and thus dictates a focus on efficiency and the ability to scale our solutions to make them feasible and accessible to clients at reasonable computational cost. Generally, the number of simulated years needed to attain convergence (and thus a stable risk estimate) varies by return period, region, peril, and the spatial scale being considered. In our submission of the US Hurricane Model to the Florida Commission on Hurricane Loss Projection Methodology, we are required to show that loss estimates converge at the county level for Nassau County, Florida, within the 50,000 modelled years of submitted activity. Another difficult example we have considered in the course of practical model building is providing flood depth at 10-metre resolution and a 1,000-year recurrence across an entire continent, or even globally.
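
The convergence question can be visualised with a small synthetic experiment: estimate a tail quantile of annual loss from progressively larger catalogues and watch the estimate stabilise. Again, this is only an illustrative sketch using made-up heavy-tailed losses, not output from any of our models.

```python
# Illustrative sketch: stability of a 1-in-250-year loss estimate versus
# the number of simulated years (synthetic heavy-tailed annual losses).
import numpy as np

rng = np.random.default_rng(11)
full_catalogue = rng.pareto(a=2.0, size=500_000) * 1e6   # synthetic annual losses

for n in (1_000, 10_000, 50_000, 200_000, 500_000):
    loss_250 = np.quantile(full_catalogue[:n], 1 - 1 / 250)
    print(f"{n:>7} simulated years -> 1-in-250 loss estimate: {loss_250:,.0f}")
```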

We do not highlight any of these differences to diminish the value of modelling for fundamental science – without which, of course, catastrophe models would not exist. Still, we feel the current state of cat modelling is underappreciated by many in external research communities simply because the modes of operation are foreign to observers oriented toward fundamental exploratory science.

The potential to drive progress together across sectors

Progress in catastrophe risk modelling depends critically on progress in science at large. Many aspects of the significant evolution of state-of-the-art cat models over the past several decades reflect broad improvements in the understanding of earth system physics, process representation in climate and numerical weather prediction modelling, and the resolution of reanalysis products, for example.

The transfer of this knowledge occurs as builders of catastrophe risk models attend conferences, read the literature, and generally keep up with their fields of expertise as a matter of best practice, so that our models stay up to date. This first – though unidirectional – mode of engagement with external research underscores the value of scientific inquiry pursued for its own sake, and it will continue regardless of external efforts to tailor research to the needs of the (re)insurance sector or catastrophe risk modelling.

The recent growing interest among external research communities to “flip the script” and learn about the interesting applied problems in (re)insurance offers the possibility for multi-directional engagement between sectors that should benefit all. Topics that challenge developers of industry-grade catastrophe risk models should be a natural source of interesting and high-impact applied problems for academic and government research communities. Meanwhile, having external scientists who understand the outstanding issues and vocabulary of the risk modelling industry should result over time in more usable results from academic science for future improvements in catastrophe risk model development.

Moving forward, we see convergence in some types of research topics that are of both academic and practical interest to the catastrophe modelling industry. The first opportunity is for the research community to specifically target the study of quantities that are more relevant to the frequency, intensity, and location of specific perils in a risk modelling context. For instance, the tracking of extratropical cyclones in the literature often focuses on quantities such as geopotential height or vorticity anomalies above the boundary layer and into the mid- to upper-troposphere. However, not all structures above the boundary layer have the same level of impact on surface winds, which constitute the actual risk-relevant hazard.

Similarly, studies of tropical cyclone statistics typically focus on basin-wide activity, while landfalling activity is the directly relevant quantity. Risk estimation also involves representing the full distribution of a hazard. Thus, the study of all features of a distribution, rather than a focus on the mean and its changes, is of interest. For example, the variability of major hurricane landfall counts along the US coastline experienced a huge increase, with back-to-back record-shattering counts in 2004 and 2005, followed by nine years without any. Can this increase in the variance be explained in terms of the mechanistic controls on landfall and intensification, and do we expect this recent increase in the interannual variability of risk to persist?

The design of more efficient climate models for simulation of extreme event statistics presents another opportunity. A useful model for risk should include the processes and interactions needed for simulating extreme event characteristics, without the computational burden of nonessential details. We have found that machine learning algorithms can compensate for the deficiencies of an older version of the Community Atmosphere Model (CAM4) to yield a result that is superior, for cat modelling purposes, to the current state-of-the-art 0.25-degree version at 1% of the computational cost. Such blending of modelling paradigms is still in its infancy; there is much more that can be explored and accomplished along these lines.
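
As a deliberately simplified stand-in for this idea, the sketch below trains a ridge regression to correct a cheap "coarse model" toward a higher-fidelity synthetic target; the production approach applies machine learning to full atmospheric fields and is far more elaborate.

```python
# Toy illustration: learning a statistical correction from coarse-model
# predictors to a higher-fidelity target (all data synthetic).
import numpy as np

rng = np.random.default_rng(5)
n = 5_000
coarse = rng.normal(size=(n, 4))                                                # coarse-model predictors
target = 2.0 * coarse[:, 0] - 1.5 * coarse[:, 1] ** 2 + rng.normal(0, 0.3, n)   # "high-fidelity" truth

# Ridge regression on simple nonlinear features of the coarse predictors.
X = np.column_stack([coarse, coarse**2, np.ones(n)])
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ target)
pred = X @ w

rmse_raw = np.sqrt(np.mean((target - coarse[:, 0]) ** 2))   # naive: use the leading predictor directly
rmse_ml = np.sqrt(np.mean((target - pred) ** 2))
print(f"naive RMSE {rmse_raw:.2f} vs learned-correction RMSE {rmse_ml:.2f}")
```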

Another area of potential mutual interest is the identification and interpretation of robust and predictable local trends in weather data. Within historical datasets, identifying significant trends at scales relevant to risk is a challenge because of spatiotemporal heterogeneities in data quality and coverage. Decisions about whether or not risk models should explicitly account for trends that have only borderline statistical support need to be bolstered by physical interpretation of the changes via comparison to earth system model simulations.

In this context, the challenge becomes the identification of high-resolution “fingerprints” shaped by regional and fine-scale processes, where models may have less skill. Can the success the community has had in the detection and attribution of climate change signals at global and continental scales be skillfully extended to very high resolution? It is equally relevant to be able to interpret and quantify trends resulting from multi-year to decadal modes of variability. An interesting research question focused on the stability of relationships between large-scale natural variability and local hazard follows: which “teleconnected” impacts on hazard will remain stable under a warmer future climate state?

Finally, there is some lower-hanging fruit the science community could more routinely provide to make research more usable. Accessible, globally unified, homogeneous, and continuously maintained datasets of important underlying quantities are critically important, and even more useful when accompanied by uncertainty estimates. Review papers on the state of the science on climate change impacts on specific hazards are also extremely valuable. An analogue to the Knutson et al. (2019, 2020) reviews of the literature on climate change impacts on tropical cyclone activity would be very welcome for floods, extratropical storms, severe thunderstorms, and wildfires. Provision of the ranges and uncertainties of estimates for the same quantity across such meta-studies, as well as a flavour for the level of consensus, is incredibly valuable. Clear and emphatic communication of emerging scientific results with high impact on the risk landscape, by scientists in venues where risk managers and modellers can pick up the messaging, is highly appreciated. A review paper on the limits of scientifically justified uses of climate-projected information for different perils would also be hugely useful for us to point to as our clients increasingly ask for help in gauging their future risks. Such work would additionally help potential users assess the credibility of products on the market purporting to give localised, unbiased estimates of future climate.

Conclusion

The management of evolving risk due to climate change is a technically and scientifically challenging problem. Combining experience and perspectives across sectors should aid in deriving the best science and solutions that can contribute to a more resilient future. The Verisk Extreme Event Solutions research staff looks forward to increased engagement with basic and applied researchers of all stripes, and the learning and cross-pollination that can only improve our models and the outlook for society going forward.

Article Citation Details

Jayanta Guin, Suz Tolwinski-Ward, Charles Jackson, Boyko Dodov, and Roger Grenier, Insights into Industrial Catastrophe Risk Climate Peril Modelling, and Cross-Sector Opportunities for Progress, Journal of Catastrophe Risk and Resilience, 2024. 10.63024/v1nc-swnf
