Jessica Turner, Managing Director and Head of International Catastrophe Advisory at Guy Carpenter, highlights the challenges presented by incorporating climate change physical risk in catastrophe modelling, recommends some best practices, and discusses why improving our modelling can be beneficial, not only for the industry, but the wider economy.
As 2022 came to a close and the most difficult renewal in more than a decade was finishing up, it became clear that other issues – the cost-of-living crisis, war in Ukraine – had pushed climate change down the list of insurers' key concerns. In the World Economic Forum's 2023 Global Risks Report (WEF, 2023), environmental concerns continued to dominate the long-term outlook, but in the near term, social concerns took the top spot.
Climate change has long been acknowledged as a threat to the insurance industry, and yet, year after year, insurance professionals set these concerns aside; to them, it seems, the impacts of climate change belong to tomorrow's world, not today's. But the effects are already happening. When El Niño returns, raising global temperatures, an anomaly of 1.5°C – relative to pre-industrial temperatures – may be reached; this is an important psychological boundary, even if not technically the threshold referred to by the Paris Agreement (which is an average over multiple years). It remains to be seen if crossing that line will bring more urgency to the discussion.
Increased insurance losses have certainly been making the headlines, often accompanied by dramatic images of devastation. So-called secondary perils, such as wildfires, convective storms and flooding, have been particularly damaging. Swiss Re estimates insured losses in 2022 were USD 122 billion (Swiss Re, 2022), significantly above the 10-year average, with the recent trend in catastrophe losses increasing by 5-7% annually; this led to a hardening of the market and nervousness about the quality of the different catastrophe models used to manage risk. Some secondary perils are known to be impacted by climate change more than others, raising the question of to what extent climate is the culprit for the increases. At the same time, the world has been warming and the built environment has been expanding, putting more exposure in the path of hazards.
Increased insurance losses have also been affected by COVID-related supply chain issues, while cost inflation in materials and labour has driven up repair costs. In the United States, issues around ageing infrastructure and litigation have also compounded the problem. A recent internal analysis by Guy Carpenter, based on some of our own climate change catastrophe model adjustment efforts and on inference from the scientific literature, such as that included in the IPCC report, estimates that in the near term climate change adds around 1% per annum to global average annual loss, with a very large range of uncertainty and with some individual perils increasing at a higher rate.
Given the acknowledged threat of climate change, a number of modelling providers, both traditional and new entrants, have rushed to provide quantitative analytics to estimate the physical risk from climate change. Methodologies and quality of approaches are varied, but in general the estimates are based on credible science and find loss estimates considered manageable by the industry when annualised – even if these annual increases are rarely applied in practice by insurers; in a competitive market few want to be the first to move. Using these analytical tools, is climate change physical risk to insurance portfolios therefore a solved problem? Do we have the right knowledge, tools and skills, or are we missing something key?
Understanding your toolkit
Climate models (the best tools we have to understand how extreme weather might evolve in the future) and catastrophe models (the best tools we have to price and manage natural catastrophe (Nat Cat) insurance risk today) are constructed in fundamentally different ways. Climate models share many similarities with the numerical weather models used for weather forecasts in that they are deterministic and incorporate the mathematics of the atmosphere, as well as important interactions between components including the biosphere, cryosphere, anthropogenic gases and atmospheric chemistry. Unfortunately, they have certain limitations with regard to insurance risk, particularly in the way they represent extremes.
Although climate models are extremely sophisticated, they are run at relatively low horizontal and vertical resolution; this is due to the computational power needed to solve physical equations in a deterministic way over 80+ years. As a result, some aspects of the climate system, such as cloud microphysics or convection, cannot be explicitly represented. Newer climate models are increasing in resolution and improving in their representation of extremes; however, the strongest winds in tropical cyclones, hail and convective storms, and the most extreme precipitation, among other hazards, are generally under-represented. Nevertheless, scientists do use proxies and other techniques to estimate the impact of climate change on these phenomena and can produce statements such as: “For the global proportion of TCs that reach very intense levels, there is … a median projected change of +13%.” (Knutson et al. 2020) or “…the frequency of damaging convective weather events including lightning, hail and severe wind gusts will likely increase over Europe” (Radler et al., 2019). Conclusions such as these can – and are – being used to adjust catastrophe models to the future climate.
An important nuance when thinking about climate model output is their representation of what scientists call ‘internal variability.’ Strictly speaking, the term climate refers to patterns of temperature, rainfall and other physical variables averaged over a period of time. The average climate in a location is the consequence of things such as its proximity to the equator or the sea, or the level of greenhouse gases in the atmosphere. The value of those physical variables on any given day in that location stems from the climate signal plus internal variability, the causes of which are multiple. Internal variability can refer to phenomena as wide ranging as the El Niño-La Niña cycle or an unusually strong outbreak of summer thunderstorms. Climate models include internal variability, but generally users try to average out the phenomena to isolate the greenhouse gas impacts alone.
Catastrophe models may have some dynamical components; however, they tend to rely more on statistical methods than dynamical ones, making many simplifications and approximations. This is so they can be run for tens of thousands of years at high resolution, creating large catalogues of stochastic events with a frequency and severity that captures the statistical distribution of extreme weather. These events range in frequency from those which could happen once a year on average to events so extreme they are only expected once in many thousands of years.
These models complement and expand on existing historical records, creating large numbers of synthetic events that have not occurred but are physically plausible. Importantly, these models are not attempting to forecast what events will occur in the coming years, but they do provide a view on the distribution of potential outcomes and their likelihood. It is, however, important to note that the reality in any given year can be wildly different from the mean of the model’s distribution.
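The mechanics described above can be illustrated with a toy simulation. The sketch below is illustrative only – the Poisson frequency and lognormal severity parameters are assumptions, not calibrated values – but it shows how a catalogue of simulated years yields an average annual loss (AAL) and return-period losses from the empirical distribution:

```python
import numpy as np

rng = np.random.default_rng(42)

N_YEARS = 50_000      # size of the stochastic catalogue, in simulated years
FREQ = 1.2            # assumed average number of events per year (Poisson rate)
MU, SIGMA = 2.0, 1.5  # assumed lognormal parameters for per-event loss

# Simulate the aggregate loss for each synthetic year: draw how many events
# occur, then draw a loss for each event and sum them.
annual_losses = np.empty(N_YEARS)
for i in range(N_YEARS):
    n_events = rng.poisson(FREQ)
    annual_losses[i] = rng.lognormal(MU, SIGMA, size=n_events).sum()

# Average annual loss (AAL) and return-period losses read off the empirical
# exceedance-probability (EP) curve.
aal = annual_losses.mean()
for rp in (10, 100, 1000):
    loss = np.quantile(annual_losses, 1 - 1 / rp)
    print(f"1-in-{rp} year loss: {loss:.1f}")
print(f"AAL: {aal:.1f}")
```

Real models replace the two distributions here with peril-specific hazard footprints, exposure databases and vulnerability functions, but the frequency-severity skeleton is the same.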
Known disadvantages of the hazard components of catastrophe models include their extrapolation from historical experience, which is necessarily limited by short observational records, and their potential to produce non-physical events given their typical construction from statistical techniques. Unlike climate models, they are also attempting to be loss models, so they additionally have weaknesses related to unknowns in the vulnerability of the built environment, as well as difficulties relating to fast-moving changes in the economy such as inflation spikes.
Current practice by users wishing to account for climate change in catastrophe models is to adjust (commonly) the frequency and (occasionally) severity of the underlying baseline event set in the stochastic catalogue according to conclusions from climate model output. Using the example statements previously described from the climate change literature, one could increase the rate of Category 4 and 5 tropical cyclones by 13%, or increase the number of hail events per year in Europe by 5%. The advantage of this practice is that it is relatively easy to do operationally by either the model developers or users and can quickly translate scientific knowledge into results.
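Operationally, such an adjustment amounts to rescaling the annual rates of a subset of the event catalogue. A minimal sketch – with invented rates and losses purely for illustration – applying a +13% frequency uplift to the most intense tropical cyclone categories might look like this:

```python
# Each stochastic event carries an annual rate and a mean loss; the AAL is the
# rate-weighted sum of losses. A climate adjustment simply rescales the rates
# of the affected subset of events.
events = [
    # (intensity bin, annual rate, mean loss) -- invented numbers
    ("cat1", 0.40, 50.0),
    ("cat3", 0.10, 400.0),
    ("cat4", 0.03, 1500.0),
    ("cat5", 0.01, 4000.0),
]

def aal(evts):
    return sum(rate * loss for _, rate, loss in evts)

# Frequency uplift of +13% for Category 4-5 events, in the style of the
# Knutson et al. (2020) statement quoted earlier.
UPLIFT = {"cat4": 1.13, "cat5": 1.13}
adjusted = [(c, rate * UPLIFT.get(c, 1.0), loss) for c, rate, loss in events]

print(f"baseline AAL: {aal(events):.1f}")   # -> baseline AAL: 145.0
print(f"adjusted AAL: {aal(adjusted):.1f}")
```

The simplicity is the appeal: only the event rates change, so the rest of the modelling pipeline is untouched.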
There are also clear disadvantages. New kinds of events that may be possible in a future climate, but that are not contained in the original event set, cannot be represented. Changes more complicated than simple rate changes, including a shift in storm size, a change in the distribution of hail sizes, or a slowing of the translational speed of hurricanes, also cannot be easily represented, if at all, using this technique. Finally – and this is a slightly separate point to come back to – changes in exposure characteristics, vulnerability or mitigation measures are not typically included in the future estimates.
Timing is everything
When considering the modelling of physical risk from climate change and its potential skill, the time horizon of interest is a key consideration. There are fundamental differences between the relevant use cases, modelling techniques and reliability in the near-term (i.e. the current climate and up to five to 10 years in the future), versus the long-term (i.e. 20+ years in the future). In the near-term, insurers want to feel confident that the models they use to estimate the current risk are suitable, well-calibrated, and not underestimating the risk. The impact of climate change over this time horizon will be incorporated into business decisions such as pricing levels, capital holdings and growth strategies.
Catastrophe models are constructed from, and calibrated to, historical observations, both of hazards and of experienced losses. Usually, though not always, stationarity in the observations is assumed, particularly for physical data, meaning for example that an event that occurred in 1980 is given the same weight as one that occurred in 2007. In addition, the longest possible observational record is sought. This makes sense, since we are dealing with extremes, and they are necessarily observed infrequently. If, however, a peril has characteristics of non-stationarity, such as if climate change is acting rapidly to alter it, then the statistics used to build and calibrate the models will be incorrect.
Providers of catastrophe models that assume stationarity in the historical observations will often point to the difficulty in showing a trend in extreme events in the past. They are not wrong. To demonstrate that point, Figure 1 shows precipitation trends in the ERA5-Land reanalysis for total annual precipitation and annual one-day maximum precipitation. In the total annual precipitation, a wetting of northern Europe and a drying of southern Europe, in line with theoretical expectations, can be clearly seen. Moving to extreme values, the annual one-day maximum precipitation pattern looks much more spatially incoherent. The trends at a local level are less reliable because the historical time series is too short to show a clear signal beyond internal variability at every location, even though we have good reason to believe trends exist.
Figure 1: Trend in ERA5-Land total annual precipitation 1950-2021 (top panel) and trend in ERA5-Land one-day maximum precipitation 1950-2021 (bottom panel).
The assumption of stationarity in the historical observations for many perils is untenable. It is well-known that the shortness of the historical observational time series, the rapidity at which the world is warming, and the stochastic nature of extreme events means that a strong, clear, statistically significant signal at a location level would take a very long time to confirm. Insurance professionals are accustomed to making best estimates of the risk under conditions of uncertainty and there are good reasons to believe some perils are being impacted in such a way to make current catastrophe models incomplete.
A useful framework to evaluate when to be concerned about the impact of climate change on risk modelling is to examine the theory, observations and projections of a peril’s behaviour under climate change. First, consider if there is a good physical theory for how the peril will be impacted by climate change. Increased temperature in the absence of increased precipitation will increase the wildfire risk, all else being equal. Second, consider if there are any observations to support that change, keeping in mind the difficulty already established with historical trends. Attribution studies, where scientists examine the relative likelihood of real extreme events in pre-industrial and current climate models, are helpful here. Finally, consider what the climate models are projecting. The most obvious perils – although not the only ones – to highlight under this framework are flooding, both inland and coastal, and wildfire in Mediterranean-type climate regions.
Climate models
Over the longer term, the uncertainty in both future emissions and the climate models' representation of changes in extreme weather due to changes in radiative forcing and feedbacks in the earth system is – to put it mildly – very large. However, there is still value in producing such numbers for evaluating potential future scenarios: to motivate governments and other stakeholders with the potential costs of climate change, or to test the benefit of various mitigation and adaptation measures.
One of the first questions confronting an analyst with regard to projecting future climate change impacts on insurance losses is which emissions scenario to use. In support of the Intergovernmental Panel on Climate Change's 5th Assessment Report, projections of radiative forcing due to greenhouse gases were created – called representative concentration pathways (RCPs) – and these are commonly used by the community. The RCPs were not linked to energy systems modelling or to assumptions about economic transformation or policy outcomes, which was seen as a deficiency. These points were addressed for the 6th Assessment Report, and the most current projections are called the Shared Socioeconomic Pathways (SSPs). There is a sub-set of RCPs and SSPs which share the same radiative forcing projections; however, the responses of the newest Coupled Model Intercomparison Project Phase 6 (CMIP6) climate models to these forcings are different from the responses of the previous Phase 5 (CMIP5) models, with some of the CMIP6 models running hotter. Additionally, the Network for Greening the Financial System (NGFS), a group of central banks and supervisors, has created its own future greenhouse gas emissions scenarios for use by financial system entities. Several insurance regulators have chosen to align with the NGFS scenarios.
What is not often appreciated is that over the next decade and a half there is very little difference between emissions scenarios, and furthermore the spread of climate model responses to an individual emissions scenario – even in a variable as robustly modelled as temperature – is as large as, or larger than, the differences between the median model responses to the scenarios until near the end of the century, as shown in Figure 2. The phenomenon of model response spread around a single emissions scenario being larger than the mean response across scenarios has also been noted for metrics more relevant to the insurance industry, such as tropical cyclone frequency and severity (Jewson, 2021). At Guy Carpenter, we routinely see requests for climate change impact modelling for multiple emissions scenarios at five-year intervals from the current time until the end of the century. Such requests create false precision in the modelling, particularly when internal variability in the climate system is considered.
Figure 2: Spread of CMIP6 climate models’ (from 5th to 95th percentile) global mean temperature anomaly (relative to 1850-1900) for several commonly used SSP scenarios. The median of the spread is also shown in black. Created using IPCC 2021 data citable as: Fyfe et al. 2021
A second question confronting the analyst after deciding upon a scenario is: which model? Providers of climate change-conditioned catastrophe models and associated products typically use either one climate model, for reasons such as user-friendliness or open-source licensing, or a multi-model mean. Using a multi-model mean has several key advantages: firstly, it smooths out the internal weather-related variability of the climate models to hopefully represent the true climate signal; secondly, it dampens the impact of outliers where we may not trust individual model responses that are extreme compared to others. Often there is no physical basis to reject an extreme or outlier response, but it seems suspicious when a single member of a climate model ensemble is in strong disagreement with the others.
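The trade-off between the multi-model mean and the spread can be made concrete with a few lines of arithmetic. The numbers below are invented for illustration; the point is that the ensemble mean hides an upper tail that a PML-minded analyst would want to keep in view:

```python
import numpy as np

# Invented end-of-century warming anomalies (degrees C) from a hypothetical
# 10-member ensemble run under a single emissions scenario.
ensemble = np.array([2.1, 2.4, 2.5, 2.6, 2.7, 2.8, 3.0, 3.1, 3.4, 4.5])

mean = ensemble.mean()
p5, p95 = np.percentile(ensemble, [5, 95])

# The multi-model mean smooths internal variability and dampens outliers...
print(f"ensemble mean: {mean:.2f} C")
print(f"5th-95th percentile range: {p5:.2f} to {p95:.2f} C")

# ...but the hottest member sits well above the mean, and discarding it
# quietly removes the PML-style tail of the distribution.
print(f"hottest member exceeds the mean by {ensemble.max() - mean:.2f} C")
```

Reporting the percentile range alongside the mean, rather than the mean alone, is a cheap way to keep the tail visible.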
We should acknowledge that climate models have a difficult time representing the extreme hazards in which the insurance industry is interested. They do not produce strong enough tropical cyclones, individual hail storms, or any inundation on the ground from a flood, and therefore we use statistical techniques and proxy variables to estimate the potential impact; as a consequence, we could be significantly underestimating the risk. As an industry accustomed to dealing in Probable Maximum Losses (PMLs), we should not be neglecting potentially pessimistic climate model outputs in favour of the model mean. Uncertainty ranges and worst-case scenarios should be considered. Overconfidence in models has a long (and sometimes disastrous) history in the financial system, and we should be wary of repeating that mistake.
Building resilience
A common complaint when climate change physical risk modelling is performed is that it does not include mitigation and adaptation measures. It is hard to imagine that changes in the built environment will not occur, either through policy or private action, to make property and infrastructure resilient to natural disasters made worse through climate change. Our modelling would be better for including such changes; unfortunately, there are some challenges and considerations to take into account before this can be achieved.
For example, in the United Kingdom (where this article is being written), several cities still rely on 19th-century Victorian drains for their urban sewers. While there will be locations where it is easier to make interventions to improve this system (and therefore improve the resilience of the city against flooding), there will also be locations where it is harder. Good examples of putting this theory into practice can be found elsewhere, such as in the city of Copenhagen, where parts of the city have been made to withstand the outcome of a 1,000-year rainfall event. Such action was instigated following the devastating cloudburst floods of 2011 and 2014 (Ministry of Environment Denmark, 2022). The appetite to fund the kind of large-scale work needed to address the climate risk is, however, unknown, and we should not overestimate the ease or potential pace of improvements.
Large-scale infrastructure projects, such as tidal barrages and flood walls, are an area where we can expect to see some interventions; but what about resilience measures at the property level? For homeowners or businesses there needs to be a financial incentive to deploy such measures, trusted standards of manufacturing quality so they work as expected, and a skilled workforce to install the protections. For the modeller to incorporate such measures into a loss analysis, they need to know where the measures are, either through a database or by self-declaration, the former being preferable. It must also be known how the measures will impact the loss, and a mechanism is needed within the cat model to apply those loss changes. This is no small feat.
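As a sketch of the mechanism just described – with entirely hypothetical reduction factors, since trusted standards and databases of installed measures largely do not yet exist – property-level resilience could enter the vulnerability step as a multiplicative loss modifier:

```python
# Hypothetical loss-reduction factors for property-level resilience measures
# (assumed values -- real factors would need engineering studies and trusted
# manufacturing standards behind them).
RESILIENCE_FACTOR = {
    "flood_barrier": 0.70,     # i.e. a 30% reduction in flood loss
    "raised_electrics": 0.90,
    None: 1.00,                # no measure recorded
}

locations = [
    {"id": "A", "ground_up_loss": 120.0, "measure": "flood_barrier"},
    {"id": "B", "ground_up_loss": 80.0, "measure": None},
]

# Vulnerability step: scale each location's modelled loss by the factor for
# whichever measure (if any) is recorded against it.
for loc in locations:
    loc["adjusted_loss"] = loc["ground_up_loss"] * RESILIENCE_FACTOR[loc["measure"]]
    print(loc["id"], round(loc["adjusted_loss"], 1))
```

The modelling difficulty is not the multiplication; it is populating the factors and the per-location measure data with anything defensible.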
For all the challenges, a study on this topic would be hugely beneficial, but it should be based on real, reasonable assumptions about what mitigation and adaptation responses can be expected from government and private actors. A starting place could be examining the cost-benefit analyses that governments have historically used when deciding to take action to reduce risk. Where are increased defences or other measures practical? What was the take-up of property-level resilience measures in areas impacted by floods or fire in the past? It is clear that increasing resilience will decrease losses, but what is needed is better estimates of what is feasible and likely.
A recent paper in Nature Climate Change (Fiedler et al., 2021) highlighted the risk of unintended consequences and potential misuse of climate change model output in the financial system as pressure to quantify the risks intensifies. They warn that the output is being productised and used in ways that are beyond the capabilities of the models, potentially creating false assurance that the implications of climate change are well understood. A proliferation of data providers, both commercial and open-source, have appeared in recent years to help, particularly with regulatory and disclosure demands; however, the quality of these products varies enormously and should not be used blindly for any business decision.
Understanding is key
Catastrophe modellers are well-schooled in the limitations of their models. Every devastating catastrophic event seems to bring a surprise – or at least new lessons – even as our risk assessment tools have evolved in sophistication and skill over the last 20 years and more. It is questionable how well the limitations of climate model-derived analytics are understood by the larger financial system or by governments as catastrophe models and associated products begin to be employed in new ways. It goes without saying that great care is needed when interpreting any climate change model results.
It is of fundamental importance that the catastrophe risk community show leadership in developing best practices on the modelling and interpretation of physical risk from climate change over near-term and long-term time horizons. This article has highlighted some of the challenges, not least of which is the danger of false certainty and the neglect of potentially extremely adverse outcomes through the use of multi-model means. The catastrophe models we use today are probably deficient in assessing the true risk for some regions and perils, especially for those perils rapidly and strongly impacted by climate change. We should not be surprised to continue to see ‘model-misses’ from climate change-impacted perils.
Nevertheless, our industry is the best placed to interpret modelling results and make robust decisions in an environment of uncertainty. In my view, the most significant improvements in using catastrophe models for the quantification of extreme weather impacts from future climate change will come from building models specifically for this purpose. This means completely new and separate event sets with the capability to include more subtle changes – such as in storm size, translational speed, sea-level rise, or the spatial expansion of hazard into new areas – rather than simple rate changes to existing catalogues. Some demonstration of the spread of credible climate models’ response to increased greenhouse gases must be made, perhaps in the form of separate event sets for the low and high climate model projections, which has the added advantage of including the potential multiplier of internal variability.
Finally, a reasonable attempt should be made to quantify realistic responses to adaptation and mitigation. While creating such models will be demanding, and will require multidisciplinary effort, it will have the ultimate benefit, not just for the insurance sector but NGOs, governments, financial institutions and the wider economy.
Acknowledgements
The author would like to acknowledge her Guy Carpenter colleague Sam Phibbs for his assistance in creating Figure 1.
Guy Carpenter Disclaimer: The article provided is for general information only. The information contained herein is based on sources we believe reliable, but we do not guarantee its accuracy, and we make no representations or warranties, express or implied. The information is not intended to be taken as advice with respect to any individual situation and cannot be relied upon as such. Readers are cautioned not to place undue reliance on any historical, current, or forward-looking statements. We do not undertake any obligation to update or revise publicly any historical, current, or forward-looking statements, whether as a result of new information, research, future events, or otherwise.
References
Copernicus Climate Change Service (C3S) (2017): ERA5: Fifth generation of ECMWF atmospheric reanalyses of the global climate. Copernicus Climate Change Service Climate Data Store (CDS), Date of Access: 13/8/22. https://cds.climate.copernicus.eu/cdsapp#!/home
Fiedler, T., Pitman, A. J., Mackenzie, K., Wood, N., Jakob, C., & Perkins-Kirkpatrick, S. E. (2021). Business risk and the emergence of climate analytics. Nature Climate Change, 11(2), 87-94.
Fyfe, J., Fox-Kemper, B., Kopp, R., & Garner, G. (2021). Summary for Policymakers of the Working Group I Contribution to the IPCC Sixth Assessment Report – data for Figure SPM.8 (v20210809). NERC EDS Centre for Environmental Data Analysis. Accessed: 25/04/2023. doi:10.5285/98af2184e13e4b91893ab72f301790db
IPCC, 2021: Summary for Policymakers. In: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte, V., P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, pp. 3−32, doi:10.1017/9781009157896.001.
Jewson, S. (2021). Conversion of the Knutson et al. tropical cyclone climate change projections to risk model baselines. Journal of Applied Meteorology and Climatology, 60(11), 1517-1530.
Knutson, T., Camargo, S. J., Chan, J. C., Emanuel, K., Ho, C. H., Kossin, J., … & Wu, L. (2020). Tropical cyclones and climate change assessment: Part II: Projected response to anthropogenic warming. Bulletin of the American Meteorological Society, 101(3), E303-E322.
Ministry of Environment Denmark. (2022). Denmark: Copenhagen can now cope with a 1,000-year rainfall. PreventionWeb. Accessed: 16 April 2023.
Swiss Re. (2022). Hurricane Ian drives natural catastrophe year-to-date insured losses to USD 115 billion, Swiss Re Institute estimates. Swiss Re. Accessed: 14 April 2023.
World Economic Forum. (2023). The Global Risks Report 2023. weforum.org. Accessed: 14 April 2023.
Climate Change Physical Risk in Catastrophe Modelling
Volume 01, article 01
June 17, 2023
Author(s): Jessica Turner
DOI: 10.63024/znc8-y74s