Kerry Emanuel discusses how a combination of CAT modelling, physical weather hazard models, and risk science is fundamental for ensuring better quantitative climate risk estimates
Almost all extant long-term estimates of natural hazard risk are based on historical statistics of the hazards. These estimates lie at the heart of catastrophe (CAT) modelling. There are, however, two rather large problems arising from reliance on historical records.
Firstly, the records are usually too short to make reliable estimates of the relatively rare events (~1% annual probability) that usually dominate long-term losses, requiring rather elaborate bootstrapping methods. Secondly, and more importantly, climate change has already rendered many long-term risk estimates obsolete. To take a concrete example, three independent studies (Emanuel 2017; Risser and Wehner 2017; van Oldenborgh et al. 2017) estimated that the risk of flooding in Harris County, Texas, of the magnitude encountered in Hurricane Harvey, had increased by a factor of between 1.5 and 6 in the half century or so leading up to 2017, when the event occurred. Historical records are thus becoming steadily less useful for estimating current risks of this kind.
Rare events like Harvey tend to dominate long-term loss statistics. To illustrate this basic point, I examined global tropical cyclone loss data from EM-DAT (EM-DAT 2020) normalised for each tropical cyclone event by the world domestic product (WDP) that year (World Bank 2020). I then constructed a per-event exceedance probability distribution of the base-10 logarithm of global tropical cyclone damage over the period 1900-2020; this is shown in Figure 1a.
Figure 1: Per-event exceedance probability of the base-10 logarithm of tropical cyclone damage, expressed as a fraction of world domestic product (a), and the loss density of the same (b). The latter plots the loss itself multiplied by the probability density; the area under the curve is proportional to the average per-event loss.
The normalised tropical cyclone loss probability density (not shown) is approximately log-normal (that is, normally distributed in log space), and the median per-event loss is 2.7×10⁻⁷ of WDP. Figure 1b shows the corresponding “loss density”, the product of the probability density and the associated loss. The mean event loss is just the integral of the loss density over the base-10 logarithm of the loss, in this case about 4×10⁻⁶ of WDP; this is more than a factor of 10 larger than the median loss from a single event, reflecting the disproportionate effect of intense events on long-term losses. The Tail Value at Risk (TVaR) for the most damaging 5% of events is 5.8×10⁻⁵ of WDP, another order of magnitude larger than the mean event loss. Note that the loss density peaks at a loss of about 7.6×10⁻⁵, corresponding to an event with a return period of about 65 years – about the maximum return period resolvable by this record. Thus it is likely that the expensive tail of the probability distribution is under-resolved and the true annual expected loss underestimated. We shall present some evidence for this shortly.
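The relationship between the median, the mean, and the tail value at risk of a heavy-tailed loss distribution is easy to illustrate with a short simulation. The sketch below draws synthetic per-event losses from a log-normal distribution; the median is taken from the text, but the spread is an illustrative assumption, not a fit to the EM-DAT data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical log-normal per-event losses (fractions of WDP).
median_loss = 2.7e-7            # median quoted in the text
sigma = np.log(10)              # spread of ~1 decade in log10 space (assumed)
losses = rng.lognormal(mean=np.log(median_loss), sigma=sigma, size=1_000_000)

median = np.median(losses)      # close to median_loss
mean = losses.mean()            # far larger: the tail dominates the mean

# Tail Value at Risk: mean loss of the most damaging 5% of events
var95 = np.quantile(losses, 0.95)
tvar95 = losses[losses >= var95].mean()

print(f"median {median:.1e}, mean {mean:.1e}, TVaR(5%) {tvar95:.1e}")
```

With this assumed spread, the sampled mean sits roughly an order of magnitude above the median, and the 5% TVaR roughly another order of magnitude above the mean, mirroring the pattern in the EM-DAT numbers quoted above.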
A much larger problem stems from the fact that climate has already changed appreciably. However inadequate the historical damage data are for estimating expected losses, they are grossly inadequate for estimating damage trends. These trends result from both demographic and climate changes, and it is very difficult to quantify the relative importance of these two drivers using historical weather and financial loss data alone.
Numerical weather prediction
To move away from exclusive reliance on historical data, weather hazard risk assessment should take advantage of recent advances in understanding and simulating weather. Numerical weather prediction (NWP) has made enormous strides since its inception in the 1950s, to the point where virtually every weather forecast made today beyond a range of a few hours is based mostly on NWP. Numerical weather prediction consists of numerically integrating, on a computer, the known physical laws governing the motion of fluids, the transfer of solar and terrestrial radiation, and the physics of clouds and precipitation. In 2018, global seven-day NWP forecasts were as skillful as three-day forecasts were in 1981. This is an astounding, if largely unsung, achievement of science.
Many weather hazards are routinely forecast by global NWP models. These hazards include heatwaves, cold snaps, and winter storms. While today’s global NWP models accurately forecast tropical cyclone tracks, they do not have sufficient horizontal resolution to accurately predict storm intensity. To deal with this, smaller scale computational grids that often move with the storm are embedded in the global model. Even smaller scale phenomena, such as severe thunderstorms and tornadoes, cannot be simulated directly by global models; however, the larger-scale conditions conducive to these can be – and are – forecast. These forecasts are used to focus attention on regions where such storms are most likely.
While weather has a fundamental predictability horizon of a few weeks, NWP can still be used to estimate the long-term statistics of weather hazards, though this requires much longer integrations. For long-term risk assessment to capture the important tail risk, we ought to be integrating for roughly 1,000 years. It is not currently feasible to run NWP models at full forecast resolution for this long, but we can hope to run models with grid spacings of roughly 50 km for 1,000 years. In principle, models of this spatial resolution ought to adequately capture large-scale phenomena such as winter storms, but they are at best marginal for tropical cyclones. To deal with these and other smaller scale phenomena, we resort to combinations of physical and statistical downscaling.
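A back-of-envelope Poisson argument shows why integrations of roughly 1,000 years are needed. An event with a 1% annual probability occurs on average 0.01T times in a record of length T years, and the relative sampling error of its estimated frequency scales as one over the square root of that expected count:

```python
import math

# Counting statistics for a 1%-annual-probability (100-year) event
p = 0.01  # annual exceedance probability

for years in (100, 1_000, 10_000):
    expected_events = p * years
    # Poisson counting error: relative uncertainty ~ 1/sqrt(expected count)
    rel_error = 1.0 / math.sqrt(expected_events)
    print(f"{years:>6}-yr record: ~{expected_events:.0f} events, "
          f"~{100 * rel_error:.0f}% frequency uncertainty")
```

A 100-year record gives roughly 100% uncertainty on the frequency of a 100-year event; a 1,000-year record brings this down to about 30%, which is why risk-grade integrations must be so much longer than the historical record.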
Physical and statistical downscaling
In pure statistical downscaling, one develops relationships between large-scale variables directly simulated by global models and local quantities of interest, such as wind and precipitation. These statistical relationships are developed by comparing global model simulations with actual observations and/or with the output of fine-scale models embedded in the global models. Contemporary machine learning algorithms may greatly improve this endeavour, but applying such methods outside the conditions under which they have been developed/trained can be problematic.
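In its simplest form, statistical downscaling is a regression from large-scale predictors onto a local quantity of interest. The sketch below fits an ordinary least squares relation on synthetic data; the predictor names are placeholders for whatever large-scale fields a real application would use, and, as noted above, applying the fitted relation outside the range of its training data is precisely where such methods become problematic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data: large-scale predictors a global model resolves
# (e.g. lower-tropospheric wind, SST anomaly -- placeholders here) paired
# with a local quantity (e.g. station wind speed, m/s).
n = 500
X = rng.normal(size=(n, 2))
true_coefs = np.array([3.0, 1.5])
y = X @ true_coefs + 8.0 + rng.normal(scale=0.5, size=n)

# Fit by ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Apply to new large-scale conditions. This is only trustworthy inside
# the range of the training data; extrapolating to a changed climate is
# exactly where statistical downscaling can break down.
x_new = np.array([1.0, 0.5, -0.2])   # [intercept, predictor 1, predictor 2]
local_wind = x_new @ coefs
```

A real application would replace the synthetic arrays with paired reanalysis (or embedded fine-scale model) output and local observations, but the fitting and extrapolation caveats are the same.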
With physical downscaling, we embed much higher resolution regional models within the global model in key places; for example, hurricane belts. Tropical cyclone hazard models that combine some features of both statistical and deterministic techniques are becoming increasingly popular (e.g. Jing and Lin 2020; Lee et al. 2020). Outside the traditional CAT modelling industry, there has been particularly rapid deployment of physical modelling to estimate current and future natural hazard risks. For example, the First Street Foundation has estimated current and future flood risk for every property in the continental U.S., using mostly physical modelling (Bates et al. 2021).
One example of (mostly) physical downscaling is a technique my group developed for simulating many thousands or even hundreds of thousands of tropical cyclones driven by the output of global climate analyses or models. Broadly, this involves driving a very high resolution tropical cyclone model by the larger-scale winds, temperature, and humidity output from the global model or climate analysis (Emanuel 2006; Emanuel et al. 2008). The statistics of surface wind speeds and rainfall are in good agreement with historical data.
As an example of the power of physical downscaling, we generated nearly 100,000 tropical cyclone events affecting the eastern U.S., driven by output from eight global climate models for the climate of the late 20th Century and, using a middle-of-the-road greenhouse gas emissions scenario, for the late 21st Century. We then used these synthetic hurricane events to estimate damage to a large portfolio of insured property aggregated over 12,968 zip codes, assigning all the value in each zip code to the geographical centroid of that zip code. The fraction of total insured value destroyed in each zip code was estimated by applying the idealised wind damage function of Emanuel et al. (2012) to the peak wind experienced during each hurricane event at each zip code centroid.
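The damage step can be sketched as follows. The cubic, saturating functional form below is a common idealisation in the tropical cyclone damage literature; the threshold and half-damage wind speeds, and the portfolio values, are illustrative assumptions, not the calibrated parameters of Emanuel et al. (2012) or the actual portfolio.

```python
import numpy as np

def damage_fraction(v, v_thresh=25.0, v_half=77.0):
    """Idealised wind damage function: fraction of insured value destroyed
    as a function of peak wind speed v (m/s). Zero below v_thresh, half
    the value lost at v_half, saturating toward 1 at extreme winds.
    Parameters here are illustrative placeholders."""
    vn = np.maximum(v - v_thresh, 0.0) / (v_half - v_thresh)
    return vn**3 / (1.0 + vn**3)

# One synthetic event over a few zip-code centroids (illustrative values):
peak_winds = np.array([20.0, 40.0, 60.0, 80.0])   # peak wind at each centroid, m/s
insured_value = np.array([4e8, 1e9, 6e8, 2e8])    # value assigned to each centroid, USD

event_loss = np.sum(insured_value * damage_fraction(peak_winds))
```

Because the damage fraction rises as roughly the cube of the wind above threshold, modest increases in peak wind translate into disproportionately large increases in loss, which is one reason the intense tail of the event distribution dominates the totals.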
Figure 2 shows the annual exceedance probability, and loss density, as in Figure 1, but here the losses are annual and the abscissa is now damage to the aforementioned portfolio of insured property, expressed in U.S. dollars. The blue curves are the results of downscaling the period 1984-2014, while the red curves pertain to the climate from 2070-2100. The shading brackets the 5th and 95th percentiles among the eight climate models downscaled.
Figure 2: Annual exceedance probability (a) and loss density (b) of damage to a particular portfolio of insured property in the eastern U.S., calculated using many thousands of simulated tropical cyclone events from each of eight climate models for the climate of the late 20th Century (blue) and the late 21st Century (red), accounting for global warming. The bold curves show the multi-model means and the shading shows the 5th and 95th quantiles from among the eight models. The vertical black dashed line is the total value of all the properties in the portfolio.
The annual probability shifts toward greater losses as the climate warms, owing to some combination of increased intensity and increased frequency of events. This is accompanied by a very substantial increase in the uncertainty originating in the differences among the eight climate models used in the downscaling; note in particular a strong overlap with the historical exceedance probability. Figure 2b shows that most of the long-term loss arises from the high-intensity tails of the distributions and almost nothing is contributed by the storms that are most likely in any given year. The median annual loss (corresponding to a return period of two years) is only $12 in the historical period, but rises to $4.2 million in the warmer climate. The average annual loss (AAL) increases from $540 million in the historical period to about $4 billion at the end of this century – an increase of almost a factor of 10. The TVaR for the most damaging 5% of years increases from about $11 billion in the historical period to $65 billion by the end of the century. All of these increases are highly uncertain owing to disparate responses of climate models to increasing greenhouse gases.
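The quantities quoted above – the median annual loss, the AAL, and the TVaR – all derive from the annual exceedance curve, which can be built empirically from a set of simulated annual losses. The sketch below uses synthetic placeholder values, not the portfolio results of Figure 2; a return period is simply the reciprocal of the annual exceedance probability, so the median annual loss sits at a two-year return period by definition.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic annual losses (USD) standing in for the downscaled results
annual_losses = rng.lognormal(mean=np.log(5e8), sigma=1.5, size=10_000)

# Empirical annual exceedance probability: rank the years, largest first
sorted_losses = np.sort(annual_losses)[::-1]
exceed_prob = np.arange(1, len(sorted_losses) + 1) / len(sorted_losses)
return_period = 1.0 / exceed_prob     # years; median loss -> 2-year return period

# Average annual loss: for a heavy-tailed distribution this far exceeds
# the median annual loss, as in the figures discussed above
aal = annual_losses.mean()
```

Pairing `sorted_losses` with `return_period` gives the exceedance curve of Figure 2a; reading off the loss at a chosen return period, or averaging the worst 5% of years for the TVaR, follows directly from these arrays.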
While this exercise compares a recent historical period to risks at the end of the century, it also has implications for current levels of risk. The mean annual surface temperature of the globe has warmed about 1°C over the last 120 years, compared to the roughly 3°C warming projected by the end of this century. Thus, a substantial fraction of projected warming has already occurred. This is consistent with the findings of the three studies, cited at the outset of this essay, that the risk of extreme rainfall in Harris County, Texas, has already increased greatly.
The historical North Atlantic hurricane record (Knapp et al. 2010), and application of the same downscaling technique described here to three climate reanalyses spanning the whole of the 20th Century (Emanuel 2021), show increases in virtually all metrics of Atlantic hurricane activity over this period, interrupted by a hurricane drought in the 1970s and 80s that was probably caused by human sulphur emissions before clean air acts took effect (Rousseau-Rizzi and Emanuel 2022). Estimating current hurricane risk from historical data thus poses a dilemma: use the long record and miss its upward trend, or use a more recent but shorter piece of the record and risk missing the rare, high-end events that dominate mean annual loss. We advocate downscaling recent climate reanalyses as the best way to estimate current risk, and downscaling climate model projections, such as the one described here, for those few applications that require risk estimates decades into the future.
Risk and resilience
The important contributions that relatively rare events make to long-term risk, coupled with the high sensitivity of such events to climate change, warrant a rapid migration toward using physically based weather hazard models in CAT modelling. This can be accelerated by an appropriate coordinated effort by the CAT modelling and insurance industries, government, and academic institutions. In particular, universities and CAT modelling firms should create programmes to train students and employees to develop and/or use state-of-the-art physical models, supplemented by a rigorous grounding in risk science. The advent of far better quantitative risk estimates that account for climate change and uncertainty could provide a rational foundation for policies to mitigate and adapt to our changing climate.
References
Bates, P. D., and Coauthors, 2021: Combined modelling of US fluvial, pluvial and coastal flood hazard under current and future climates. Water Resources Research, e2020WR028673, https://doi.org/10.1029/2020WR028673.
Emanuel, K., 2006: Climate and tropical cyclone activity: A new model downscaling approach. Journal of Climate, 19, 4797–4802.
Emanuel, K., 2017: Assessing the present and future probability of Hurricane Harvey’s rainfall. Proceedings of the National Academy of Sciences of the United States of America, https://doi.org/10.1073/pnas.1716222114.
Emanuel, K., 2021: Atlantic tropical cyclones downscaled from climate reanalyses show increasing activity over past 150 years. Nature Communications, 12, 7027, https://doi.org/10.1038/s41467-021-27364-8.
Emanuel, K., R. Sundararajan, and J. Williams, 2008: Hurricanes and global warming: Results from downscaling IPCC AR4 simulations. Bulletin of the American Meteorological Society, 89, 347–367.
Emanuel, K., F. Fondriest, and J. Kossin, 2012: Potential Economic Value of Seasonal Hurricane Forecasts. Weather, Climate, and Society, 4, 110–117, https://doi.org/10.1175/wcas-d-11-00017.1.
EM-DAT, 2020: The Office of U.S. Foreign Disaster Assistance (OFDA)/Centre for Research on the Epidemiology of Disasters (CRED) International Disaster Database. http://www.emdat.be/.
Jing, R., and N. Lin, 2020: An environment-dependent probabilistic tropical cyclone model. Journal of Advances in Modeling Earth Systems, 12, e2019MS001975, https://doi.org/10.1029/2019MS001975.
Knapp, K. R., M. C. Kruk, D. H. Levinson, H. J. Diamond, and C. J. Neumann, 2010: The International Best Track Archive for Climate Stewardship (IBTrACS): Unifying tropical cyclone best track data. Bulletin of the American Meteorological Society, 91, 363–376.
Lee, C.-Y., S. J. Camargo, A. H. Sobel, and M. K. Tippett, 2020: Statistical–dynamical downscaling projections of tropical cyclone activity in a warming climate: two diverging genesis scenarios. Journal of Climate, 33, 4815–4834, https://doi.org/10.1175/jcli-d-19-0452.1.
van Oldenborgh, G. J., and Coauthors, 2017: Attribution of extreme rainfall from Hurricane Harvey, August 2017. Environmental Research Letters, 12, 124009.
Risser, M. D., and M. F. Wehner, 2017: Attributable Human-Induced Changes in the Likelihood and Magnitude of the Observed Extreme Precipitation during Hurricane Harvey. Geophysical Research Letters, 44, 12,457-12,464, https://doi.org/10.1002/2017GL075888.
Rousseau-Rizzi, R., and K. Emanuel, 2022: Natural and anthropogenic contributions to the hurricane drought of the 1970s–1980s. Nature Communications, 13, 5074, https://doi.org/10.1038/s41467-022-32779-y.
World Bank, 2020: World Bank GDP. https://data.worldbank.org/indicator/NY.GDP.MKTP.CD.
Physically Based Weather Hazard Modelling: Accounting for Climate Change
Volume 01, article 02
June 17, 2023
Author(s): Kerry Emanuel
DOI: 10.63024/nc7e-qd7t