Today, there is much debate about catastrophe and climate risk modeling within the re/insurance industry, particularly in California.
Some years ago, Roger Pielke Jr and I wrote a paper on the truthiness of hurricane catastrophe models. Truthiness, said the brilliant Stephen Colbert, is a felt truth; it comes from the gut, less so from facts.
Roger and I argued that hurricane risk is underdetermined. The data record supports many theories about hurricane behavior, past, present, and future. How estimates of hurricane risk are calculated reflects worldviews, organizational value goals, and political considerations. This is necessarily so because science alone does not constrain the decision space to one option.
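A toy illustration of the underdetermination point, using made-up landfall counts rather than the actual record: two incompatible theories of hurricane frequency can fit a century of data about equally well while implying different forward-looking risk.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# ~120 years of annual hurricane landfall counts (made-up data).
counts = rng.poisson(0.6, size=120)

# Theory 1: a single constant long-run landfall rate.
rate_constant = counts.mean()
ll_constant = stats.poisson.logpmf(counts, rate_constant).sum()

# Theory 2: active/quiet multidecadal regimes with different rates.
active, quiet = counts[:60], counts[60:]
ll_cycle = (stats.poisson.logpmf(active, active.mean()).sum()
            + stats.poisson.logpmf(quiet, quiet.mean()).sum())

# The two theories fit the short record about equally well...
print(f"log-likelihood, constant rate:  {ll_constant:.1f}")
print(f"log-likelihood, cyclical rates: {ll_cycle:.1f}")

# ...but imply different risk going forward if an "active" phase is
# assumed to continue. The data alone cannot settle the choice.
print(f"annual rate, constant theory:   {rate_constant:.2f}")
print(f"annual rate, active phase only: {active.mean():.2f}")
```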
Our abstract:
In recent years, US policy makers have faced persistent calls for the price of flood and hurricane insurance cover to reflect the true or real risk. The appeal to a true or real measure of risk is rooted in two assumptions. First, scientific research can provide an accurate measure of risk. Second, this information can and should dictate decision-making about the cost of insurance. As a result, contemporary disputes over the cost of catastrophe insurance coverage, hurricane risk being a prime example, become technical battles over estimating risk. Using examples from the Florida hurricane rate-making decision context, we provide a quantitative investigation of the integrity of these two assumptions. We argue that catastrophe models are politically stylized views of the intractable scientific problem of precise characterization of hurricane risk. Faced with many conflicting scientific theories, model theorists use choice and preference for outcomes to develop a model. Models therefore come to include political positions on relevant knowledge and the risk that society ought to manage. Earnest consideration of model capabilities and inherent uncertainties may help evolve public debate from one focused on “true” or “real” measures of risk, of which there are many, toward one of improved understanding and management of insurance regimes.
Statistician George Box is frequently cited for his observation that all models are wrong but some may be useful. Less often discussed is a further argument from his 1976 essay on the subject,
Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad.
Models may be importantly wrong in various ways, including in the choice of inaccurate or implausible assumptions. Such choices can have meaningful implications for how problems are understood in real-world policymaking.
Box called this “Mathematistry,” a phenomenon “characterized by development of theory for theory's sake, which since it seldom touches down with practice, has a tendency to redefine the problem rather than solve it.” He continued:
Furthermore, there is unhappy evidence that mathematistry is not harmless. In such areas as sociology, psychology, education, and even, I sadly say, engineering, investigators who are not themselves statisticians sometimes take mathematistry seriously. Overawed by what they do not understand, they mistakenly distrust their own common sense and adopt inappropriate procedures devised by mathematicians with no scientific experience.
Fast forward 50 years, and the distinction between scientist, mathematician, and financial quant is no longer so easy to make. Theory for politics' sake is rampant, and we live in a world of impenetrable technical spectacle built on mathematistry.
Others are now finding truthiness across the risk analytics industry.
CarbonPlan compared two private vendor climate risk models, finding that high-level model agreement masks large disagreement at the asset level, and that the industry positions itself around risk estimates that make sense for itself but do not necessarily make sense for the public.
The result is an impenetrable front of technical mystique:
An insurance company, for example, might provide a climate risk assessment to a regulator, or a regulated company might include their risk in a financial disclosure. If those companies accessed multiple, differing climate risk assessments, like the two we examined here, they might choose to report the assessment most in their financial interest. They could select higher-risk estimates to justify premium rate hikes, for example, or select lower-risk estimates to assuage concerns from potential investors — all under the fiction that these numbers are absolute facts.
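CarbonPlan's masking point is easy to make concrete. Here is a minimal sketch, using simulated numbers rather than any vendor's actual data, of how two models can agree closely on a portfolio average while routinely disagreeing property by property:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-property annual loss estimates (% of value) from two
# vendor models covering the same 1,000 properties. Both see the same
# underlying signal, plus their own independent "special sauce."
n = 1000
signal = rng.gamma(shape=2.0, scale=0.5, size=n)
model_a = (signal + rng.normal(0, 0.8, size=n)).clip(0)
model_b = (signal + rng.normal(0, 0.8, size=n)).clip(0)

# Portfolio-level view: the two models look nearly interchangeable.
print(f"portfolio mean, model A: {model_a.mean():.2f}")
print(f"portfolio mean, model B: {model_b.mean():.2f}")

# Asset-level view: individual estimates routinely differ by 2x or more,
# leaving ample room to pick whichever number suits one's interest.
ratio = (model_a + 0.01) / (model_b + 0.01)
print(f"properties where estimates differ by more than 2x: "
      f"{np.mean((ratio > 2) | (ratio < 0.5)):.0%}")
```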
Elsewhere, researchers with their own flood model business take aim at the influential First Street Foundation/First Street Tech model. They find that, at the county scale, there is only a 25% chance that their model and the FSF model agree. They argue that,
these differences point to limited capacity of FSF data to confidently assess which municipalities, social groups, and individual properties are at risk of flooding within urban areas. These results caution that national-scale model data at present may misinform urban flood risk strategies and lead to maladaptation, underscoring the importance of refined and validated urban models.
These research activities garnered a flashy story in Bloomberg Green, and Bloomberg conducted its own investigation of Los Angeles. The cross-referencing among these groups suggests a deeper political effort to cast a shadow on what the White House PCAST regarded as a climate risk analytics industry providing information “of questionable quality.”
Interestingly, members of the writing team behind this finding work for or with the climate risk analytics industry and insurers. The report aims to embed the government deeper into climate change and extreme weather risk analytics.
Roger and I explain how financial risk models are ripe for this realm of political and strategic gaming:
Model output is a type of quantitative hypothesis resulting from a select compilation of scientific theories about how the world works, and it is intended to function as general “as-if” information for “what-if” planning. In turn, decision makers may use the information to consider blunt impacts of loss on society and the economy. But this also means the models lack the accuracy and detail needed to advise precisely on day-to-day business decisions such as rates and capital requirements. At this fine decision-making scale, model output more closely reflects the noise of politics as usual and researchers scrambling to explain an uncertain world. Catastrophe models offer qualitative comfort through accuracy without precision.
In the 1990s, catastrophe models were rapidly and widely introduced into the industry. Yet they have always been regarded as wrong, or at least substantially uncertain, but useful for organizing information and business activity; and in any case, they are the tools available.
Over time, model assumptions started to change to accommodate different theories about climate variability and industry interests.
Karen Clark, the original catastrophe modeler, was once known for her quip that the way the industry was applying catastrophe modeling was like doing brain surgery with a chainsaw.
In a 2011 interview Clark explained,
Companies should not be lulled into a false sense of security by all the scientific jargon which sounds so impressive because in reality… the science underlying the models is highly uncertain and it consists of a lot of research and theories, but very few facts.
In the face of a collection of wrong models, each with its own special sauce, the common practice is to blend them, often as a straight average. On this practice, Clark explained: “The average of multiple wrong numbers still gives a wrong number.”
Of course, the point of averaging model output is not to get a right number; it is a technocratic means of equitable compromise. Given a collection of wrong models where every vendor is equally sophisticated, averaging prevents playing favorites. This works in public settings such as regulatory ratemaking. Other blending methods exist for when equal representation is not a priority, say, when developing a market position.
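In code, the difference between an equitable blend and a strategic one is trivial. A minimal sketch, with hypothetical vendor numbers:

```python
# Hypothetical 1-in-100-year loss estimates ($M) from three vendor models
# for the same book of business. The figures are illustrative only.
vendor_estimates = {"vendor_a": 120.0, "vendor_b": 85.0, "vendor_c": 210.0}

# Equitable compromise: a straight average gives every vendor equal say.
# The average of wrong numbers is still a wrong number; the point is
# procedural fairness, not accuracy.
straight_average = sum(vendor_estimates.values()) / len(vendor_estimates)

# Strategic blend: weights chosen to suit a market position, leaning on
# whichever model best supports the filing or deal at hand.
weights = {"vendor_a": 0.2, "vendor_b": 0.1, "vendor_c": 0.7}
strategic_blend = sum(w * vendor_estimates[v] for v, w in weights.items())

print(f"straight average: ${straight_average:.0f}M")  # $138M
print(f"strategic blend:  ${strategic_blend:.0f}M")   # $180M
```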
There is much reliving of this kerfuffle of about 15 years ago, as risk modelers have turned toward the explicit incorporation of climate change assumptions.
There is now a whole new suite of ways to be importantly wrong.
Later this month, the California Department of Insurance will hold a hearing on the introduction of catastrophe models into the ratemaking process for wildfire. The department has defined the problem as one of climate change (long story short: the problem is 30 years in the regulatory making).
However, insurance data and analytics behemoth Verisk reports that the rise in losses is a product of growing exposure and inflation. Verisk attributes 1% of losses to climate change.
Detecting a climate change signal in global losses from extreme events again raises the challenge of separating annual variability from more subtle long-term shifts due to increasing catastrophes. This is even more difficult, given the added complications of changes in exposure and inflation…
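The arithmetic of that separation is simple to sketch, even if doing it well is not. A minimal illustration with invented figures, not Verisk's data:

```python
# Strip inflation and exposure growth out of a nominal loss trend before
# attributing any residual to climate. All figures are invented.
nominal_losses = {1995: 10.0, 2005: 22.0, 2015: 48.0, 2024: 95.0}  # $B
cpi_index      = {1995: 1.00, 2005: 1.31, 2015: 1.56, 2024: 2.05}  # vs. 1995
exposure_index = {1995: 1.00, 2005: 1.60, 2015: 2.40, 2024: 3.60}  # built value vs. 1995

for year, loss in nominal_losses.items():
    adjusted = loss / (cpi_index[year] * exposure_index[year])
    print(f"{year}: nominal ${loss:5.1f}B -> normalized ${adjusted:4.1f}B (1995 terms)")

# A steep nominal trend largely flattens once exposure and inflation are
# removed; whatever small trend remains is the candidate climate signal.
```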
Verisk’s reporting does not support Commissioner Ricardo Lara’s assertion that California’s loss problem is a climate change fueled one. The misguided framing, however, is not surprising. Compare what Verisk says about climate change and losses with the common headlines on the same subject.
Climate change advocates, media, lawsuits, politicians, and financial interests align to make a terrible muddle out of the science on disasters, economic losses, and climate change. Meanwhile, the public has a real insurance problem.
Cat models are important for risk management. Climate change risk modeling may be important for infrastructure planning and risk mitigation decisions.
Model uncertainty provides for freedom of choice and the opportunity to include diverse perspectives in debates about insurance and risk mitigation. Developing process and outcome accountability mechanisms is key.
Catastrophe risk models are a highly technical means of communicating ideas about risk. They are a good starting point for negotiating assumptions about the future and the trade-offs present in different risk management regimes.