It's exhilarating to see the fruits of climate research achieve such prominence in the media,
political debate, and concerns of industrial and municipal stakeholders. As scientists, though,
it's incumbent upon us not to mislead the lay audience by blurring the line between methodological
investigation and end products ready for consumption.
I should begin by disclosing that as a former project scientist at the
National Center for Atmospheric Research, I
was tasked with thinking about how to combine data from different climate models into probabilistic
projections of regional climate change. This notwithstanding, I wholeheartedly agree with Gavin
that these kinds of probabilistic projections aren't appropriate for risk analysis and decision
making under uncertainty and won't be for a long time.
The ideal scenario in climate modeling is one in which all the models are equally good:
They each account for all the physical and chemical processes we deem necessary to describe Earth's
climate; they all perform satisfactorily down to the finest resolved scale (currently, we have more
confidence in the behavior of these models at the continental scale than at the local scale); and
we have enough observed data (from climate records, ice-core samples, volcanic eruptions, and other
natural phenomena) to prove that their simulations are consistent with what the real experiment
(i.e., climate on Earth) is showing us. At that point, we could take the models' projections at
face value, weigh the results of every model equally, and use their range to bracket our
uncertainty, at least under a given emissions scenario. I'd also be out of a job. Luckily for me,
we're far from that point.
As a statistician, I attempt to make sense of multiple models through a probabilistic treatment,
one that weights models differently according to their performance. Say I have data on average
precipitation for the last 30 years in the Southwest United States, as well as simulations from 20
different climate models of current and future precipitation in the same region, and I want to know
what the expected change in precipitation will be at the end of this century under a specific
emissions scenario. I can try to account for the fact that the different models have shown
different skill in simulating current precipitation. I can try to formalize the idea that model
consensus is generally more reliable than individual predictions. Ideally, I can factor in the
similar basic design of all the models and look at other experiments carried out with simpler
models or perturbed parameterizations to gauge the reliability of their agreement when compared with
alternative modeling choices.
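To give a flavor of the simplest version of such a weighting, here's a minimal sketch in Python.
Every number is invented, and the Gaussian skill term with its arbitrary 0.1 mm/day scale merely
stands in for real methods, which also fold in consensus and the models' interdependence:

```python
import numpy as np

rng = np.random.default_rng(0)
n_models = 20

# Hypothetical observed mean precipitation (mm/day) over the past 30 years;
# in practice this would come from gridded station or reanalysis records.
obs = 1.20

# Hypothetical model output: each model's simulated current-climate mean
# and its end-of-century projection under one emissions scenario.
current = rng.normal(1.2, 0.15, n_models)
future = current + rng.normal(-0.2, 0.1, n_models)

# Skill weighting: models that reproduce the observed record more closely
# get larger weights. The Gaussian penalty and its 0.1 mm/day scale are
# arbitrary choices made only for this toy example.
bias = current - obs
weights = np.exp(-0.5 * (bias / 0.1) ** 2)
weights /= weights.sum()

change = future - current
print("skill-weighted change:", np.sum(weights * change))
print("equal-weight change:  ", change.mean())
```

Even in this toy, the two answers diverge as soon as a few models fit the observations much
better than the rest.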
Although this work is challenging and, at present, only at the stage of methodological inquiry, there's
value in interpreting what multiple models produce by trying to assign them probabilities. The
alternative is that people will use the ensemble average as an estimate and the ensemble standard
deviation as a measure of uncertainty. These quantities are reassuring because they can be easily explained and
understood. But when I compute a mean as my estimate and a standard deviation as its uncertainty,
I'm assuming that each model is producing independent data, and I'm relying on the expectation that
their errors will cancel each other out.
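The danger is easy to demonstrate with a toy calculation (all numbers invented): give every model
the same structural bias, and the ensemble spread stays small while the ensemble mean stays wrong.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 0.0        # the (unknown) true change we are trying to estimate
shared_bias = 0.5  # an error common to all models, e.g., from a shared
                   # parameterization choice; averaging can never remove it
models = truth + shared_bias + rng.normal(0.0, 0.1, 20)

print("ensemble mean:", models.mean())       # near 0.5, not 0
print("ensemble std: ", models.std(ddof=1))  # near 0.1, far smaller than the true error
```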
More complicated is a probability distribution function, which characterizes the range of
possible outcomes, assigns relatively higher or lower probabilities to subintervals, and may
distribute the probability asymmetrically within that range. Such an analysis is also more informative,
but should I go as far as offering the results to the water managers in Southern California so they
can think about future needs and policies for water resource management?
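What such a distribution buys, concretely, is a set of quantiles rather than a single mean and
spread. The sketch below uses invented changes and weights, and `weighted_quantiles` is a toy
helper of my own, not a standard library routine:

```python
import numpy as np

def weighted_quantiles(values, weights, qs):
    """Quantiles of a discrete weighted distribution (toy helper)."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w) / np.sum(w)
    return np.interp(qs, cdf, v)

# Invented end-of-century precipitation changes (mm/day) from 20 models,
# with a Dirichlet draw standing in for skill-based weights.
rng = np.random.default_rng(2)
changes = rng.normal(-0.2, 0.1, 20)
weights = rng.dirichlet(np.ones(20))

# Unlike a mean plus or minus a standard deviation, these quantiles need
# not sit symmetrically around the median, so they can express skewed risk.
for q, v in zip((5, 50, 95), weighted_quantiles(changes, weights, (0.05, 0.50, 0.95))):
    print(f"{q}th percentile: {v:+.2f} mm/day")
```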
I wouldn't offer my probabilistic projections as a definitive estimate of the area's expected
temperature or precipitation; not in the same way the National Oceanic and Atmospheric
Administration could, until recently, provide the public with "climate normals" based on a large
sample of observed records of a supposedly stationary climate. It would be deceptive to present
these estimates as having the same reliability that's needed to make structural decisions about
designing a floodplain, bridge, or dam.
I think, though, that there's valuable information in a "conditional probability distribution
function" if that's the best representation of the uncertainty
given the data we have nowadays. The caveat is that the range of uncertainty is likely to
be larger than what we see in models, which typically use conservative parameterizations.
Additionally, we don't know which emissions scenario will occur over the next decades, further
widening the potential realm of possible climate outcomes.
Meanwhile, in the real world, I would echo Gavin: Many decisions can be made by looking at
qualitative agreement, general tendencies, and model consensus without the need for quantitative
description of the uncertainties. Temperatures are warming; heat waves will intensify; sea level is
rising; and arid regions will dry further. So planning for worst-case scenarios is only prudent.
Helping vulnerable populations access aid centers in the case of extreme heat events, dissuading
construction on coastlines, conserving water resources, and developing drought-resistant crops are
adaptation measures we should pursue regardless of the exact magnitude of the changes in store for us.