How do we tell the difference between natural climate variations—
both free and forced—and those that are caused by our
own activities?
One way to tell the difference is to make use of the fact that
the increase in greenhouse gases and sulfate aerosols dates back
only to the Industrial Revolution of the nineteenth century:
before that, the human influence was probably small. If we can
estimate how climate changed before this time, we will have
some idea of how the system varies naturally. Unfortunately,
detailed measurements of climate did not themselves begin in
earnest until the nineteenth century, but there are “proxies”
for certain climate variables such as temperature. These proxies
include the width and density of tree rings, the chemical composition
of ocean and lake plankton, and the abundance and
type of pollen.
Plotting the global mean temperature derived from actual
measurements and from proxies going back a thousand years
or more reveals that the recent upturn in global temperature is
truly unprecedented: the graph of temperature with time shows
a characteristic hockey-stick shape, with the business end of the
stick representing the upswing of the last 50 years or so. The
proxies are imperfect, however, and have large margins of error,
so any hockey-stick trends of the past may be masked, but the
recent upturn in global temperature still stands above even a
liberal estimate of such errors.
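To make that comparison concrete, here is a minimal sketch in Python. The numbers are invented, not real reconstructions; it simply shows the kind of check involved: asking whether a recent temperature anomaly exceeds even a generous allowance for proxy error.

```python
# Minimal sketch (invented numbers): does the recent upturn stand above
# even a generous allowance for proxy error?
import numpy as np

proxy_anomaly = np.array([-0.2, -0.1, -0.3, -0.2, -0.1])  # past anomalies, deg C (invented)
proxy_error = 0.3                                          # generous +/- margin, deg C (assumed)
recent_anomaly = 0.8                                       # recent instrumental value, deg C (invented)

exceeds = recent_anomaly > (proxy_anomaly + proxy_error).max()
print("recent warming exceeds the proxy error envelope:", exceeds)
```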
Another way to tell the difference is to simulate the climate of
the last hundred years or so using computer models. Computer
modeling of global climate is perhaps the most complex endeavor
ever undertaken by humankind. A typical climate model consists
of millions of lines of computer instructions designed to
simulate an enormous range of physical phenomena, including
the flow of the atmosphere and oceans; condensation and precipitation
of water inside clouds; the transport of heat, water,
and atmospheric constituents by turbulent convection currents;
the transfer of solar and terrestrial radiation through the atmosphere,
including its partial absorption and reflection by the
surface, clouds, and the atmosphere itself; and vast numbers of
other processes. There are by now a few dozen such models, but
they are not entirely independent of one another, often sharing
common pieces of computer code and common ancestors.
Although the equations representing the physical and chemical
processes in the climate system are well known, they cannot
be solved exactly. It is computationally impossible to keep track
of every molecule of air and ocean, so to make the task viable,
the two fluids must be divided up into manageable chunks. The
smaller and more numerous these chunks, the more accurate
the result, but with today’s computers the smallest we can make
these chunks in the atmosphere is around 50 miles in the horizontal
and a few hundred yards in the vertical. We model the
ocean using somewhat smaller chunks. The problem here is
that many important processes happen at much smaller scales.
For example, cumulus clouds in the atmosphere are critical for
transferring heat and water upward and downward, but they are
typically only a few miles across and so cannot be simulated by
the climate models. Instead, their effects must be represented
in terms of quantities such as wind speed, humidity, and air
temperature that are averaged over the whole computational
chunk in question. The representation of these important but
unresolved processes is an art form known by the awkward term
parameterization, and it involves numbers, or parameters, that
must be tuned to get the parameterizations to work in an optimal
way. Because of the need for such artifices, a typical climate
model has many tunable parameters that one might think of as
knobs on a large, highly complicated machine. This is one of
many reasons that such models provide only approximations
to reality. Changing the values of the parameters or the way the
various processes are parameterized can change not only the
climate simulated by the model, but also the sensitivity of the
model’s climate to, say, greenhouse gas increases.
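To make the ideas of chunks and tunable knobs concrete, here is a minimal sketch in Python. Everything in it is hypothetical and vastly simplified; it is not drawn from any real climate model. It shows a coarse grid of chunk-averaged quantities and a toy parameterization of unresolved convection governed by a single adjustable parameter.

```python
# Minimal sketch (hypothetical, not from any real climate model): a coarse grid
# of "chunks" and a crude sub-grid parameterization with one tunable knob.
import numpy as np

# Divide the atmosphere into chunks roughly 50 miles across; each chunk stores
# only averaged quantities such as temperature and humidity.
n_lat, n_lon = 90, 180                                     # ~2-degree grid, illustrative only

# A crude latitude dependence so the chunks are not all identical.
lat = np.linspace(-90.0, 90.0, n_lat).reshape(-1, 1)       # degrees
temperature = 265.0 + 30.0 * np.cos(np.radians(lat)) + np.zeros((1, n_lon))  # K, chunk-averaged
humidity = 0.5 + 0.3 * np.cos(np.radians(lat)) + np.zeros((1, n_lon))        # chunk-averaged

def convective_heating(temp, rh, efficiency=0.3):
    """Toy 'parameterization': estimate heating by unresolved cumulus clouds
    from chunk-averaged fields. `efficiency` is the kind of tunable knob the
    text describes; its value must be adjusted until the model behaves well."""
    # Pretend convection fires only where a chunk is warm and humid enough.
    active = (temp > 290.0) & (rh > 0.7)
    return efficiency * active * (rh - 0.7) * (temp - 290.0)

heating = convective_heating(temperature, humidity)        # nonzero only in the warm, moist chunks
```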
How, then, can we go about tuning the parameters of a climate
model so that it serves as a reasonable facsimile of reality?
Here important lessons can be learned from our experience
with those close cousins of climate models, weather-prediction
models. These are almost as complicated and must also parameterize
key physical processes, but because the atmosphere is
measured in many places and quite frequently, we can test the
model against reality several times per day and keep adjusting its
parameters (that is, tuning it) until it performs as well as it can.
In the process we come to understand the inherent accuracy of
the model. But in the case of climate models, there are precious
few tests. One obvious test is whether the model can replicate
the current climate, including key aspects of its variability, such
as weather systems and El Niño. It must also be able to simulate
the seasons in a reasonable way: summers must not be too hot
or winters too cold, for example.
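What such tuning amounts to can be sketched in a few lines of Python. The data and the toy model below are invented; the point is only the procedure: try many settings of a knob and keep the one that best matches the observations.

```python
# Minimal sketch (all names and data invented): tuning a single knob by
# repeatedly comparing a toy model against observations.
import numpy as np

observations = np.array([14.0, 14.1, 13.9, 14.3, 14.2])   # invented "observed" values

def toy_model(knob):
    # Stand-in for a full simulation; returns the same quantities the
    # observations measure, as a function of one tunable parameter.
    return 13.5 + knob * np.array([1.0, 1.2, 0.8, 1.6, 1.4])

# Try many settings of the knob and keep the one with the smallest
# root-mean-square mismatch with the observations.
candidates = np.linspace(0.0, 1.0, 101)
errors = [np.sqrt(np.mean((toy_model(k) - observations) ** 2)) for k in candidates]
best_knob = candidates[int(np.argmin(errors))]
print(f"best knob setting: {best_knob:.2f}, RMS error: {min(errors):.3f}")
```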
Beyond a few simple checks such as these, however, there
are not many ways to assess the models, and so projections of
future climates must be regarded as uncertain. The amount of
uncertainty in such projections can be estimated to some extent
by comparing forecasts made by many different models, given
their different parameterizations (and, very likely, different sets
of coding errors). We operate under the expectation that the
real climate will fall among the projections made with the various
models—that the truth, in other words, will lie somewhere
between the higher and lower estimates generated by the models.
It is not inconceivable, though, that the actual solution will
fall outside these limits.
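A minimal sketch, with invented numbers, of how the spread across models serves as a rough measure of that uncertainty:

```python
# Minimal sketch (invented numbers): the spread of projections across several
# differently built models as a rough quantification of uncertainty.
import numpy as np

# Hypothetical end-of-century warming projections (deg C) from different models.
projections = np.array([2.1, 2.8, 3.4, 1.9, 2.6, 3.1])

low, high = projections.min(), projections.max()
print(f"multi-model range: {low:.1f} to {high:.1f} deg C (mean {projections.mean():.1f})")
# The expectation described above is that reality falls inside this range --
# though nothing guarantees it.
```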
While it is easy to stand on the sidelines and take shots at
these models, they represent science’s best effort to project the
earth’s climate over the next century or so. At the same time, the
large range of possible outcomes is an objective quantification of
the uncertainty that remains in this enterprise. Still, those who
proclaim that the models are wrong or useless usually are taking
advantage of science’s imperfections to promote their own
prejudices. Uncertainty is an intrinsic feature of prediction, and
it works in both directions.
Figure 2 shows the results of two sets of computer simulations
of the global average surface temperature during the
twentieth century, using a particular climate model. In the first
set, denoted by the dotted line and lighter shade of gray, only
natural, time-varying forcings are applied. These consist of variable
solar output and “dimming” owing to aerosols produced
by known volcanic eruptions. The second set (dashed line and
darker shade of gray) incorporates human influence on sulfate
aerosols and greenhouse gases. Each set of simulations is run
four times beginning with slightly different initial states, and the
range of outcomes produced is denoted by the shading in the
figure. This range reflects the random fluctuations of the climate
produced by this model, while the bold curves show the average
of the four ensemble members. The observed global average surface
temperature is depicted by the black curve. The two sets of
simulations diverge during the 1970s and have no overlap at all
today. The observed global temperature also starts to fall outside
the envelope of the all-natural simulations in the 1970s.
This exercise has been repeated using many different climate
models, with the same qualitative result: one cannot accurately
simulate the evolution of the climate over the last 30 years
without accounting for the human input of sulfate aerosols and
greenhouse gases. This is one (but by no means the only) important
reason that almost all climate scientists today believe that
man’s influence on climate has emerged from the background
noise of natural variability. But the main reason remains the
elementary physics that Arrhenius used to predict the global
response to increasing greenhouse gases, long before the computer
age.
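For the curious, the arithmetic behind that elementary physics can be sketched with the familiar, roughly logarithmic relation between carbon dioxide concentration and warming. The numbers below are illustrative assumptions, not Arrhenius's own figures, and the sensitivity per doubling remains uncertain.

```python
# Minimal sketch (illustrative numbers, not Arrhenius's): the roughly logarithmic
# dependence of equilibrium warming on CO2 concentration.
import math

sensitivity = 3.0          # deg C per doubling of CO2 (an assumed, illustrative value)
co2_preindustrial = 280.0  # parts per million
co2_now = 420.0            # parts per million, approximate

warming = sensitivity * math.log2(co2_now / co2_preindustrial)
print(f"equilibrium warming for this CO2 increase: {warming:.1f} deg C")
```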