## On the Use of the Multi-Model Mean

This is a reference post.  I’ll link to it in future model-data comparison posts so that I don’t have to burden those posts with boilerplate.  The following is from Chapter 1.4 of my recently published ebook Climate Models Fail.

I’ve also tacked on a few paragraphs from Chapter 1.3 at the end because they are referred to in this chapter.

# # #

For the purpose of analyzing the models’ performance, there are sound reasons for using the model mean instead of the oodles of ensemble members:

First, consider a question posed to a prominent climate scientist, Gavin Schmidt.  (Dr. Gavin Schmidt is a climatologist and climate modeler at the NASA Goddard Institute for Space Studies—GISS.  He is also one of the regular contributors at the website RealClimate.  As they say in their header, “RealClimate, Climate science from climate scientists.”)  A commenter there asked:

If a single simulation is not a good predictor of reality how can the average of many simulations, each of which is a poor predictor of reality, be a better predictor, or indeed claim to have any residual of reality?

Gavin Schmidt replied (see comment 46 and his answer of 30 Sep 2009 at 6:18 AM):

Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will be uncorrelated across different realisations, and when you average together many examples you get the forced component (i.e. the ensemble mean).

Gavin Schmidt used “noise” to describe the random internal variations (chaotic weather) created within the model.  He also used “realisation” in place of “ensemble member.” The “forced component”, which is very important to this discussion, represents how a model responds to the inputs used to drive it, such as manmade greenhouse gases.

To paraphrase Gavin Schmidt, the individual ensemble members contain random noise that is inherent in each model. That noise limits the value of each individual ensemble member.  The model mean, through averaging, minimizes model noise, revealing the “forced component”.  In other words, the model mean is the best-guess approximation of the modelers’ assumptions about how climate on Earth is supposed to respond to human-induced global warming.

That’s what we’re after: how the Earth is supposed to respond to human-induced global warming.
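Gavin Schmidt’s averaging argument can be illustrated with a toy numerical sketch.  This is my own construction, not output from any climate model: each “realisation” is the same made-up forced trend plus independent noise, and averaging the realisations recovers the forced signal more closely than any single one does.

```python
import math
import random

random.seed(0)

n_years = 100
# Hypothetical forced warming trend (deg C) -- an assumption for illustration.
forced = [0.02 * t for t in range(n_years)]

def realisation():
    # Forced signal plus independent "weather" noise, one value per year.
    return [f + random.gauss(0.0, 0.3) for f in forced]

members = [realisation() for _ in range(20)]
ensemble_mean = [sum(m[t] for m in members) / len(members)
                 for t in range(n_years)]

def rmse(series):
    # Root-mean-square error of a series against the forced signal.
    return math.sqrt(sum((s - f) ** 2 for s, f in zip(series, forced)) / n_years)

single_err = rmse(members[0])
mean_err = rmse(ensemble_mean)
print(f"single member RMSE vs forced signal: {single_err:.3f}")
print(f"ensemble mean RMSE vs forced signal: {mean_err:.3f}")
```

Because the noise is independent across members, the mean’s error shrinks roughly by a factor of the square root of the number of members, which is the sense in which averaging “reveals” the forced component.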

Second, consider a quote from an obsolete NCAR FAQ webpage discussed in Chapter 1.3 [the appropriate portion of Chapter 1.3 follows the summary of Chapter 1.4]:

Averaging over a multi-member ensemble of model climate runs gives a measure of the average model response to the forcings imposed on the model. Unless you are interested in a particular ensemble member where the initial conditions make a difference in your work, averaging of several ensemble members will give you best representation of a scenario.

Because we’re examining historic model simulations (hindcasts), the “scenario” is Earth’s past climate.  The model mean is presented because it gives the best representation of that climate, as measured by the average model response to the historic forcings (anthropogenic CO2, etc.) imposed on the models.

CHAPTER 1.4 SUMMARY

The outputs of individual ensemble members (single computer runs) contain noise that is inherent to the models, and those outputs depend on the initial conditions of the simulation.  That noise obscures the modelers’ assumptions.

The model mean exposes the modelers’ assumptions about how global temperatures, precipitation, sea ice, etc., are supposed to have responded to human-induced climate change.

# # #

A portion of Chapter 1.3 referred to above:

That statement, originally part of an NCAR Frequently Asked Questions webpage, was removed from their website.  The in-depth discussion that accompanied it was excellent, so I have no idea why it was removed.  The old web address was:

http://www.gisclimatechange.org/faqPage.do

For those who want to confirm the quoted material, that webpage can still be accessed through the internet archive known as the Wayback Machine.  Simply copy and paste the above address into the appropriate field at the Wayback Machine and hit “Enter”. Then select the February 2, 2012 capture.

Research interest: the long-term aftereffects of El Niño and La Nina events on global sea surface temperature and ocean heat content. Author of the ebook Who Turned on the Heat? and regular contributor at WattsUpWithThat.

### 118 Responses to On the Use of the Multi-Model Mean

1. cassidy421 says:

Thanks, Bob. “A prominent climate scientist, Gavin Schmidt” refers to his job description; his degrees are in math, not science.

2. M.H.Nederlof says:

Bob,
The model mean does not recognize the autocorrelation within each model run, I believe. Whether that makes any difference to your conclusions is doubtful, but to avoid criticism of the logic you might consider the following procedure:
1. Determine the MODAL scenario of model 1, model 2, etc. (The MODE is not necessarily equal to the mean; for instance, the distribution of the residuals is possibly not normal.) This is bound to be different from the set of means for the individual years, but it honours any Markov process that is built into the models.
2. Take the mean of the MODAL scenarios for each year. That does not include impossible model scenarios.

How do you find a MODAL scenario? Given a set of scenarios over n years, a single run of model 1 consists of n values of the variable X (say, temperature anomaly). Imagine an n-dimensional space. Each single run of model 1 is a single point in that space. Do this for all runs of model 1 and find the centre of gravity. Determine which of the scenarios is closest (Euclidean distance) to this centre of gravity for that model and work back to values per year by reading back from the axes. Then you have the MODAL scenario for model 1, and so on.

Dr M.H.Nederlof
The Hague,
Netherlands
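[The centroid-plus-nearest-run procedure described in the comment above can be sketched as follows.  The runs here are made-up illustrative data, not output from any actual model; the point is only the mechanics: treat each run as a point in n-dimensional space, compute the centre of gravity, and pick the actual run closest to it in Euclidean distance.]

```python
import math
import random

random.seed(1)

n_years = 10
n_runs = 5

# Hypothetical runs of one model: each run is a list of n_years anomaly values.
runs = [[0.1 * t + random.gauss(0.0, 0.2) for t in range(n_years)]
        for _ in range(n_runs)]

# Centre of gravity: the per-year mean across runs, one point in n-year space.
centroid = [sum(run[t] for run in runs) / n_runs for t in range(n_years)]

def euclidean(a, b):
    # Euclidean distance between two scenarios in n-dimensional space.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# The "modal scenario" is the actual run closest to the centroid, so unlike
# the per-year mean it is always an internally consistent model trajectory.
modal = min(runs, key=lambda run: euclidean(run, centroid))
print("modal scenario:", [round(v, 2) for v in modal])
```

[The design point is the one the commenter makes: the per-year mean may combine values no single run would ever produce, while the modal scenario is guaranteed to be one of the model’s own trajectories.]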

3. Efstathios says:

Very useful and interesting, thank you.