Yet Even More Nonsense from Grant Foster (Tamino) et al. on the Bias Adjustments in the New NOAA Pause-Buster Sea Surface Temperature Dataset

UPDATE: It was pointed out in a comment that the model-data comparison in the post was skewed. I was comparing modeled marine air temperature minus modeled sea surface temperature anomalies to observed night marine air temperature minus sea surface temperature anomalies. Close, but not quite the same. I’ve crossed out that section and the references to it and removed the graphs. Sorry. It was a last-minute addition that was a mistake. (Memo to self: Stop making last-minute additions.) Thanks, Phil.

The rest of the post is correct.

# # #

INTRODUCTION

The saga continues. For those new to this topic, see the backstory near the end of the post.

Grant Foster (a.k.a. Tamino and Hansen’s Bulldog) has written yet another post, The Bob, about my simple comparison of the new NOAA pause-buster sea surface temperature dataset and the UKMO HADNMAT2 marine air temperature dataset that was used for bias adjustments on that NOAA dataset. In it, he quotes a comment at his blog from Miriam O’Brien, a.k.a. Sou from HotWhopper. Miriam recycled a flawed argument that I addressed over a month ago.

In his post, after falsely claiming that I hadn’t looked for reasons for the difference between the night marine air temperature data and the updated NOAA sea surface temperature data during the hiatus, Grant Foster presented a model based on a multivariate regression analysis…in an attempt to explain that difference. Right from the get-go, though, you can see that Hansen’s Bulldog lost focus again. He also failed to list the time lags and scaling factors for the individual variables, so his results cannot be verified. We’re also interested in those scaling factors because they show whether the relative weightings of the individual components are proportioned properly for a temperature-related global dataset. To overcome that lack of information from Grant Foster, I used my own multivariate regression analysis to determine the factors. I think you’ll find the results interesting.

Last, before presenting his long-term graph of the difference between the HADNMAT2 and ERSST.v4 datasets, Grant Foster forgot to check which surface climate models say should be warming faster: the ocean surface or the marine air directly above it. That provides us with another way to show that NOAA overcooked the adjustments to their sea surface temperature data.

MORE DETAIL

Grant Foster began his recent post with a quote of a comment at his blog from Miriam O’Brien (Sou from HotWhopper), of all people. That quote begins:

Bob has it all wrong in his now umpteenth post about this. HadNMAT2 is used to correct a bias in ship sea surface temps only. For the period he’s looking at (in fact since the early 1980s), they only comprise 10% of the observations. The rest of the data is from buoys, and HadNMAT doesn’t apply to them. They are much more accurate than ship data anyway. So much so that if ship and buoy data are together, the buoy data is given six times the weighting of ship data. So the comparison Bob thinks he’s making is completely and utterly wrong. And not just because the trends is actually quite close. He is not comparing what he thinks he is comparing.

Contrary to Miriam’s misinformation, I know exactly what I’m comparing. And as we’ll discuss in a few moments, I also know exactly why I’m comparing them. We discussed it a month ago, but alarmists are notorious for short memories.

The revisions to the NOAA sea surface temperature dataset are discussed in detail in two papers: Huang et al. (2015) Extended Reconstructed Sea Surface Temperature version 4 (ERSST.v4), Part I. Upgrades and Intercomparisons, and its companion Liu et al. (2015) Part II. Parametric and Structural Uncertainty Estimations.

In the abstract of Part 1, Huang et al. specifically state that there are two bias adjustments (both of which impact the pause):

The monthly Extended Reconstructed Sea Surface Temperature (ERSST) dataset, available on global 2° × 2° grids, has been revised herein to version 4 (v4) from v3b. Major revisions include updated and substantially more complete input data from the International Comprehensive Ocean–Atmosphere Data Set (ICOADS) release 2.5; revised empirical orthogonal teleconnections (EOTs) and EOT acceptance criterion; updated sea surface temperature (SST) quality control procedures; revised SST anomaly (SSTA) evaluation methods; updated bias adjustments of ship SSTs using the Hadley Centre Nighttime Marine Air Temperature dataset version 2 (HadNMAT2); and buoy SST bias adjustment not previously made in v3b.

So Miriam O’Brien is correct that the HADNMAT2 data are only used for ship bias adjustments. However, she incorrectly concludes that my reasons are wrong. Suspecting that someone would attempt the argument she’s using, I addressed that subject more than a month ago in my open letter to Tom Karl:

Figure 1

Someone might want to try to claim that the higher warming rate of the NOAA ERSST.v4 data is caused by the growing number of buoy-based versus ship-based observations. That logic of course is flawed (1) because the HadNMAT2 data are not impacted by the buoy-ship bias, which is why NOAA used the HadNMAT2 data as a reference in the first place, and (2) because the two datasets have exactly the same warming rate for much of the period shown in Figure 1. That is, the trends of the two datasets are the same from July 1998 to December 2007, a period when buoys were being deployed and becoming the dominant in situ source of sea surface temperature data. See Figure 2.  

Figure 2

[End of copy from earlier post]

That text was included with my first presentation of the graph they’re complaining about.

Grant Foster returned to Miriam O’Brien’s claim in one of his closing paragraphs:

Of course the salient point is what was pointed out by Sou, that the comparison Bob thinks he’s making is completely and utterly wrong.

As noted above, the comparison I’m making is not wrong. It’s being done for very specific reasons.  Miriam O’Brien and Grant Foster might not like those reasons, but those reasons are sound.

ON TAMINO’S MODEL OF THE DIFFERENCE BETWEEN THE HADNMAT2 AND ERSST.v4 DATA

Grant Foster begins the discussion of his modeling efforts with (my boldface):

Here’s the data that has The Donald The Bob in a tizzy, where I’ve computed the difference between NMAT and ERSSTv4:

Tamino Graph

I’ve circled the earliest part, from 1998, where NMAT is higher than ERSSTv4. It’s one of the main reasons that The Bob found a lower trend for NMAT than for ERSSTv4 since 1998. The difference since 1998 shows an estimated trend of -0.0044 deg.C/yr, the same value The Bob found for the difference in their individual trend rates.

What The Bob didn’t bother to do is wonder, why might that be? Just because there are differences between NMAT and sea surface temperature, that doesn’t mean the people estimating SST have rigged the game; why, there might even be an actual, physical reason for it.

What’s so special about 1998? The Bob wants us to believe it’s because of that non-existent “hiatus”. But let’s not forget that 1998 was the year of the big el Niño. Which made me wonder, might that have affected the difference between NMAT and sea surface temperature? What about aerosols from volcanic eruptions? What about changes in solar radiation?

Contrary to Grant Foster’s claims, I not only wondered, I discussed that difference more than a month ago in my open letter to Tom Karl. In a continuation of my earlier discussion from that post:

In reality, the differences in the trends shown in Figure 1 are based on the responses to ENSO events. Notice in Figure 1 how the night marine air temperature (HadNMAT2) data have a greater response to the 1997/98 El Niño and as a result they drop more during the transition to the 1998-01 La Niña. We might expect that response from the HADNMAT2 data because they are not infilled, while the greater spatial coverage of the ERSST.v4 data would tend to suppress the data volatility in response to ENSO. We can see the additional volatility of the HadNMAT2 data throughout Figure 2. At the other end of the graph in Figure 1, note how the new NOAA ERSST.v4 sea surface temperature data have the greater response to the 2009/10 El Niño…or, even more likely, they have been adjusted upward unnecessarily. The additional response of the sea surface temperature data to the 2009/10 El Niño is odd, to say the least.

[End of copy from earlier post]

Back to Grant Foster’s discussion of his model. He continued:

To investigate, I took the difference between NMAT and ERSSTv4, and sought to discover how it might be related to el Niño, aerosols, and solar output. As I’ve done before, I used the multivariate el Niño index to quantify el Niño, aerosol optical depth for volcanic aerosols, and sunspot numbers as a proxy for solar output. I allowed for lagged response to each of those variables. I also allowed for an annual cycle, to account for possible annually cyclic differences between the two variables under consideration.

The available data extend from 1950 through 2010, but I started the regression in 1952 to ensure there was sufficient “prior” data for lagged variables. It turns out that all three variables affect the NMAT-ERSSTv4 difference. Here again is the difference, this time since 1952, compared to the resulting model:

Tamino Graph 2

It turns out that the model explains the NMAT-ERSSTv4 differences rather well, particularly the high value in 1998 as mostly due to the el Niño of that year.

Grant Foster didn’t supply the information necessary to support his claim that “It turns out that all three variables affect the NMAT-ERSSTv4 difference.” According to his model, they had an impact together, but he did not show the individual impacts of the three variables.

Contrary to Grant Foster’s claim, the extended Multivariate ENSO Index (MEI), the SIDC sunspot data and the GISS stratospheric aerosol optical depth data extend back in time to 1880. Then again, the farther back in time we go, the less reliable the data become.  The 1950s were a reasonable time to start the regression analysis.

Looking back at Grant Foster’s first graph, he subtracted the ERSST.v4 sea surface temperature data from the reference HADNMAT2 night marine air temperature data. However, it’s much easier to see the warm bias in NOAA’s pause-buster data, and the timing of when it kicks in, if we do the opposite and subtract the reference HADNMAT2 data from the ERSST.v4 data. See my Figure 3. That way we can see that the difference after 2007 had a greater impact on the warming rates than the difference before July 1998. Not surprisingly, Grant Foster was focused on the wrong end of the graph with his circle at 1998.

Figure 3

But for the rest of this discussion, we’ll return to the way Grant Foster presented the difference. Keep in mind, though, that a negative trend means NOAA’s sea surface temperature data are rising faster than the reference night marine air temperature data.

GRANT FOSTER LOST FOCUS AGAIN

While Grant Foster was right to use longer-term data (1952 to 2010) for his regression analysis, he only presented illustrations of the results for that period. But the topic of discussion is the period of 1998 to 2010, the slowdown in global warming. Looking at his model-data graph above, it’s difficult to see how well his model actually performs during the hiatus. He claims, though:

It turns out that the model explains the NMAT-ERSSTv4 differences rather well, particularly the high value in 1998 as mostly due to the el Niño of that year.

“Rather well” is relative. His model is based on a multiple regression analysis, which takes the data for the dependent variable (the HADNMAT2-ERSST.v4 difference) and determines the weightings of the independent variables (the ENSO index, the sunspot data and the volcanic aerosol data) that, in effect, provide the best fit to the dependent variable. (Grant Foster’s regression analysis also shifts the independent variables in time in that effort.) So we would expect the model to explain some of “the NMAT-ERSSTv4 differences rather well”. But we can also see that the model misses many other features of the data. Additionally, the regression analysis doesn’t care whether the relative weightings of the independent variables make sense on a physical basis. (More on that topic later.)
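
To illustrate the mechanics (this is not Grant Foster’s actual code, which he didn’t publish), here is a minimal sketch of that kind of lagged multiple regression in Python. The array names and the fixed lags are hypothetical placeholders:

```python
import numpy as np

def lagged_regression(diff, mei, aod, ssn, lags=(4, 6, 1)):
    """Fit diff(t) = b0 + b1*mei(t-4) + b2*aod(t-6) + b3*ssn(t-1) + error.

    diff          : monthly HADNMAT2-ERSST.v4 difference (dependent variable)
    mei, aod, ssn : monthly ENSO index, aerosol optical depth, sunspot number
    lags          : months each independent variable is shifted back in time
    """
    max_lag = max(lags)
    n = len(diff) - max_lag
    # Design matrix: a constant term plus each predictor shifted by its lag.
    columns = [np.ones(n)]
    for series, lag in zip((mei, aod, ssn), lags):
        columns.append(series[max_lag - lag : max_lag - lag + n])
    X = np.column_stack(columns)
    y = diff[max_lag:]
    # Ordinary least squares picks whatever coefficients fit best;
    # nothing constrains those weightings to be physically sensible.
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs, X @ coeffs  # scaling factors and the fitted "model"
```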

Grant Foster praised his model because it captured the 1997/98 El Niño. Of course the model shows an uptick in 1997/98; one of its components is an ENSO index.

But what about the rest of the hiatus? Why didn’t he focus on that? He made a brief mention of it in his post, and we’ll present that in a moment.

His model also makes an uptick for the 2009/10 El Niño, but the data, unexpectedly, move in the opposite direction. That indicates the NOAA sea surface temperature data warmed more than the HadNMAT2 data when, according to his model, they should have warmed less…as noted earlier. (See my discussions of Figures 1 and 2 in this post again.)

GRANT FOSTER DIDN’T PRESENT THE SCALING FACTORS OR TIME LAGS FOR THE INDEPENDENT VARIABLES: THE SUNSPOTS, THE ENSO INDEX AND THE VOLCANIC AEROSOL DATA

As mentioned in the opening of this post, if Grant Foster had provided the scaling factors for the independent variables, we could check his results and see whether the relative weightings of the ENSO index, the sunspot data and the volcanic aerosol data make sense on a physical basis. Also of concern are the time lags determined by his regression analysis. Unfortunately, as of this writing, Grant Foster has failed to provide any of that valuable information.

THE RESULTS OF ANOTHER MULTIPLE REGRESSION ANALYSIS

Because Grant Foster didn’t supply that information, I used the Analyse-It software to perform a separate multiple regression analysis of the HADNMAT2-ERSST.v4 difference, using the same independent variables: NOAA’s Multivariate ENSO Index (MEI), the SIDC sunspot data, and the GISS aerosol optical depth data. The MEI and SIDC data are available at the KNMI Climate Explorer, specifically on the Monthly Climate Indices webpage, and the GISS aerosol optical thickness data are available here.

Unlike Grant Foster’s software, the Analyse-It software does not determine the best time lags for the independent variables. So I used the average of the time lags (4 months for the ENSO index, 1 month for the sunspot data, and 6 months for the volcanic aerosols) listed in Table 1 of Foster and Rahmstorf (2011), of which Grant Foster was lead author. If Grant Foster provides us with the scaling coefficients and time lags his model found, I’ll be happy to redo this.

Using Excel, I created a model for the period of 1952 to 2010, shown in Figure 4. The scaling factors and time lags are listed on it. Before determining their difference, I used the base years of 1952 to 2010 for the HADNMAT2 and ERSST.v4 anomalies, to ensure that my choice of base years didn’t skew the results. You might get slightly different scaling coefficients using different base years, but the results should be similar.
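
As a minimal sketch of that base-period step, assuming the two datasets are monthly pandas Series with a date index (the names hadnmat2 and ersst4 are placeholders, not actual download code):

```python
import pandas as pd

def to_anomalies(series, base_start="1952-01", base_end="2010-12"):
    """Convert a monthly series to anomalies against a 1952-2010 climatology."""
    base = series.loc[base_start:base_end]
    # Mean for each calendar month (Jan..Dec) over the base years only.
    climatology = base.groupby(base.index.month).mean()
    # Subtract the matching calendar-month mean from every value.
    return series - series.index.month.map(climatology).values

# The dependent variable for the regression is then simply:
# diff = to_anomalies(hadnmat2) - to_anomalies(ersst4)
```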

Like Grant Foster’s model, my model would be expected to mimic parts of the data, because the regression analysis determined the weightings of the three variables (the ENSO index, the sunspot data and the volcanic aerosol data) that furnished the “best fit”. For example, because my model, like his, uses the Multivariate ENSO Index as one of its independent variables, it too creates an uptick at the 1997/98 El Niño. It also shows the unexpected divergence between model and data at the end, in response to the 2009/10 El Niño. The monthly variations in the HADNMAT2-ERSST.v4 difference in my graph are not the same as Grant Foster’s, but that could be caused by the use of different base years for anomalies. Grant Foster’s model also appears to have greater year-to-year variations, but since he didn’t bother to provide the information needed to duplicate his efforts, we’ll have to rely on my results.

Figure 4

But also like Grant Foster’s model, we can see that my model also misses many of the features of the data.

Keep in mind that everything shown before 1998 in Figure 4 has no real bearing on our discussion of the hiatus, which is the topic of this debate. I’ve presented Figure 4 to show that my results are similar to Grant Foster’s. Figure 5 includes only the results of the regression analysis we’re interested in…for the period of 1998 to 2010. The top graph presents the “raw” model and data, and in the bottom graph, the model and data have been smoothed with 12-month running-mean filters.
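
For reference, the 12-month running-mean filter is just a centered moving average. In pandas, for example (a sketch, with monthly_series as a placeholder pd.Series):

```python
# Centered 12-month running mean, as used for the bottom graph of Figure 5.
smoothed = monthly_series.rolling(window=12, center=True).mean()
```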

Figure 5

Based on the linear trend lines, it appears that a portion of the HADNMAT2-ERSST.v4 difference could be—repeat that, could be—caused by the impacts of ENSO, the solar cycle, and volcanic aerosols…assuming that the scaling coefficients of the ENSO index, the sunspot data and the volcanic aerosol data relative to one another are realistic.

And we can also see that the greater divergence between model and data occurs during the 2009/10 El Niño, not the decay of the 1997/98 El Niño. So, again, Grant Foster was looking at the wrong El Niño.

WHAT GRANT FOSTER HAS TO SAY ABOUT THE MODEL-DATA DIFFERENCE DURING THE HIATUS

Grant Foster went a step further and subtracted the model output from the data to determine the residuals, once again presenting the longer-term results, not the results for the hiatus.

Here are the residuals from the fit:

Tamino Graph 3

If we study only the residuals since 1998, by golly the estimated trend is still negative. But only by -0.0018 deg.C/yr (not -0.0044), a value which is not statistically significant. So much for The Bob’s “much lower.”

But, as noted above, Grant Foster failed to show something about his model: whether the scalings of the ENSO index, the sunspot data and the volcanic aerosol data are realistic. So we have to return to my results.

THE RELATIVE WEIGHTINGS OF THE INDEPENDENT VARIABLES

Before we present the results for the difference between the sea surface temperature and night marine air temperature data, let’s look at the results for a global surface temperature dataset so we can see what we should expect.

I used detrended monthly GISS Land-Ocean Temperature Index data (from 1952 to 2010) in the multiple regression analysis, along with the same three independent variables. The time lags were the same, with the exception of the aerosol optical depth data, for which Table 1 of Foster and Rahmstorf (2011) lists 7 months for the GISS data. Figure 6 presents the three independent variables multiplied by the scaling coefficients that were determined by the regression analysis.
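
In other words, Figure 6 plots each lagged independent variable multiplied by its fitted coefficient, putting the three contributions on a common temperature scale. A sketch of that step (mei, aod, ssn and the b_* coefficients are placeholders, with the coefficients coming from a regression like the one sketched earlier):

```python
import pandas as pd

# mei, aod, ssn: monthly pandas Series; b_mei, b_aod, b_ssn: scaling
# coefficients from the multiple regression described above.
contributions = pd.DataFrame({
    "ENSO":     b_mei * mei.shift(4),  # 4-month lag
    "Volcanic": b_aod * aod.shift(7),  # 7-month lag (GISS LOTI case)
    "Solar":    b_ssn * ssn.shift(1),  # 1-month lag
})
# The range of each scaled component shows its relative weighting, in deg C.
print(contributions.apply(lambda s: s.max() - s.min()))
```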

Figure 6

As expected, the two greatest sources of year-to-year fluctuations in global surface temperatures are ENSO (El Niño and La Niña) events and stratospheric aerosols from catastrophic explosive volcanic eruptions. (Note: The regression analysis cannot determine the long-term aftereffects of ENSO…the Trenberth Jumps…it only indicates, based on statistical analysis, the direct linear effects of ENSO on global surface temperatures. For a further discussion of how linear regression analyses miss those long-term warming effects of ENSO, see the post here.) As expected, according to the regression analysis, the eruption of El Chichon in 1982 had a slightly greater impact on surface temperatures than the 1982/83 El Niño. And as expected, according to the regression analysis, the effects of the decadal variations in the solar cycle are an order of magnitude smaller than the impacts of ENSO and strong volcanic eruptions.

We’ve seen discussions of the relative strengths of those variables for decades. Of all weather events, El Niño and La Niña events have the greatest impacts on annual variations in surface temperature. The only other naturally occurring factor that can be stronger is a catastrophic explosive volcanic eruption. On the other hand, the decadal variations in surface temperatures due to the solar cycle are tiny compared to ENSO and volcanoes.

That’s what we should expect!

But that’s not what was delivered by the regression analysis of the HADNMAT2-ERSST.v4 difference. See Figure 7. The relationship between our ENSO index and the volcanic aerosols appears relatively “normal”. BUT (and that’s a great big but) the decadal variations in the modeled impacts of the solar cycles are an order of magnitude greater than we would expect. According to the regression analysis, for example, the maximum of Solar Cycle 19 (which starts in the mid-1950s) is comparable in strength to the impacts of the El Niño events of 1982/83 and 1997/98.

Figure 7

Maybe that’s why Grant Foster didn’t supply the scaling coefficients or the time lags that would let us investigate his model. If his results are similar to mine, the trend of his model during the hiatus depends on unrealistically strong solar cycle impacts.

Note: You may be thinking that there might actually be a physical explanation for the monstrously excessive response of the HADNMAT2-ERSST.v4 difference to the solar cycle data. Keep in mind, though, that the response to volcanic aerosols is also solar related…inasmuch as the volcanic aerosols limit the amount of solar radiation reaching the surface of the oceans. You can argue all you want, but unless you can support those claims with data-based analyses or studies, all you’re providing is conjecture. We get enough model-based conjecture from the climate science community—we don’t need any more.

THE RELATIVE WEIGHTINGS OF THE INDEPENDENT VARIABLES DURING THE HIATUS

Let’s return to the what-we-would-expect and what-we-got format but this time zoom in on the independent variables for the hiatus period of 1998 to 2010.  That is, the data are the same as in Figures 6 and 7.  We’ve just shortened the timeframe to the years of the hiatus.

We’ll again start with the detrended GISS land-ocean surface temperature data. Figure 8 presents those scaled independent variables for the hiatus, 1998 to 2010. Looking at the linear trend lines, we would expect the impacts of ENSO to be greatest, followed by the sunspot data, due to the change from solar maximum to minimum in that timeframe. The volcanic aerosols are basically flat and had no impact.

Figure 8

Again, that’s what we would expect from a global temperature-related metric.

But that’s not what we got from the regression analysis of the HADNMAT2-ERSST.v4 difference. See Figure 9. During the hiatus, the regression analysis suggests that the change from solar maximum to minimum had the greatest impact on the trend, noticeably larger than the trend of the linear impacts of ENSO.

Figure 9

If Grant Foster’s model has the same physically unrealistic weighting of its solar component, then his residuals during the hiatus are skewed. All things considered, it appears as though the only way to create the trend Grant Foster found is with a model that grossly exaggerates the impacts of the solar cycle.

MORE MISINFORMATION FROM MIRIAM O’BRIEN (SOU AT HOTWHOPPER)

On the thread of Grant Foster’s post The Bob, which, as a reminder, is the subject of this post, Miriam O’Brien graces us once again with irrelevant information. See her July 22, 2015 at 12:52 pm comment here. It reads (my boldface):

Thanks, Tamino.

I removed some charts from the HW article before posting it, because it was getting a bit too long. However if anyone’s interested, the charts I took out were from a 2013 paper by Elizabeth Kent et al. Figure 15 had some charts that plotted the difference between sea and air temps (HadSST and HadNMAT2 and some other comparisons).

http://onlinelibrary.wiley.com/doi/10.1002/jgrd.50152/pdf

There’s no reason to expect night time air temperature would follow the same trend as the sea surface temperature exactly, though it’s fairly close. I also came across articles which discussed the diurnal variation – some places can have different trends in daytime vs night temps in sea surface temps. That is, other than the obvious where ships exhibit a maritime heat island effect during the day (which is why the night time marine air temps are used, not the day time ones).

More misdirection from Miriam. The topic of discussion is NOAA’s ERSST.v4 data, not the UKMO’s HadSST3 dataset, which differs greatly from it.

In fact, for the period of 1998 to 2010, the same basic disparity in warming rates (as HADNMAT2 versus ERSST.v4) exists between the UKMO HADSST3 and NOAA ERSST.v4 data. See Figure 10. Now recall that the UKMO’s HadSST3 sea surface temperature data are also adjusted for the ship-buoy bias. In other words, the UKMO’s sea surface temperature and night marine air temperature datasets show basically the same warming rate from 1998 to 2010, and those warming rates are both well below the warming rate of the overcooked NOAA ERSST.v4 data.

Figure 10

And while the HadNMAT2 data end in 2010, the HadSST3 data are updated to present times, May 2015. So we can see that the excessive warming rate of the ERSST.v4 data continues during the hiatus. See Figure 11.

Figure 11

My thanks to Miriam O’Brien for reminding me to illustrate that disparity between the two sea surface temperature datasets that have both been adjusted for ship-buoy bias.

Someone is bound to ask: who made the UKMO data the bellwether? NOAA did. NOAA used the HADNMAT2 data for bias adjustments back into the 1800s. See Huang et al. (2015) Extended Reconstructed Sea Surface Temperature version 4 (ERSST.v4), Part I. Upgrades and Intercomparisons.

Consider this: NOAA could easily have increased the warming rate of their reconstructed sea surface temperature data to match the warming rate exhibited by the UKMO’s sea surface and marine air temperature datasets for the period of 1998 to 2012. Skeptics would have complained, but NOAA would then have had two other datasets to point to. But NOAA didn’t do that. NOAA chose to overcook their adjustments…in an attempt to reduce the impacts of the slowdown in global sea surface warming this century.

(Side note to Miriam O’Brien: I was much entertained by the ad hom-filled opening two paragraphs of your recent post Biased Bob Tisdale is all at sea. And the rest of your post was laughable. Thank you for showing that you, like Grant Foster, can’t remain on topic, which, since you obviously can’t recall, is the hiatus. As a reminder, here’s the title of the Karl et al. (2015) paper that started this all: Possible artifacts of data biases in the recent global surface warming HIATUS. <– Get it?)

ABOUT THE STATISTICAL SIGNIFICANCE, OR LACK THEREOF, OF THE HADNMAT2-ERSST.V4 DIFFERENCE

This post, like the earlier posts on this topic, is sure to generate discussions about the statistical significance of the difference between NOAA’s ERSST.v4 data and the UKMO HADNMAT2 data…specifically, claims that the difference is not statistically significant.

Those discussions help to highlight one of the problems with the surface temperature datasets. Every six months, year, or two years, the suppliers of surface temperature data make minor (statistically insignificant) changes to the data. Each time, the supplier can claim the change isn’t statistically significant. But over a number of years, NOAA has done its best to eliminate the slowdown in global warming by making a series of compounding, statistically insignificant changes. If the foundation of the hypothesis of human-induced global warming were not so fragile, NOAA would not have to constantly tweak the data to show more warming…and more warming…and more warming.

Regardless of whether or not the difference between NOAA’s ERSST.v4 data and the UKMO HADNMAT2 data is statistically significant, it is an easy-to-show example of one of the compounding never-ending NOAA data tweaks, so I’ll continue to show it.

ANOTHER EXAMPLE OF NOAA OVERCOOKING THEIR NEW SEA SURFACE TEMPERATURE DATASET

[This is the model-data comparison section crossed out per the update at the top of the post.]

In this post and past posts, Grant Foster insisted on looking at data that extend back in time before the hiatus. The slight negative trend in the HADNMAT2-ERSST.v4 difference since 1952, Figure 12, indicates the new ERSST.v4 sea surface temperature data have a slightly higher warming rate than the reference HADNMAT2 night marine air temperature data over that period. In other words, according to NOAA, global sea surfaces warm faster than the marine air directly above them.

(Graph removed)

Figure 12

And that reminded me of the climate models used by the IPCC. It’s always good to remember what climate models say should, in theory, be taking place, because in this case they indicate the opposite should be happening. According to the groupthink (the consensus) of the climate modeling groups around the globe, known as the multi-model mean, the marine air should be warming faster than the ocean surface. See Figure 13.

(Graph removed)

Figure 13

How could NOAA have overlooked a basic fundamental like that? Or is NOAA suggesting the ocean-warming physics are wrong in climate models?

In addition to overcooking the sea surface temperature data during the hiatus, it appears that NOAA has overcooked the warming rate of their sea surface temperature data since 1952 as well, by a noticeable amount.

You may want to argue that the difference in trends (0.022 deg C/decade) is small. My responses: (1) it shows yet another unjustifiable tweak by NOAA, and (2) the HADNMAT2-ERSST.v4 difference contradicts the physics of the climate models.

And just in case you’re wondering, I also performed multiple regression analyses on the modeled marine air temperature minus sea surface temperature anomaly difference, using the CMIP5 multi-model mean and a couple of individual models from that archive. The solar cycle components were smaller than the ENSO and volcanic aerosol factors, as they should be. Even more curious, they all showed a solar cycle component with the sign opposite that of the one based on the HADNMAT2-ERSST.v4 difference, meaning the solar cycle in the models decreased, not increased, the temperature difference between the marine air and sea surface temperatures during the hiatus. I’ll present those results in a future post.

Thanks, Grant. As soon as I saw your HADNMAT2-ERSST.v4 difference graph, I knew there was another problem with the ERSST.v4 adjustments.

BACKSTORY – THE EXCHANGE

  1. Grant Foster didn’t like my descriptions of the new NOAA ERSST.v4-based global surface temperature products in my post Both NOAA and GISS Have Switched to NOAA’s Overcooked “Pause-Busting” Sea Surface Temperature Data for Their Global Temperature Products. So he complained about them in his post New GISS data (archived here). And Grant Foster didn’t like that I presented the revised UAH lower troposphere data in a positive light.
  2. I responded to Grant Foster’s complaints with the post Fundamental Differences between the NOAA and UAH Global Temperature Updates.
  3. Obviously, Grant Foster missed the fact that the focus of the discussion is the hiatus, because he replied with the post Fundamental Differences between Bob Tisdale and Reality (archived here), which also presented data leading up to the hiatus.
  4. I reminded Grant Foster that the topic of discussion was the hiatus in my response Tamino (Grant Foster) is Back at His Old Tricks…That Everyone (But His Followers) Can See Through.
  5. Grant Foster responded to that with The Bob, which is the subject of this post.

CLOSING

Over a month ago, we discussed one of Miriam O’Brien’s arguments, which Grant Foster quoted and found noteworthy, and we determined the logic behind it to be flawed.

If the relative weightings of the independent variables in Grant Foster’s model are as physically skewed as those found with the multiple regression analysis I performed, then his modeled trend during the hiatus is meaningless. And looking back at Figure 9, the only way to accomplish the trend found by Grant Foster is to grossly exaggerate the impacts of the solar cycle in the model.

One of Grant Foster’s closing paragraphs reads:

I expect The Bob will post about this again. I expect he’ll repeat himself again. After all, the trend in NMAT is lower than that in ERSSTv4 since 1998, which can’t possibly have anything to do with el Niño or atmospheric aerosols or solar variations because that’s the time of the non-existent “hiatus”.

As discussed in this post, I had already considered and illustrated how El Niño events had skewed the HADNMAT2-ERSST.v4 difference. So yes, I repeated myself, but those repetitions contradicted his claims and his quote from Miriam O’Brien.  But then Grant Foster introduced another topic…his model.  And as I showed in my Figure 5, Grant Foster was focused on the wrong El Niño. We showed in Figure 9 that the stratospheric aerosols were a non-factor in the HADNMAT2-ERSST.v4 difference. That leaves the solar variations. If the weighting of Grant Foster’s sunspot data is as skewed as I found with my regression analysis, we can dismiss his model.

Also, because of the reminder from Miriam O’Brien (Sou at HotWhopper), I’ve shown that a disparity in trends similar to the HADNMAT2-ERSST.v4 difference also exists between the UKMO HADSST3 data and the NOAA ERSST.v4 data, both of which are ship-buoy-bias-adjusted sea surface temperature datasets, for the period of 1998 to 2010 (Figure 10).

Last, the climate models used by the IPCC indicate, globally, marine air should be warming faster than sea surfaces since the early 1950s, which is the opposite of the relationship between the new overcooked NOAA sea surface temperature data and the night marine air temperature data NOAA used for bias adjustments.

Next in this series will be a more detailed look at the long-term data. The working title is Did NOAA Destroy a Perfectly Good Sea Surface Temperature Reconstruction with the Latest Upgrade?

About Bob Tisdale

Research interest: the long-term aftereffects of El Niño and La Niña events on global sea surface temperature and ocean heat content. Author of the ebook Who Turned on the Heat? and regular contributor at WattsUpWithThat.

RESPONSES

  1. Roger Hird says:

    I’m consistently impressed by what you write but I have one modest query.
    You mention using Excel for one of your analyses. I’m sure you know what you are doing but 15 years ago I was involved – as an administrator, not an expert – in the NPL programme of work on Software for Metrology (Metrology – not Meteorology). In that programme we looked very hard at the use of Excel for scientific/statistical work and found that some of its statistical algorithms were dangerously inaccurate. We were mainly looking at collections of numbers with many significant figures and varying only in the last few (i.e. ill-conditioned data) but even with more straightforward numbers results were sometimes surprisingly erroneous. We warned people using Excel of the pitfalls, suggested at least checking the results using reference data sets, recommended other products and even got NAG to produce some dependable plug-ins to make Excel more reliable. It may be that the data you are using don’t give rise to such problems or that MS have improved Excel but I just thought I’d mention it.
    RogerH

  2. omanuel says:

    The 2009 Climategate emails and six years of official excuses for deceit by Hansen, Foster et al. have exposed, and may finally end, THE GREAT SOCIAL EXPERIMENT OF 1945-2015 [1].

    The experiment was triggered by Aston’s warning about nuclear energy on 12 Dec 1922, and by unreported CHAOS & FEAR in Aug-Sept 1945 [2].

    See:

    1. THE GREAT SOCIAL EXPERIMENT OF 1945-2015 https://dl.dropboxusercontent.com/u/10640850/Social_Experiment.pdf

    2. Aston’s WARNING (12 Dec 1922); CHAOS and FEAR (Aug-Sept 1945) https://dl.dropboxusercontent.com/u/10640850/CHAOS_and_FEAR_August_1945.pdf

  3. Bob Tisdale says:

    Roger, thanks for the note. Basically, the only statistical analyses I use EXCEL for are linear trends, and those are always spot on or a tick away from the trends reported by others. The tiny differences could simply be caused by different base years for anomalies. The multiple linear regression analysis in this post was performed by an add-on software package.

    Hopefully, in the 15 years since you studied EXCEL, they’ve cured the other problems. One would hope.

    Cheers.

  4. Henry P says:

    Using Excel is fine. I am impressed with your analysis. 1998 is in fact a significant year: according to almost all data sets, including my own, this is when (plus or minus 1-2 years) earth reached its maximum output. From then we started to cool.
    Contrary to popular opinion I find there is no pause or “hiatus”.
    It is either warming or cooling; there is no middle way.
    Note my summary of minimum temperatures from 1973
