This is a discussion of the criticisms by the blogger Tamino about a couple of my recent posts. Tamino’s unjustified complaints were about my graphs of the divergence between the National Oceanographic Data Center (NODC) ARGO-era (2003 to present) Global Ocean Heat Content (OHC) data for the depths of 0 to 700 meters and the Goddard Institute for Space Studies (GISS) climate model projections/predictions for that Global OHC data.
This is a long post, almost 6,000 words. So I've included a summary at the beginning of this post, immediately after the introduction. Readers can then continue to read the rest if they choose. The headings of discussions are boldfaced and many of the illustrations are annotated, so readers can scroll down through the headings to find a topic if they have questions about specifics.
Not surprisingly, Tamino has once again disagreed with something I presented in a couple of my posts and has attempted to dispute it. This time he has responded to my recent First-Quarter 2011 Update Of NODC Ocean Heat Content (0-700 Meters) post that was cross posted at WUWT as The GISS divergence problem: Ocean Heat Content. There Anthony Watts provided an introduction.
Tamino’s disciples were much impressed with his presentation and added their OpenMind-prompted beliefs to the WattsUpWithThat thread. Please take a few moments to read Tamino’s response Favorite Denier Tricks, or How to Hide the Incline.
Let’s see where Tamino misses the mark this time.
Tamino failed in his efforts to discredit me, my simple model-data comparison graphs of Global Ocean Heat Content, and the posts that include those graphs.
Tamino failed to prove the start year of 2003 was cherry picked to provide the lowest trend. I first started posting those model-data comparison graphs with the earlier version of the OHC data. With that earlier version, 2003 did not provide the lowest trend, as it does now. So my first uses of 2003 as the start year for those graphs were not dependent on 2003 being the year that provided the lowest trend. NODC corrected and revised their OHC data in October 2010. Since that NODC update, 2003 has produced a low trend. On one hand, Tamino may not have known about the NODC's October 2010 changes to the OHC data, but he should read a post in its entirety before accusing someone of using data manipulation tricks. In the more recent of my posts that Tamino had referred to, I had noted that there had been recent changes to the data and I had provided links to the source and to my past posts that discussed those changes. So, on the other hand, Tamino also may actually have known about those changes to the NODC OHC data and ignored their impacts.
Tamino failed in that effort also because he chose not to believe what I had written, which was that I had used the start year of 2003 since that was the year ARGO observations became the dominant source of OHC data observations. I had other reasons that had gone unwritten in my two recent posts. One was obvious: the data has been flat since 2003. That fact is tough to miss. The other may not have been obvious: the continued use of 2003 allowed the start date to remain consistent with the same model-data comparison graphs in earlier posts at my blog and consistent with discussions at Roger Pielke Sr.'s website.
Tamino failed to prove that I had misrepresented the GISS model trends, which he had described as, "a blatant falsification of what the GISS prediction is." First, he did not present the GISS prediction in his graphs; he shifted subjects so quickly that many of his readers may not have noticed. And based on the comical choice of words used by Tamino's disciples in their comments on the WUWT thread (misuse, misleading, dishonestly, etc.), I have to believe that was the case. Specifically, Tamino changed from a discussion of model trends to a discussion of observational data trends during a warming period before the ARGO era. Second, Tamino then attempted to illustrate the point at which that data-based (not model-based) trend intersects with the ARGO-era data as the "honest method," but since he wasn't using model-based trends, his efforts were for naught. Third, his "honest method" did not consider the differences between a model-based trend and the data-based trend that Tamino chose to present. The point at which the model-based trend intersects with the ARGO-era OHC data is impacted by the revision level of the data and by the base years that GISS elects to use in their presentations of the models.
I discuss and illustrate all of those failures in Tamino’s post in the following. I’ve even tacked on an additional discussion after discovering another reference to my OHC posts in Tamino’s follow-up post Five Years.
This is the dataset introduction that appears in the most recent of the posts that Tamino referred to. It was the one cross posted at WUWT on Sunday, May 8, 2011.
The NODC OHC dataset is based on the Levitus et al (2009) paper "Global ocean heat content (1955-2008) in light of recent instrumentation problems", Geophysical Research Letters. Refer to Manuscript. It was revised in 2010 as noted in the October 18, 2010 post Update And Changes To NODC Ocean Heat Content Data. As described in the NODC's explanation of ocean heat content (OHC) data changes, the changes result from "data additions and data quality control," from a switch in base climatology, and from revised Expendable Bathythermograph (XBT) bias calculations.
I POSTED GRAPHS OF QUARTERLY DATA BUT TAMINO’S GRAPHS ARE OF ANNUAL DATA FROM AN EARLIER POST
Readers who are observant will have noted that Tamino shifted the presentation of the data from quarterly to annual. This discussion is provided simply to reduce any confusion that this may have caused.
Tamino writes as an introduction:
WUWT has a post by Bob Tisdale, based on one of Tisdale’s own posts. The theme is that ocean heat content (OHC) hasn’t risen as fast as GISS model projections. Watts even says “we have a GISS miss by a country mile.” But Tisdale can only support his claim by using tricks to hide the incline. In fact he uses two of the favorite tricks of deniers. One is a clever, but hardly new, trick called “cherry picking.” The other is ridiculously simple: misrepresentation.
My most recent OHC post First-Quarter 2011 Update Of NODC Ocean Heat Content (0-700 Meters) is a very simple post that advises readers that the NODC has posted its 1st quarter 2011 OHC data. Anthony Watts wrote a brief introduction and cross posted it at WUWT. My "First-Quarter post" is not based on the older post, ARGO-Era NODC Ocean Heat Content Data (0-700 Meters) Through December 2010, which Tamino cites; it is a separate post. I referred to the "ARGO-era post" in the "First-Quarter post", but it is not based on the "ARGO-era post". One very obvious difference: in the "First-Quarter post", the model-data comparison was presented on a quarterly basis. Refer to Figure 1.
But the data in the graph that Tamino elected to discuss was presented annually. It’s Figure 2 from my “ARGO-era post”, which I’ve included here as Figure 2.
It must have been easier for Tamino to use annual data for the rest of his failed critique. So I’ll use the annual data throughout the rest of this discussion so that the graphs and discussions agree with Tamino’s post and his graphs.
OPENING NOTES ABOUT THE GRAPHS
Figures 1 and 2 are simple graphs. Starting in 2003, they show the GISS climate model projections with global ocean heat content rising at a rate of 0.7*10^22 Joules per year, and they show the observed variations in global ocean heat content as determined by the NODC. One graph presents the data on an annual basis, and the other on a quarterly basis, which is the period chosen by the NODC for the delivery of their OHC product. I've had EXCEL determine the linear trends for the observations and provide the corresponding equations. Based on those linear trends, the quarterly data, Figure 1, shows that Global OHC is rising at a rate of 0.077*10^22 Joules per year, and the annual data, Figure 2, shows a rate of 0.05*10^22 Joules per year. Since Tamino chose to present annual data, let's discuss it. The GISS projection is rising at a rate that's about 14 times higher than the observed rate; put another way, the observations are rising at approximately 7% of the rate projected by GISS.
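For readers who want to reproduce this kind of trend comparison, here is a minimal sketch of the calculation described above. The annual OHC values below are invented placeholders, not the actual NODC data; the real series comes from the NODC website, in units of 10^22 Joules.

```python
import numpy as np

# Hypothetical annual OHC anomalies (10^22 Joules) for 2003-2010.
# These values are made up for illustration; substitute the NODC data.
years = np.arange(2003, 2011)
ohc = np.array([6.0, 6.1, 6.0, 6.2, 6.1, 6.2, 6.3, 6.35])

# Least-squares linear fit, equivalent to EXCEL's trendline equation.
slope, intercept = np.polyfit(years, ohc, 1)  # slope in 10^22 Joules/year

giss_rate = 0.7                # GISS projection trend, 10^22 Joules/year
ratio = giss_rate / slope      # how many times faster the projection rises
```

With the actual data, the fitted slope is the 0.05*10^22 Joules per year shown in Figure 2, giving a ratio of roughly 14.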
In the “First quarter post”, I wrote about the graph that appears here as Figure 1:
Looking at the NODC OHC data during the ARGO era (2003 to present), Figure 1, the uptick was nowhere close to what would be required to bring the Global Ocean Heat Content back into line with GISS projections.
There was nothing misleading in that statement. And in the “ARGO-era post”, I first discussed why I was lowering the GISS projection from 0.98*10^22 Joules per year to 0.7*10^22 Joules per year, and the sources of both projections. I wrote about Figure 2:
The GISS projection of 0.7*10^22 Joules per year dwarfs the linear trend of the ARGO-era NODC OHC data. No surprise there.
There was no surprise for me or for those who have read my earlier OHC posts that have included similar graphs, since I’ve been posting the OHC model-data comparisons since October 2009.
I did not state that these graphs falsified the models. Eight years of data is way too short for that. In his introduction of the most recent post, Anthony Watts did not state the graphs falsified the models. Yet the appearance of the graphs in the posts prompted Tamino and his followers to characterize those graphs with terms such as…
CHERRY PICKING AND MISREPRESENTATION?
In his opening salvo, Tamino accused me of cherry picking and misrepresenting the Ocean Heat Content data. He apparently doesn’t believe the basis for the start year of 2003 or understand the short history of my graph that compares the GISS climate model projections and the OHC data. And his accusation of misrepresentation is unfounded as we will see.
TAMINO’S ACCUSATION OF CHERRY PICKING
On cherry picking, Tamino writes and includes a quote from my “ARGO era” post:
Why does Tisdale give such a different impression? First let’s expose the cherry-picking part. To make it look as though observation is out of whack with prediction, Tisdale starts with 2003. His justification is to call this the “Argo-era,” which he claims he chose because
According to it, ARGO floats have been in use since the early 1990s, but they had very limited use until the late 1990s. ARGO use began to rise then, and in 2003, ARGO-based temperature readings at depth became dominant. Based on that, I’ll use January 2003 as the start month for the “ARGO-era” in this post.
I don’t believe him.
The fact is, I needed a start date for that post about ARGO-era data, a post that illustrated much more than the model-data graph. By 2003, ARGO buoys provided a significant contribution to the observations used in the calculation of Global OHC. The use of the word dominant, looking back at the "ARGO-era post", was an exaggeration. ARGO floats provided a significant contribution by 2003, not only by the number of samples, but by greatly increasing the spatial coverage of Southern Hemisphere waters.
Back to the discussion of cherry picking…
I explained why I selected 2003, and Tamino wrote, “I don’t believe him.” Tamino elected not to believe. His beliefs are his choice and they are not evidence of cherry picking on my part.
Tamino attempted to reinforce his belief by showing that 2003 would have had the lowest trend. I'll agree with one point: the trend from 2003 to 2010, as the data currently exists, is lower than the trends that run from 2002 to 2010 or from 2004 to 2010, but…
2003 DIDN’T ALWAYS PROVIDE THE LOWEST TREND FOR A SHORT-TERM OHC GRAPH
In the “First-Quarter 2011 Update” post, I included an introduction to the NODC OHC dataset. In part, it reads:
It [the NODC OHC data] was revised in 2010 as noted in the October 18, 2010 post Update And Changes To NODC Ocean Heat Content Data. As described in the NODC’s explanation of ocean heat content (OHC) data changes, the changes result from “data additions and data quality control,” from a switch in base climatology, and from revised Expendable Bathythermograph (XBT) bias calculations.
The 2010 update and changes had a significant impact on the short-term, ARGO-era OHC data. Figure 3 illustrates the 2009 version of the NODC OHC data and the version with the 2010 revisions. Both start in 2003 and have the 2003 values zeroed to help show the differences during the ARGO era. As described above, I started presenting the graph of OHC data versus the GISS model projection back in 2009. The 2009 version of the Levitus et al data would clearly have had a negative trend if 2004 had been selected as the start year, so 2003 would NOT have been the "cherry year" for that version.
Based on what has been presented so far, Tamino has not proven his claim that I had cherry picked the start year of 2003, basically because it wasn’t the ideal year to start a trend (one that contradicts the models) when I had first started presenting those OHC model-data comparisons.
Note: Another of the basic intents of presenting the data with the start year of 2003 is to show how flat the data has been since then. I'm not sure why that's so difficult to grasp. There was a significant rise in Global OHC from 2001 to 2003, Figure 4, and since then, the OHC data has been reasonably flat, far short of the linear trend projected by GISS. And as illustrated in Update And Changes To NODC Ocean Heat Content Data and the "ARGO-era post", the flattening is primarily the result of the significant decreases in North Atlantic and South Pacific OHC.
Using 2003 as a start year for my “ARGO-era post” also allowed that post to remain consistent with past OHC posts at my blog and with posts by Roger Pielke, Sr.
ROGER PIELKE, SR’s LITMUS TEST FOR GLOBAL WARMING
Since 2007, Roger Pielke Sr. has been recommending that OHC be used as A Litmus Test For Global Warming – A Much Overdue Requirement and recommending that OHC model projections be compared to OHC observations. In that 2007 post, he recommended that the comparison be communicated each year if not more often. He used 2003 as the start date for his "litmus test". Roger Pielke Sr. discussed the subject again in his February 9, 2009 post Update On A Comparison Of Upper Ocean Heat Content Changes With The GISS Model Predictions. In it, he compared annual observation values to GISS projections, starting in 2003. Those projections were based on the response by James Hansen of GISS. Pielke Sr. concludes that post with:
While the time period for this discrepancy with the GISS model is relatively short, the question should be asked as to the number of years required to reject this model as having global warming predictive skill, if this large difference between the observations and the GISS model persists.
And through 2010, the “large difference between the observations and the GISS model” has persisted. To avoid the controversy in the future, maybe I simply need to add a note to the graph, one that reads to the effect of “If ARGO-Era OHC Observations Continue To Run Far Below Model Projections, How Many Years Are Needed To Reject The Models?”
Since no one else was illustrating the difference between OHC observations and the GISS model projections on a regular basis, I began including the graph in many of my OHC posts. I believe my October 16, 2009 post NODC Ocean Heat Content (0-700 Meters) Versus GISS Projections (Corrected) was my first OHC post to include it. Shortly after that, I went into great detail to illustrate and discuss Why OHC Observations (0-700m) Are Diverging From GISS Projections.
I ACTUALLY LOWERED THE GISS PROJECTION RECENTLY
In the “ARGO-era post”, I lowered the GISS projection from 0.98*10^22 Joules per year (which was based on Pielke Sr’s discussion of the Hansen response) to 0.7*10^22 Joules per year, so that the projections would fall in line with the recent RealClimate model-data comparisons. I wrote:
In past posts, when I’ve compared the NODC Global Ocean Heat Content to GISS projections, I’ve used the rate of 0.98*10^22 Joules per year for the GISS projection. This value was based on Roger Pielke Sr’s February 2009 post Update On A Comparison Of Upper Ocean Heat Content Changes With The GISS Model Predictions. The recent RealClimate posts Updates to model-data comparisons and 2010 updates to model-data comparisons have presented the projections based on Gavin Schmidt extending a linear trend of the GISS Model-ER simulations past 2003. The linear trends in both graphs are approximately 0.7*10^22 Joules per year. I’ll use this value in the comparison, but first a few more notes.
I used the 0.7*10^22 Joules per year trend again in my “First-Quarter 2011 Update” post (that’s the one that initiated the Tamino response), but I’m having second thoughts now. The difference between the RealClimate value and the “Hansen response/Pielke post” value of 0.98*10^22 Joules per year is curious, and will be the subject of a future post.
TAMINO FORGETS THE BASICS
In his post, Tamino writes:
Now let’s look at the misrepresentation — specifically a blatant falsification of what the GISS prediction is. I don’t know exactly what the GISS model prediction for OHCA is, neither does Tisdale, he just “eyeballed” it from the RealClimate graph…
Eyeballed? Reading a graph is a simple task one learns in grammar school. In my "ARGO-era post" I provided links to the RealClimate posts that compared model projections to observations. Here they are again: Updates to model-data comparisons and 2010 updates to model-data comparisons. They were the basis for the model projections I've used. Tamino also included the OHC comparison graph from the 2010 RealClimate update in his post and characterized it as "an honest comparison of these observations with prediction…" In Figure 5, I've added a few notes to the 2010 RealClimate graph to remind those who have forgotten how to read a graph. I hope I don't have to provide a more detailed discussion than what's shown on Figure 5. The result, as shown, is that the linear extrapolation of the climate model ensemble mean has a trend of approximately 0.7*10^22 Joules per year.
THE CLAIMED MISREPRESENTATION
I stopped the Tamino quote above in mid-paragraph. Here it is in its entirety:
Now let’s look at the misrepresentation — specifically a blatant falsification of what the GISS prediction is. I don’t know exactly what the GISS model prediction for OHCA is, neither does Tisdale, he just “eyeballed” it from the RealClimate graph. But let’s look at what the prediction would be for a simple linear extrapolation. The RealClimate trend line starts about 1993, so let’s take the data from 1993 through 2002 and fit a straight line, then extend that line as a prediction through 2010. We’ll call it “prediction by extrapolation.” It guarantees that our prediction line will have the correct slope and intercept to match a true continuation of the trend. And it gives this:
If you weren't paying attention, you may not have noticed what Tamino just did. Tamino switched from a discussion of the GISS model prediction to a discussion of the linear trend line of the OHC "data from 1993 through 2002". I presented the Model Projection (prediction) in my post, and Tamino presented the linear trend of the OHC data (current version) in his. They are not the same.
Tamino's first trend graph sparked my curiosity about a few things. The linear trend of the OHC data (current version) for the period Tamino elected to show (1993-2002) is about 0.58*10^22 Joules per year, which is below the model prediction of 0.7*10^22 Joules per year. Refer to Figure 6. And for comparison purposes, I've also included the data for an older version of the Levitus et al OHC data. The older data is still available through the NODC website at their Heat content 2004 webpage. Not surprisingly (since the models would have been initially compared to earlier versions of the OHC data and tuned accordingly), the linear trend of the older OHC data (approximately 0.67*10^22 Joules per year) runs closer to the model prediction.
So far, I have not misrepresented the linear trend of the GISS model projection/prediction in any way. I also have not misrepresented the Levitus et al OHC data. Tamino’s claim of misrepresentation must come from something else. Maybe it’s the appearance of the graph?
WHERE THE MODEL PROJECTION INTERSECTS WITH THE OHC DATA
In his final three paragraphs, Tamino writes:
But Tisdale didn't do that. He chose a slope to match his "eyeball" estimate of the trend line in the RealClimate graph, but chose the intercept to match 2003. He even states "Note that I've shifted the data down so that it starts at zero in 2003." Let's call that the "Tisdale method" and compare it to the honest method when extrapolating the trend line:
Sorry, Bob. When you try to match a line’s slope, but then shift that line upward, choosing the intercept deliberately to make the prediction look as bad as possible, that’s dishonest.
It’s also one of the most common tricks that many denialists have used to “hide the incline.” That, and cherry-picking, just might be their favorites.
I’ve included Tamino’s graph that includes the “Tisdale method” as Figure 7.
Apparently, Tamino believes that a comparison of the GISS model projection that intersects the OHC data midway between 2003 and 2010 would better represent the comparison. Refer to Figure 8. The linear trend of the model projection is still about 14 times higher than the linear trend of the ARGO-era (2003-2010) OHC observations.
Let’s take a look at a visual comparison of the graph Tamino finds offensive (Figure 2) and a graph that Tamino might not find offensive (Figure 10). Animation 1 is a .gif animation that shows the comparison graphs of the GISS Model Projection versus ARGO-era OHC Observations:
1. with the Ocean Heat Content Data and GISS Model Projection zeroed at 2003, and
2. with the GISS Model Projection Intersecting With The Data Midway Between 2003 and 2010
Both show that the GISS Model Projection is about 14 times higher than the NODC Global Ocean Heat Content Data.
THE “FIT” OF THE MODEL WITH OBSERVATIONS, OF COURSE, DEPENDS ON THE REV. LEVEL OF THE DATA AND ON THE BASE YEARS
This is a discussion of the model projection/prediction, not the linear trend of the data from 1993 to 2002 that was used by Tamino.
Figure 9 is the comparison of the 2009 version of the NODC OHC data and the GISS Model–ER from the RealClimate post Updates to model-data comparisons, Gavin Schmidt of GISS notes the following about the base years he used for the model data:
Note, that I’m not quite sure how this comparison should be baselined. The models are simply the difference from the control, while the observations are ‘as is’ from NOAA.
He further explains his baseline for the model data in his reply to blogger Chad. Refer to comment 188 and the reply at 29 Dec 2009 at 10:19 PM. With respect to OHC, his reply reads:
…for ocean heat content it is more important and I plotted the drift corrected values in the second figure. You still need to baseline things (as I did in figure 1, following IPCC), but I’m still not sure what the OHC data are anomalies with regard to, and so I haven’t done any more processing for that. As it stands the spread in the OHC numbers is related to absolute differences in total heat content over the 20th C – if you just wanted the change in heat content since the 1960s or something, the figure would be a little different.
In other words, the base years for the GISS model in Figure 9 were established by a complicated method. And if you were to read Levitus et al (2009), you'd discover that Gavin Schmidt is correct: determining what they had used for a climatology in that version is confusing. Note also that the presentation of the data in Figure 9 runs from 1955, the start of the NODC OHC dataset. The climate model is identified as the coupled GISS Model-ER, with the "R" standing for Russell ocean.
In October 2010, the NODC revised and corrected its Ocean Heat Content data. As mentioned earlier, I discussed those changes in the post Update And Changes To NODC Ocean Heat Content Data. In addition to the changes to the ARGO-era data shown in Figure 3, the revisions and corrections lowered the overall global OHC trend by approximately 9%. That was a sizeable decrease, with most of it occurring in the Southern Hemisphere. If you were to compare the NODC OHC data in both of the RealClimate model-data updates, Figures 9 and 10, you’d notice they’re different (because of the corrections to the data between the two RealClimate posts).
Figure 10 is a similar comparison from the 2010 updates to model-data comparisons post at RealClimate. For it, Gavin Schmidt writes:
I am baselining all curves to the period 1975-1989, and using the 1993-2003 period to match the observational data sources a little more consistently.
You’ll note that the model ensemble members are more closely grouped in this presentation. In other words, the span of the ensemble members during the period of 1975-1989 is much smaller in the 2010 update than it was in the 2009 update. RealClimate has also excluded the data before 1970 in the 2010 update. It’s a cleaner presentation, even with addition of the Lyman et al (2010) data.
So far RealClimate has presented the OHC data and model outputs two ways, using different base years. Recall that between those two RealClimate posts, the NODC revised and corrected its OHC data. Now note where the linear extrapolations from the model means intersect the data in both RealClimate graphs. In Figure 9, it’s much closer to 2010 than in Figure 10. That should be due primarily to the significant revisions and corrections to the observations.
Figure 11 is yet another GISS model-data comparison. It is from a 2008 presentation by Gavin Schmidt of GISS. The graph can be found on page 8 of the .pdf file GISS ModelE: MAP Objectives and Results. I provided a link to this presentation in the “ARGO-era post,” for “those who might be concerned that extending the linear trend does not represent the actual model simulations.” One difference with this graph is the addition of the coupled GISS Model-EH, where “H” represents the HYCOM ocean model. The NODC OHC data has the hump from the 1970s to the 1980s, and based on the timing of this presentation, it should be the NODC OHC data based on Levitus et al 2005, linked earlier. That dataset ended in 2003, so Gavin Schmidt has tacked on a few more years of data. Notice the dashed lines from 2003 to 2004. A significant difference with this graph is the units. All of the data in this post so far has been presented in terms of 10^22 Joules. The units in Figure 11 are watt-years per square meter.
I’ve highlighted the 2003 OHC observation and the base years of 1955 to 1970. Why did Gavin Schmidt use 1955 to 1970? Using those base years for the models and the data allowed him to show that the two models “bracketed” the observations. Refer to his note at the bottom of the slide. But for the graph in Figure 10, he was “baselining all curves to the period 1975-1989, and using the 1993-2003 period to match the observational data sources a little more consistently.” So it’s apparently acceptable practice by climate scientists to adjust the data as one sees fit to present the effect one wishes to illustrate. It could be to bracket the observations or to “match” the observations.
In my simple model-data graphs, I elected to show the model projection intersecting at the beginning of the ARGO-era data instead of intersecting with it elsewhere. It was my choice. But let’s consider something else.
Notice also how the ensemble mean for the GISS Model-ER data LEADS the observations at 2003 in Figure 11. As noted earlier, the older version of the NODC Global OHC data (0-700 meters) on an annual basis is still available through their website (older), and, of course, so is the current version (current). We can change the base years of both versions to 1955-1970, the same base years used by Gavin Schmidt in his presentation, and then plot both datasets. Refer to Figure 12. With those base years, would the GISS Model-ER data have intersected with the current version of the NODC OHC data during 2003 to 2010? No. In 2003, the older version of the OHC data lags the model data, and the current version of the data lags the older version.
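The re-baselining step behind Figure 12 is straightforward, and a minimal sketch may help readers who want to try it themselves. The two series below are invented stand-ins for the older and current NODC OHC versions; only the re-baselining logic is the point here.

```python
import numpy as np

# Invented stand-ins for two versions of an annual OHC anomaly series.
# The real inputs would be the older and current NODC datasets.
years = np.arange(1955, 2011)
rng = np.random.default_rng(0)
older = np.cumsum(rng.normal(0.1, 0.2, years.size))
current = older * 0.91   # rough stand-in for the ~9% lower revised trend

def rebaseline(series, years, start=1955, end=1970):
    """Shift a series so its mean over the base years is zero."""
    mask = (years >= start) & (years <= end)
    return series - series[mask].mean()

# Both versions shifted to the 1955-1970 base period used in Figure 12.
older_rb = rebaseline(older, years)
current_rb = rebaseline(current, years)
```

Changing the base period only shifts each curve vertically; it never changes a trend, which is why the choice of base years controls where a model curve appears to intersect the data.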
What can we conclude from this part of the discussion? The point at which the GISS model mean or its linear extrapolation intersects with the global OHC data depends on the version of the data and on the base years selected by those presenting the data, which depends on what the presenter wants to show. It also illustrates that my starting the GISS Model data at 2003 does not misrepresent the GISS projection.
Some readers might describe Tamino’s post as smoke and mirrors.
SPEAKING OF SMOKE AND MIRRORS
A last minute addition to the post: I just discovered Tamino’s follow-up post Five Years.
In fact I have a prediction: that Bob Tisdale will deny he meant what he meant with his deceptive graph tricks, instead he’ll plead that he was just talking about the “trend” since 2003. Yeah … since 2003.
It’s all smoke and mirrors.
No. I haven't lost sight of the fact that the graphs that Tamino finds so offensive show the observations have been relatively flat since 2003, a period I have described as the ARGO era. And since the model projection does not flatten, the observations are diverging from the GISS Model Projection. We can illustrate this another way. We can subtract the observations from the model projections, Figure 13. Because the observations are so flat during that period, we can show that the difference between the model projections and the observations is growing almost as fast as the model projections.
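The subtraction behind Figure 13 can be sketched in a few lines. The observation trend below is an assumed flat-ish placeholder (0.05*10^22 Joules per year, matching the annual trend discussed earlier), not the actual NODC series.

```python
import numpy as np

# Model-minus-observations divergence, as in Figure 13.
# Both series are zeroed at 2003; the observation slope is an assumption.
years = np.arange(2003, 2011)
projection = 0.7 * (years - 2003)     # GISS trend, 10^22 Joules/year
observations = 0.05 * (years - 2003)  # assumed near-flat ARGO-era trend

divergence = projection - observations  # grows at ~0.65*10^22 Joules/year
```

Because the observation trend is small, the divergence grows at nearly the full projection rate, which is the point the figure makes.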
Tamino then discusses why he is smoothing the datasets with 5-year time spans. Later, in his reply to a blogger’s comment at May 10, 2011 at 5:16 am, Tamino describes how he’s smoothed the data:
[T]he data points are successive non-overlapping 5-year means — about as simple as it gets. The smoothed curves are a lowess smooth of the original data.
Tamino also throws in another remark that refers to Anthony Watts and me while he’s discussing his Ocean Heat Content graph:
Let Bob Tisdale and Anthony Watts focus on too-too-short time scales — when you look at the big picture, again the trend is clear. Upward.
For those who are trying to figure out what Tamino has done to the data in those graphs, let me explain it in more detail. With the Ocean Heat Content anomalies, he’s averaged the data from 1955 to 1959 and shown it as a 1957 data point. The next data point is five years later, 1962, and it represents the average of the OHC data from 1960 to 1964, and so on. And between the 5-year data points, there are straight lines. I’ve reproduced Tamino’s 5-year span filter in Figure 14, and added the original OHC data. I’ve also highlighted the years with the data points. As noted on the graph, Tamino’s method samples 5-year averages on 5-year intervals. But don’t the 5-year averages of the years between those 5-year intervals have any significance? Why not sample those as well? Why not utilize a more commonly used smoothing method: a 5-year running-mean (running-average) filter? Tamino has used running-mean filters in earlier posts. GISS uses a 5-year running-mean in their presentation of annual data on their Graphs webpage.
Why didn’t Tamino present the data smoothed with the more commonly used 5-year running-average filter? Because the data that’s been smoothed with a 5-year running-average filter, as shown in Figure 15, flattens in recent years.
The Ocean Heat Content data is not as noisy as the other datasets Tamino presented in that post, so he probably could have used a 3-year running-mean filter, Figure 16. But that would have extended the relatively flat period back to 2003.
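The two smoothing methods discussed above differ only in implementation, and a short sketch makes the contrast concrete. The OHC series here is invented; only the smoothing mechanics matter.

```python
import numpy as np

# Invented annual OHC-like series, 1955-2010.
rng = np.random.default_rng(1)
years = np.arange(1955, 2011)
ohc = np.cumsum(rng.normal(0.1, 0.15, years.size))

# Non-overlapping 5-year means (Tamino's method): one point per block,
# e.g. the 1955-1959 average plotted at 1957, the 1960-1964 average at 1962.
n_blocks = years.size // 5
block_means = ohc[:n_blocks * 5].reshape(n_blocks, 5).mean(axis=1)
block_years = years[:n_blocks * 5].reshape(n_blocks, 5).mean(axis=1)

# Centered 5-year running mean (the method GISS uses): one point per year,
# with two years trimmed from each end.
running_mean = np.convolve(ohc, np.ones(5) / 5, mode="valid")
running_years = years[2:-2]
```

The running mean produces a point for every year, so flattening in the most recent years remains visible; sampling only every fifth 5-year average discards that detail.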
Tamino’s graphs show what he wants to show. My graphs show what I want to show. As Richard M wrote in his May 10, 2011 at 4:06 pm comment on the WUWT thread, “Looks to me like this debate is much ado about nothing. Both views are reasonable approaches. Neither one is clearly right or wrong, they are just different ways of looking at the data.” As far as I’m concerned, that comment is applicable to Tamino’s “Five years post”, too.
A TOPIC FOR A FUTURE POST
I had wanted to discuss the difference between the two GISS projections. For the last two OHC posts, I have used the projection trend that’s illustrated in the RealClimate model-data posts of 0.7*10^22 Joules per year. Before that I had used the trend of 0.98*10^22 Joules per year from the Hansen response and Pielke Sr. post. But this post is much too long to start a new discussion, so I’ll save it for a future post.
I will, however, show both model-projection trends in a final model-data comparison graph, Figure 17. Note the question I’ve added to it. It implies that I understand the period is too short to disprove the climate models, but it also reinforces that observations are rising at a rate that is significantly less than model projections during the ARGO era.
I abstained from responding to the unwarranted comments from Tamino’s disciples on The GISS divergence problem: Ocean Heat Content thread at WUWT. I felt it was more important to document and illustrate where Tamino’s critique failed. But many people did take the time to reply to Tamino’s followers, and to them I’d like to say thanks.
Tamino can’t touch you with a ten-foot pole. Your rebuttal to his accusations is thorough and addresses all the issues he weakly and ignorantly attempted to tar you with. He never put a spot on you.
I’m a little befuddled about why you even felt you had to respond to him and his cat calls from below the peanut gallery. But I’ve come to have a high respect for your judgement and I’ll chalk it up to “your call” and “why not”. Have a great day!
Pascvaks says: “I’m a little befuddled about why you even felt you had to respond to him and his cat calls from below the peanut gallery.”
I would have had to respond to the peanut gallery each time I presented the graph in the future. This way it’s over and done with–well, at least, one would think. But who knows?
And thanks for the kind words.
Great post! I’d like to add a comment I tried to submit (as comment #2) to the “Five Years” thread, but which was deleted silently(*) by him. I didn’t want to go into a discussion about his deceptive graphing methods (thank you very much for that effort!), but instead wanted to see if he would let this through… and of course, he wouldn’t, because it’s so revealing, isn’t it? A very short time period of warming is used as the “key finding” in a high-profile research report – so much for “the tricks deniers use”!
Espen | May 10, 2011 at 7:04 am | Reply
Your comment is awaiting moderation.
This is not limited to “deniers”. See “Key finding 1” in the executive summary of the SWIPA report from AMAP: The past six years (2005–2010) have been the warmest period ever recorded in the Arctic. Higher surface air temperatures are driving changes in the cryosphere.
(*) Apparently he blacklisted me after I tried to discuss his silly use of Bayes’ Theorem to replace one difficult question with another difficult question.
Is the trend from 2003 to 2010 (of the data) that you graph in the first chart statistically significant?
Why are you comparing a model trend that you computed from 2000 to a data trend that starts in 2003?
Is the model trend over that period statistically significant?
I suggest that you add a few general words about OHC and why it is appropriate to compare the observed annual increase with the model projections for annual increase.
My (perhaps oversimplified) understanding is that, since the heat capacity of the ocean far exceeds that of the atmosphere and land, the annual change in OHC is effectively a record of the average imbalance between globally averaged incoming radiation and globally averaged outgoing radiation.
This means that short term (quarterly or annual) measurements of OHC do indeed have significance. So showing the start point of the “GISS projection” as 2003 makes sense, and it is not necessary to average it out over a decade or longer span.
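To put the OHC trends in the same units as that radiation imbalance, a back-of-envelope conversion helps (a sketch; the surface area and seconds-per-year figures are rounded approximations):

```python
EARTH_SURFACE_M2 = 5.1e14   # approximate surface area of the Earth
SECONDS_PER_YEAR = 3.156e7

def joules_per_year_to_wm2(joules_per_year):
    """Convert a global heat-accumulation rate (Joules per year) into an
    equivalent globally averaged flux imbalance (Watts per square meter)."""
    return joules_per_year / (EARTH_SURFACE_M2 * SECONDS_PER_YEAR)

# The 0.7*10^22 Joules/year projection trend is roughly 0.43 W/m^2
print(round(joules_per_year_to_wm2(0.7e22), 2))
```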
You’ve written a 6K word response to Tamino to fight against some very valid complaints re: your original commentary as cross-posted to WUWT. Your response didn’t adequately defend against my most strident complaint, that relating to the terribly misleading Figure 2 on the WUWT cross-post (hereafter referred to as “Notorious Fig2”). I made that point here, here, and here.
A most elementary tenet of graphical presentation is that a graph must stand on its own in conveying ideas. It shouldn’t depend on external text telling people what scale applies to what graphic element; what to glean and what to ignore. In that basic respect Notorious Fig2 fails miserably, no matter what you claim. The graph’s headings, X and Y scales, and other descriptors result in misleading impressions relative to “GISS Projections”, both visually and in a numeric/data-point sense. It helps spur comments like “we have a GISS miss by a country mile”. If you were to print out that graph, hand it to a technically competent person, and ask for impressions of what the graph is telling you, I guarantee they wouldn’t be limited to “oh look, the trend slopes are different”. It’s inexcusable, Bob – you don’t get to judge that “It also illustrates that my starting the GISS Model data at 2003 does not misrepresent the GISS projection”. Notorious Fig2 clearly depicts an OHC “GISS Projection” at the start of 2003 of 9.6*10^22 Joules … that is most definitely “a blatant falsification of what the GISS prediction is.”
Figure 8 in this response matches the suggestion I made as one way you could have avoided blow-back on the ridiculous Notorious Fig2, and instead concentrated on the cherry-picking and the too-short trend timeline.
Sometimes it’s just better to take criticism on its merit, make adjustments that still convey your point, and move on, rather than harming your credibility by trying to defend the indefensible. You’d also waste less time constructing defensive blog responses.
KenM says: “Is the trend from 2003 to 2010 (of the data) that you graph in the first chart statistically significant?”
It’s significant for that period. I’m not implying that it could be extended into the past or the future.
You asked, “Why are you comparing a model trend that you computed from 2000 to a data trend that starts in 2003?”
The “model trend” is actually based on a trend analysis that GISS performed of the model mean for the period of 1993 to 2002. They then extended that linear trend through to 2010.
You asked, “Is the model trend over that period statistically significant?”
RealClimate felt it was. And if we look at an example of a model mean that runs until 2010, Figure 11, it’s a reasonably straight line from the early 1990s to 2010.
Charlie A says: “I suggest that you add a few general words about OHC and why it is appropriate to compare the observed annual increase with the model projections for annual increase.”
Agreed. For future posts I’ll have to try to remember to refer to a couple of the recent posts on OHC, like Jeff Id’s (the Air Vent). Pielke Sr. did a nice introduction for it:
Jack Greer: I believe I have addressed all of your concerns.
Great post; and the most glaring defect with the OHC record, the 2002-2003 spike, is not addressed by Tamino; if that huge spike is removed from the record [as it should be because it cannot exist in terms of energy sources] then the OHC record is even more of an indictment of the failure of AGW and its supporters.
(Reposted from another blog where Bob chose to chime in.)
If your analysis was genuine what you would do is:
1. Perform a separate linear regression on the post-2003 data. That would give you the line of best fit. The way you’ve done the line of best fit is invalid to the point of being fraudulent.
2. Compare that line of best fit to the pre-2003 data line of best fit by:
a. establishing if the gradient of the line of best fit was statistically significantly different for your new model compared to the old one.
b. establishing if the intercept of the new line was different from the old one.
Until you’ve done this, your hypothesis has no merit and comes straight out of the old book “How to Lie with Statistics”. Which you can purchase here, although it looks like you don’t need any more tips to me.
Indeed you have, Bob … you’ve confirmed that you really don’t place a premium on ensuring information is presented in an honest fashion. As you said, it was your conscious choice on how to present the information in Notorious Fig2. But sometimes people make bad choices, and are hurt more by their stubborn attempts at justification than they are by the bad choice itself.
You need the uncertainty measurements. What are the confidence intervals of the regression terms – specifically of the intercept and the gradient? Excel’s regression procedure is not just magic voodoo that you type into a computer, it’s a well understood statistical procedure called ordinary least squares (OLS) regression. Although Excel will hide it from you, the uncertainty terms are easy to estimate, using the widely recognised methodology. At present you’ve estimated the intercept by visual inspection, which is a notoriously subjective methodology, and you have no estimate of the uncertainty terms.
I’m trying to help you establish the truth of your argument here, although my substantial experience with OLS-type analysis suggests to me that you won’t have much luck. The procedure I recommend will show whether your assumptions can be justified by an objective set of criteria. If you don’t know how to do this kind of analysis yourself, give me the data that you used, and I’ll do the analysis of the uncertainties myself and pass them back (with enough information so that you can replicate them independently yourself).
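For anyone who wants to try this, the uncertainty terms kdkd refers to are standard by-products of OLS. A minimal sketch (in Python; the 8-point series at the end is hypothetical, purely to show the mechanics, not the NODC data):

```python
import math

def ols_with_uncertainty(x, y):
    """Ordinary least squares fit y = a + b*x, returning the intercept,
    slope, and their standard errors (assuming independent errors)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    s2 = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    se_b = math.sqrt(s2 / sxx)                        # slope std. error
    se_a = math.sqrt(s2 * (1 / n + xbar ** 2 / sxx))  # intercept std. error
    return a, b, se_a, se_b

# Hypothetical 8-point series standing in for the 2003-2010 annual values
years = list(range(2003, 2011))
ohc = [9.6, 9.9, 9.8, 10.1, 10.0, 10.2, 10.3, 10.2]
a, b, se_a, se_b = ols_with_uncertainty(years, ohc)
print(round(b, 3), round(se_b, 3))
```

A rough 95% confidence interval on the slope is then b plus or minus the appropriate t critical value times se_b.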
kdkd says: “You need the uncertainty measurements.”
kdkd: You’re missing the intent of the graph. It is a simple reminder that Global OHC is not rising at a rate that’s comparable to the model projections. You’re reading too much into the graph.
There are links to the data in the post. Feel free to continue the analysis and report back.
The only way you can demonstrate that OHC is not rising at a rate that’s comparable to the linear slope of the pre-2003 data (that’s what you mean by “model projections”, isn’t it?) is to show that the intercept and slope are statistically significantly different pre-2003 versus post-2003. Anything less is conjecture.
OK, show me where exactly – point it out exactly. Don’t make me work for this, it’s your argument, you need to show that it’s valid by objective criteria. So far you haven’t done this. All we have is a description of subjective criteria, and your word.
kdkd: If you had read the post instead of just looking at the graphs, you would have known where the data was linked:
OK, I won’t waste your time further, but the fact of the matter is that the regression model for 2003-2010 does not provide enough data to make a statistically significant prediction. In order to make a prediction which is statistically significant (i.e., meaningful), you need to use the data back to 1993. If the inclusion of the data from 2003 onwards causes a significant variation in slope or intercept, we can then conclude that there’s something going on. However, my preliminary look at the data that you used shows that using only the data from 2003 gives a misleading result, as its predictive power is no better than chance due to an insufficient number of observations.
I’ll have a look at whether that is the case tomorrow 🙂
kdkd: Lots of work and discussion for a simple graph that shows the rise has flattened, while the projection has not. I haven’t made any statements either way about the graph. It’s an observation.
As noted in the post, this is much ado about nothing.
“. . .the most glaring defect with the OHC record, the 2002-2003 spike, is not addressed by Tamino; if that huge spike is removed from the record [as it should be because it cannot exist in terms of energy sources] then the OHC record is even more of an indictment of the failure of AGW and its supporters.”
Careful what you wish for, Cohenite; if that spike goes away, so does the “flattening!”
No, you’re claiming that your analysis is valid. However, your eyeball analysis should be confirmed by objective means if you are to make valid, strong conclusions. You appear to be stating that this is unnecessary and that your conclusions stand without it. Which is incorrect.
The jury is back. By reasonable objective criteria, your subjective examination of the data is not justifiable. Details, and instructions for replication here: http://pastie.org/pastes/1901786/text
cohenite says: “…if that huge spike is removed from the record…”
cohenite, they significantly reduced the spike with the Oct 2010 corrections/revisions.
Animation is from this post:
Kevin McKinney says: “Careful what you wish for, Cohenite; if that spike goes away, so does the ‘flattening!'”
Kevin, even with revisions, the data is still reasonably flat.
kdkd: Thanks for the analysis. I’ll be sure to include a link to it and a note in my next OHC post (should be in a few weeks).
A question: Based on your experience, why does the test for the annual data that runs from 2003 to 2010 show that the regression slope and intercept are not significantly different than zero? That is, what is it about that short-term data that shows the linear model does not predict better than chance?
I recommend that you go and read a significance table for Pearson’s r. Short-term data just can’t provide the precision you need in order to support your argument for this type of noisy dataset. Plus, as you’re looking at time-series data, that causes an even greater loss of statistical power due to autocorrelation (which is a term used to describe the violation of the assumption that each data point is independent of each other data point). I don’t tend to worry about autocorrelation because I use crude methods to argue against the deniers’ even cruder arguments.
Or to use a sporting analogy: unless you start looking at multi-decadal data sets in an attempt to support your argument, the logic of statistics keeps you on a losing wicket against a bouncing spin bowler.
kdkd says: “I recommend that you go and read a significance table for Pearson’s r. Short term data just can’t provide the precision you need in order to support your argument for this type of noisy data set.”
In other words, the term of 2003 to 2010 has only 8 data points and those 8 data points can’t provide the precision needed to support my argument.
You continued, “…unless you start looking at multi-decadal data sets in an attempt to support your argument…”
So I need more data points. The term you also compared, 1992-2002, only has 11 data points, and that was enough to provide the necessary precision.
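For reference, the “significance table” point can be made concrete. Under the usual t-test for a correlation, the smallest Pearson’s r that reaches p = 0.05 can be computed directly (a sketch; the critical t values are copied from a standard table):

```python
import math

# Two-tailed 5% critical values of Student's t, from a standard table
T_CRIT_05 = {6: 2.447, 9: 2.262}  # keyed by degrees of freedom (n - 2)

def critical_r(n):
    """Smallest |Pearson's r| significant at p = 0.05 (two-tailed)
    for n paired points, via r = t / sqrt(t^2 + df) with df = n - 2."""
    df = n - 2
    t = T_CRIT_05[df]
    return t / math.sqrt(t ** 2 + df)

print(round(critical_r(8), 3))   # 8 annual points (2003-2010): ~0.71
print(round(critical_r(11), 3))  # 11 annual points (1992-2002): ~0.60
```

So with only 8 annual points the correlation must be quite strong before the trend clears the conventional significance bar, though the 11-point bar is not much lower.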
No, not really. However I was trying to conduct the argument on your terms. There comes a point (removing context and bucking against central limit theorem) when that’s no longer possible because your argument was sailing close to the wind in the first place. Even before you started preferring visual inspection over more objective methods.
kdkd: Then having 30+ data points from 2003 to present, using the quarterly data, would not help matters:
The graph is from the post that initiated the discussion:
Using quarterly data would not make sense due to the planet’s annual cycle. If you did so, without considering this, or autocorrelation, it would give the illusion of greater certainty, but that would not be justifiable in terms of the physical phenomenon under investigation.
kdkd: Thanks for the discussion. I’ll throw a link up at the WUWT version of this post for those who are interested.
An afterthought, Hansen et al (draft) was satisfied using a 6-year running-trend to illustrate the flattening of Global OHC. Refer to their Figure 13:
Click to access 20110415_EnergyImbalancePaper.pdf
kdkd: If you don’t get significant regression results, it doesn’t necessarily mean that you don’t have enough data, it may also be that there is no linear dependency in the data. In fact, it doesn’t help to have 40 years of data if it consists of OHC measurements similar to those 8 last years, you still won’t get significant regression results.
You yourself show that a short period can have a significant slope, when running linear regression on the 1992-2002 data.
And here’s proof that even 6 years can be enough to get a significant slope:
I used the 3-monthly data from 1998-2003 (inclusive) and loaded it into the R variable “ohcshort”:
somehow this was truncated in the above R transcript (first line):
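Since the R transcript above was truncated, here is an equivalent illustration of Espen’s point in Python, using synthetic numbers (not the actual OHC series): a short series can still yield a highly significant slope if the trend is steep relative to the noise.

```python
import math

def trend_t_statistic(y):
    """t-statistic for the OLS slope of an evenly spaced series
    (slope divided by its standard error); with 6 points, |t| must
    exceed about 2.78 (df = 4) for significance at p = 0.05."""
    n = len(y)
    x = list(range(n))
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    s2 = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    return b / math.sqrt(s2 / sxx)

steep = [0.0, 1.1, 1.9, 3.1, 4.0, 5.1]         # steady rise, little noise
flat_noisy = [0.0, 1.0, -0.5, 0.8, -0.2, 0.4]  # no clear trend
print(round(trend_t_statistic(steep), 1))       # far above 2.78
print(round(trend_t_statistic(flat_noisy), 2))  # far below 2.78
```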
Yes. If that trend continues through beyond the solar minimum there is certainly something going on. It’s important not to discard context from these kinds of analysis.
And while Bob’s busy not rejecting Hansen’s work, here’s a key quote from that paper:
kdkd: That comment from Hansen is based on a yet not (I think) published paper by Von Schuckmann et al which gives an energy balance for the Argo era which is at odds with all other accounts, it seems. I think it’s slightly amusing that you now resort to Hansen who is citing an estimate based on an even shorter time frame (2005 – 2010) than Bob used!
…in fact, given the SWIPA report I link to above, it seems like using cherry-picked ~5 year time frames has become a “favorite warmist trick” 😉
Actually if you provide context (deepest solar minimum in some time), then indicative conclusions are fine. I wouldn’t go making firm conclusions until we’re out of the solar minimum though. I’d expect the OHC estimates to start rising again, and if not, that will be a sure indicator that something interesting or odd is going on.
It’s the denier trick of plucking a number out of context because it suits the confirmation bias – 1998 in the temperature record (ignoring the El Niño of that year) is the classic; 2003 for the OHC figures while ignoring the solar minimum – which is what causes the problems with the denier argument.
If you argue in full context with the rest of the available data, I’ll start taking you guys seriously, but fiddling around the edges, wiggle watching, etc., does not strengthen your argument except for the credulous or the astro-turfers.
Josh Willis says the Argo network OHC down to 700M is increasing at 0.16 Watts/m2/year.
Johnson and Purkey found that there might be 0.095 Watts/m2 going into the deeper oceans (between the late 1990s and early 2000s).
Hansen says the non-ocean Earth environment is absorbing about 0.071 Watts/m2 (Land – 0.024; Ice Sheets 0.047).
So, altogether, the energy is accumulating at 0.326 Watts/m2. The atmosphere is accumulating none, since temperatures are not increasing.
The climate models say the direct forcing should be about +1.7 Watts/m2 (today), and there should be indirect feedback forcing of another +2.2 Watts/m2 (water vapour and ice-albedo).
So, we are missing some 3.6 Watts/m2 of energy – obviously it is leaving the Earth system rather than increasing the average energy level/temperature of the Earth.
kdkd should check the statistical significance of that.
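Taking the comment’s figures at face value, the bookkeeping itself is straightforward to verify (this checks only the arithmetic, not the underlying estimates):

```python
# Heat-uptake estimates quoted in the comment above, all in W/m^2
argo_0_700m = 0.16   # Willis: ARGO, 0-700 m
deep_ocean = 0.095   # Johnson and Purkey: below 700 m
non_ocean = 0.071    # Hansen: land (0.024) plus ice sheets (0.047)

observed_uptake = argo_0_700m + deep_ocean + non_ocean  # 0.326 W/m^2

claimed_forcing = 1.7 + 2.2   # direct forcing plus asserted feedbacks

missing = claimed_forcing - observed_uptake
print(round(observed_uptake, 3))  # 0.326
print(round(missing, 2))          # 3.57, i.e. the "some 3.6 W/m^2"
```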
Pingback: The Blackboard » Ocean Heat Content Kerfuffle
The quarterly Levitus data are anomalies: the annual cycle has been removed. It’s perfectly easy to use ARIMA to deal with the autocorrelation. That avoids hoping that the error from using annual-average data (which widens error bars) and the error from neglecting autocorrelation in the annual-average data (which narrows them) happen to just counterbalance.
This method may be quick. Unfortunately, it’s dirty to the point of being wrong. It’s well known that two samples with overlapping 95% confidence intervals may still differ in a statistically significant way. If the standard errors of two samples are equal, the intervals can overlap quite a bit and the difference can still be statistically significant.
This is a silly thing to say in context. Under the assumptions you used (including iid noise), your result shows that the trend since 2003 cannot be used to reject the null hypothesis of no trend (i.e., m=0). But you seem to be afflicted with the common but misguided notion that being unable to reject the null of m=0 means you can’t test some other null and reject it. This is incorrect. If someone has predicted m=-5*10^40 Joules/year, that null could be tested and rejected. You could also reject 5*10^40 Joules/year. There is no statistical rule saying you can’t test any hypothesis until after you can reject ‘zero’.
kdkd, where did you learn your statistics, climatology school? Your “analysis” of the data AND your interpretation of it are pretty much nonsense.
Try running the following script in R:
The regression takes all of the sea level values from 1993 forward and fits two predictor variables, time1993 which is the ordinary time variable and time2003 which is zero if the time is 2003 or earlier and the number of years since 2003 if the time is later. This simultaneously fits both of the segments that you fit in your link, but were unable to make a proper statistical comparison of because you did the two fits separately.
The coefficient for time2003 represents the amount of change in the slope at 2003.
Notice that the slope changes by -0.435 at this point, from 0.770 to 0.335, and the p-value for the time2003 coefficient is 0.00135, which is significant at the 0.01 level. Your conclusions appear to be pretty much out to lunch.
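RomanM’s R script isn’t reproduced above, but the technique he describes (a single regression with an extra predictor that measures the post-2003 slope change) can be sketched as follows. The series below is hypothetical, built with exactly the slopes he reports, purely to show that the time2003-style coefficient recovers the slope change:

```python
def changepoint_fit(times, y, breakpoint=2003.0):
    """Fit y = c + b1*(t - 1993) + b2*max(t - breakpoint, 0) by least
    squares; b2 is the change in slope at the breakpoint (RomanM's
    time2003 coefficient). Solves the 3x3 normal equations directly."""
    x1 = [t - 1993.0 for t in times]
    x2 = [max(t - breakpoint, 0.0) for t in times]
    cols = [[1.0] * len(times), x1, x2]
    # Normal equations: (X^T X) beta = X^T y
    a = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(3)]
         for i in range(3)]
    v = [sum(ci * yi for ci, yi in zip(cols[i], y)) for i in range(3)]
    # Gaussian elimination, then back substitution
    for i in range(3):
        for j in range(i + 1, 3):
            f = a[j][i] / a[i][i]
            a[j] = [ajk - f * aik for ajk, aik in zip(a[j], a[i])]
            v[j] -= f * v[i]
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        beta[i] = (v[i] - sum(a[i][j] * beta[j]
                              for j in range(i + 1, 3))) / a[i][i]
    return beta  # [intercept, pre-break slope, slope change at break]

# Hypothetical series: slope 0.77 up to 2003, then 0.33 afterwards
times = list(range(1993, 2011))
y, level = [], 0.0
for t in times:
    y.append(level)
    level += 0.77 if t < 2003 else 0.33

beta = changepoint_fit(times, y)
print([round(v, 2) for v in beta])  # roughly [0.0, 0.77, -0.44]
```

Fitting both segments in one model, as RomanM argues, is what allows a proper significance test on the slope-change coefficient rather than eyeballing two separate fits.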
kdkd: “…I’ll start taking you guys seriously”
I’ll try not to be rude, this is Tisdale’s blog and not Tamino’s, after all:
Right now, you’re the one with a credibility problem here, so a little less arrogance would be appropriate, I think.
(- and thanks to Lucia and RomanM for chiming in; my experience with time series is somewhat limited, so I tried to limit my objections to where I know I’m on firm ground)
Btw someone should have a close look at that von Schuckmann et al 2011 paper – the numbers that Hansen cites for 2005-2010 look strange.
kdkd says: “Yes. If that trend continues through beyond the solar minimum there is certainly something going on. ”
If one wanted to blame the recent flattening of OHC on the solar minimum, wouldn’t one first need to illustrate the solar signal in the OHC data?
kdkd says: “And while Bob’s busy not rejecting Hansen’s work…”
Actually I had discussed the paper a few weeks ago:
I presented it as a reference only for the use of 6-year trends.
Yes, the quick and dirty way to do it would be with a linear model:
OHC = f(CO2, Solar)
You use the whole data set, not just the last 5 years. Then check the residuals for the last 5 years, and compare them to the residuals just using CO2 and just using Solar. Point me to the data set that contains solar forcing, and I’ll do it.
kdkd says: “Point me to the data set that contains solar forcing, and I’ll do it.”
You’re welcome to try. Since the SST anomaly lag from variations in TSI varies from months to decades, it’ll be interesting to see what you find the lag is for OHC.
Here’s a link to the GISS forcings:
As you’ll note they end in 2003. So you’d have to splice on TSI data or sunspot numbers.
kdkd: Correction to my latest reply. The clause, “Since the SST anomaly lag from variations in TSI varies from months to decades…” should read, “Since the SST anomaly lag from variations in TSI varies from months to decades depending on the study…”
I’m a behavioural scientist, not a natural scientist, and my work has been exclusively outside the area of time series. So wrong (as in violation of assumptions) is correct. I’m not sure what you’re trying to achieve, though. It’s pretty clear from visual inspection that the slope post-2003 is quite different from the slope before. The question is whether, in context, this is meaningful. The solar minimum and/or a change in instrumentation are both compelling explanations for this. The denier assertion that “Global Warming has Stopped” seems a much shakier proposition to justify.
Good enough is a different question. I’m also aware of the pitfalls of using confidence intervals in lieu of a better statistical test, but in practice here, where power is so low, that’s unlikely to make a huge difference to the findings, except in the case where an overlapping confidence interval hides a statistically significant effect, in which case we can be fairly sure that the effect size is low.
1. Where does sea level come into this? I looked at the ocean heat data, as provided to me. Is this a typo? Or changing the subject?
1. I bet you can find similar magnitudes in slope change of the OHC data elsewhere in the series.
2. I see no comments on whether the line of best fit as provided by Bob (with the arbitrary shift in intercept to go through the data point with the highest residual) predicts better than the naive fit.
3. I see no justification for placing the line of best fit as intersecting the most extreme residual.
4. I see no attempt to compare whether Tamino’s line of best fit extending the previous line provides a statistically significantly different prediction than Bob’s line with the shift in intercept. Remember, the topic is whether Bob’s analysis is meaningful in context, not whether the slope varies at different points along the time series, which it clearly does, at various points.
It seems to me you should have avoided claiming that statistics show the slope post-2003 is not different from the slope before. That you did make this false claim and now seem to say what you claimed was clearly false is likely to make people wonder what you are trying to achieve and what tactics you think are permissible to achieve your goal.
Both the question of whether or not the trend has changed and the question of what it means if it has are meaningful. If the trend has changed (and it appears you now think it clearly has), then you don’t get to claim it did not merely because you worry that someone somewhere might interpret the observation to mean that “Global Warming has Stopped”. What you can legitimately do is do the work to attribute the slow down to something other than Global Warming having stopped. So do it. (If you can.)
It’s not legitimate to decree that people are required to ignore your mistaken claims based on incorrect application of statistics by insisting that they remember what you think “the topic” ought to be. (Especially since you aren’t even the blog author. You are a visitor in comments!)
If you post comments lecturing people on how to do statistics and then proceed to do things incorrectly, people are going to point out that your analysis and claims are full of errors. Might people discussing your errors derail the conversation from the track you would like it to follow? Sure. Your remedy is to try to stop trying to support your argument with clearly incorrect statistical arguments.
You need to do a bit of self tutoring on the meaning of statistical power and how that interacts with your previous mistake. The mistake you make when you decree that two means are not statistically different if the confidence intervals overlap is to increase type I error above the level specified (always a no-no) while simultaneously reducing power (i.e. increasing type II error.) Justifying reducing power on the basis that power is already low is silly.
in which case we can be fairly sure that the effect size is low.
This is just wrong. If you get statistically significant results with very little data, the effect size is usually high. The reason is that if the effect size were small, when the amount of data is limited the power would be very low and the difference being tested would have a low probability of being detected. In contrast, if the effect size is high you can sometimes detect things with very little data.
You can easily convince yourself of this by either: a) reading some undergraduate textbooks discussing power, or b) setting up some test with synthetic data and running Monte Carlo to see how power interacts with effect size.
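Option (b) takes only a few lines. A sketch of such a Monte Carlo (synthetic 8-point series, illustrative only):

```python
import random

def detection_rate(effect, n=8, noise=1.0, trials=2000, t_crit=2.447):
    """Monte Carlo power estimate: the fraction of synthetic n-point
    series with true slope `effect` whose OLS slope is significant at
    p = 0.05 (t_crit is the two-tailed value for df = n - 2 = 6)."""
    random.seed(0)  # fixed seed so the estimate is repeatable
    x = list(range(n))
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    hits = 0
    for _ in range(trials):
        y = [effect * xi + random.gauss(0.0, noise) for xi in x]
        ybar = sum(y) / n
        b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
        a = ybar - b * xbar
        s2 = sum((yi - (a + b * xi)) ** 2
                 for xi, yi in zip(x, y)) / (n - 2)
        t = b / (s2 / sxx) ** 0.5
        if abs(t) > t_crit:
            hits += 1
    return hits / trials

# Large effects are detectable even with 8 points; small ones rarely are
print(detection_rate(1.0))   # high power
print(detection_rate(0.05))  # near the 5% false-positive floor
```

This bears out Lucia’s point: with only 8 points, the trends that do clear the significance bar are the large ones.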
Well I had a reply to lucia, but it got eaten in a technical glitch.
She’s clearly not reading carefully. I’m not claiming that “the slope post 2003 is not different from the slope before.”
The other main point I wanted to make is that the power of this analysis is very low. One way of looking at this is considering the technique that RomanM used to look for changes in gradient, there are a number of places on the series where a change in gradient of a similar absolute magnitude and duration can be observed.
It is difficult to follow your arguments. You now state: ” I’m not claiming that “the slope post 2003 is not different from the slope before.”
However, in the link you provide, you state: “Note that Both terms overlap the confidence intervals for all data which is a quick and dirty way of showing that there’s likely no statistically significant difference between the slope and intercept for 1993.”
I do not see how you can reconcile these two statements.
You admit to not being a time-series expert and yet you adopt an insulting attitude (“How to Lie with Statistics”, “I’ll start taking you guys seriously”). It would probably be best to adopt a little humility while you are being schooled by other professionals (who include a professor of statistics). In that case you would protect some of your own credibility and you would be less harmed by foolish comments like “Using quarterly data would not make sense due to the planet’s annual cycle” which makes it appear that you have no experience at all in climate time-series analysis.
To put this in terms you can understand: right now you look like Daryll Cullinan facing Shane Warne on a crumbling track on day 5.
I won’t bother addressing some parts of your comment because Lucia has already done a good job on indicating some of the shortcomings with your “analysis”. However, from a technical viewpoint, fitting the lines separately is an inefficient use of the information in the data with regard to uncertainty estimates and severely reduces the ability to detect smaller changes.
With regard to your five “points”:
I apologize if I caused you confusion with my reference to “sea level”. I had been thinking of OHC informally as “sea heat level” when somewhat hastily writing the R script and neglected to switch gears when I posted the comment. Since you supposedly had the data yourself, you could have easily verified that in fact I was working with the OHC data set, not some actual “sea level” data.
Whether you can find “similar magnitudes in slope change of the OHC data elsewhere in the series” is not the issue. I was replying to your criticisms of Bob’s work where you had supposedly shown that
The results I posted indicate that the first point is patently false if you actually use all of the information available instead of fallacious armwaving. It is reasonably clear that the difference in the trends cannot be attributed completely to the random variation of the data.
Your second point is pure nonsense because whether the post 1993 trend is or is not equal to zero is not at issue nor does a zero trend obviate the use of the data. The conclusion drawn by you further indicates that your grasp of statistics is at best naive.
By the way, the “optimum” choice for a changepoint in the trend for the data turns out to be the third quarter of 2004. The pre-2004.5 trend is 0.757 with a change of -0.563 (making the post-2004.5 trend equal to 0.194). If you can’t show this yourself, I will gladly post a script to show this.
I don’t see that Bob claims to be predicting anything. His graph merely demonstrates how, starting at 2003, the observed values of heat content successively differ from what they would be if the increase were to continue at exactly the previously estimated rate.
I would not do it this way, nor would I necessarily use the annual data either when the quarterly data is available and more informative. This graph is what I would think is a reasonable presentation although one could then extend the left hand segment to illustrate what Bob was trying to show.
If you look at the graph that I link to above, you would be hard put to find a lot of places where “the slope varies at different points along the time series, which it clearly does, at various points.” I also don’t understand the preoccupation with the “shift in intercept”.
Also, in this particular case, the intercept is not a physically interpretable quantity. It is an artefact of fitting a line to data where the predictor is a date. The only way that it might be meaningful would be if the actual OHC at time zero was a relevant issue.
Now, if you want to look at a good example of the use of Tamino’s simple extrapolation:
you need go no further than looking at the RC graph at the top of this post.
When describing the graph, Gavin writes at RC:
Looks good to you, doesn’t it? Now, notice how the OHC, GISS-ER and the ensemble mean all track each other so very closely (due in probably major part to unspecified methods of alignment) until 2003. At that point, OHC jumps, GISS-ER jumps a greater amount… but we are supposed to believe that the ensemble mean (whose data does not seem to exist) will magically continue its march at the same old pace until it accidentally meets the OHC coming down. Nice “prediction by extrapolation”, Gavin.
Finally, let me give you a further piece of advice. You evidently think that it is reasonable to disparage those who disagree with you by calling them deniers. I personally have developed a low level of tolerance for this type of ignorant and abusive behaviour, so although I try to keep the discussion on an intellectual level, it sometimes becomes more difficult to keep from stooping to such childishness myself. My advice is to limit this type of language to those blogs such as Tamino’s or RC where such denigration of others is the rule rather than an unfortunate exception. If you do, you might find the conversation more congenial … and possibly learn something useful in the process.
Roman, did you mean 2003 rather than 1993?
Oooops, yes, LL. Thanks.
I tried to catch all my typos, but sometimes I miss. There are some grammatical errors in there as well, although I re-read the stuff several times.
FWIW, here are the confidence intervals I calculated based on 2003 and after quarterly data.
From what I can ascertain, kdkd’s primary objection to using quarterly data was that we needed to correct for the annual cycle and autocorrelation. As Lucia mentioned, the annual cycle is already removed (given that these are anomalies), and the auto-correlation is corrected for in 2 of the 3 methods below.
Simple OLS: [-.055, .209]
GLS (AC corrected): [-.090, .293]
Monte Carlo (with AR(1) noise): [-.064, .216]
Script can be found here
The statement “This shows that the regression slope and intercept are not significantly different from zero. This shows that the linear model from 2003 and beyond does not predict better than chance.” found in kdkd’s script appears to be simply confused (as Roman mentioned above). All it means is that we don’t exclude the possibility of a 0 slope, or even a slightly negative trend, with 95% confidence — which is to be expected, given the small trend for 2003 and on.
This does not account for errors in the observations, or in the models. But it does suggest that based on this dataset, the 95% confidence interval does not include the slope of 0.7.
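For anyone wanting the general shape of the Monte Carlo method listed above without the linked script: fit OLS to the series, estimate the lag-1 autocorrelation of the residuals, then ask how widely the slopes of simulated trendless AR(1) noise scatter. A rough Python sketch on made-up numbers follows; the series, seed, and parameters are illustrative, and the linked R script remains the authoritative version of the actual calculation.

```python
import numpy as np

rng = np.random.default_rng(2)

def ols_slope(y):
    t = np.arange(y.size)
    return np.polyfit(t, y, 1)[0]

def ar1(n, phi, scale, rng):
    """Generate n points of AR(1) noise with lag-1 coefficient phi."""
    e = np.zeros(n)
    for i in range(1, n):
        e[i] = phi * e[i - 1] + rng.normal(scale=scale)
    return e

# illustrative "observed" quarterly series: small trend plus AR(1) noise
n, true_slope = 32, 0.02
obs = true_slope * np.arange(n) + ar1(n, phi=0.5, scale=0.1, rng=rng)

# fit OLS, then estimate the noise model from the residuals
t = np.arange(n)
b, a = np.polyfit(t, obs, 1)
resid = obs - (b * t + a)
phi_hat = np.corrcoef(resid[:-1], resid[1:])[0, 1]     # lag-1 autocorrelation
s_hat = np.std(resid[1:] - phi_hat * resid[:-1], ddof=1)

# Monte Carlo: spread of OLS slopes fitted to trendless AR(1) noise
sims = [ols_slope(ar1(n, phi_hat, s_hat, rng)) for _ in range(2000)]
lo, hi = np.percentile(sims, [2.5, 97.5])
print(f"slope {b:.3f}, 95% Monte Carlo interval [{b + lo:.3f}, {b + hi:.3f}]")
```

The AR(1) noise widens the interval relative to a naive OLS interval, which is why the Monte Carlo and GLS bounds above are wider than the simple OLS ones.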
Were I you, I would just ignore him.
Responding only dignifies “The Bulldog”, and does not influence his fanatically warmist acolytes. You waste your breath.
Ignore the cur.
Oh, ignore ‘kdkd’ as well.
Only half his/her brain is functioning. A dreadful mix of valid comments & sheer nonsense. 50:50? That’s generous!
Responses to various:
My point was that Bob’s original analysis is misleading and wrong. Way more wrong and subjective than what I did. There is especially no justification to shift the intercept arbitrarily to go through the point with the highest residual. I haven’t seen anything to justify this at all, and it seems to be something that you’re all ignoring. Presumably because those of you who are statistically literate know it’s wrong.
Additionally, it’s hard to demonstrate that post-2003 is any different to other points within the series – the likelihood that there are apparent shifts in slope of similar magnitude and duration elsewhere in the series is quite high. Especially if you use the quarterly data.
Finally, I didn’t exclude the post 2003 data in my analysis when I concluded that the latter data does not significantly change the slope – that was regression with post 2003 excluded compared to without post-2003 excluded. An imperfect methodology, but indicative of an effect that would be large enough to establish the validity of Bob’s conclusions (which it didn’t). Again I’ve seen lots that criticises my methodology (understandably, both because I took shortcuts, and because it’s a denier blog), but nothing that shows the validity of Bob’s methods.
I can’t accept that post 2003 should be treated as independent from the pre-2003 data unless shown compelling evidence to the contrary, because there are other similar changes in the series, and post 2003 is not a unique event.
kdkd says: “My point was that Bob’s original analysis is misleading and wrong.”
There’s nothing misleading or wrong with this graph:
It is a trend comparison. I’ve discussed how I determined the model projection trend of 0.7*10^22 Joules per year, and I showed that trend. I did not make any claim about the intercept. It’s your assumptions about the presentation that make you believe it’s misleading. I’ve illustrated it different ways in the post and the result is the same. The linear trend of the projection is about 14 times higher than the observations.
I will echo what RomanM said above, “I also don’t understand the preoccupation with the ‘shift in intercept’. “
I’m not ignoring this. At my blog, I posted a whole blog post in which I discussed the main problem with this post. It’s that the “projection” shown in red is not a projection. So, the test doesn’t tell us much because we don’t know what model E runs really projected. Over at my blog, Bob, Chad, SteveMosher and I have been discussing the need for real projections, and Chad has downloaded data from PCMDI so that Bob could have the information required to place the ModelE data correctly. Since Bob’s already involved in that conversation, I don’t feel much need to repeat myself here. FWIW: Bob has agreed he’d love to have real OHC projections so he can do this right.
But you are here posting statistical nonsense. So, I came here to point out that your ‘analyses’ are tosh.
After that poor response, I withdraw my comparison of you to Daryll Cullinan facing Shane Warne on a crumbling track. You are more like Sultan Zarawani facing Alan Donald. Without a helmet.
(Apologies to non-cricketers here, but Mr. kdkd was the first to bring up the cricketing analogies so he is fair game. For anyone wanting to understand my cryptic reference to a piece of cricketing folklore, you can look here: http://www.tmsb-exiles.org/forum/index.php?topic=5264.0 I think the analogy is quite apt).
kdkd: To expand on Lucia’s reply, when I have access to the GISS Model-EH and -ER projections, I will present a comparison graph of the observations and the model mean of both models for the full term of the data (1955-2010), using the base years of 1955 to 2010. That way I can’t be accused of cherry picking the base years, too. Where the model means intersect with the 2003 to 2010 data will be how I present those short-term graphs. I suspect the title of the post will be “Much Ado About Nothing”.
Don’t speculate. Show us some examples of “similar magnitude and duration”.
“An imperfect methodology”? What part of the word “garbage” do you not understand? You used an invalid comparison method which would produce a failing grade in a baby stat course, misinterpreted the results badly because you didn’t understand what you were doing and then failed to “establish the validity of Bob’s conclusions ” with such nonsense. WTF are you smoking?
You were criticised for this? Quelle surprise! Your response however is to again throw out an abusive epithet to confirm that knowledge of statistics seems not to be the only intellectual shortcoming that you possess. What an [self-snip]!
There is especially no justification to shift the intercept arbitrarily to go through the point with the highest residual. I haven’t seen anything to justify this at all, and it seems to be something that you’re all ignoring. Presumably because those of you who are statistically literate know it’s wrong.
No, they’re being polite. As Roman said, “in this particular case, the intercept is not a physically interpretable quantity.” Tamino’s criticism dealt with the shift in intercept of the GISS model projections…a criticism which I believe may have some merit. However, you seem to have gotten confused and picked up the charge attacking the “shift in intercept” of the OLS fit to the OHC data, which is simply a result of regressing against different time periods and has nothing to do with anything.
Additionally, it’s hard to demonstrate that post-2003 is any different to other points within the series – the likelihood that there are apparent shifts in slope of similar magnitude and duration elsewhere in the series is quite high.
Roman has shown that the optimal changepoint would actually be in 2004.5. I believe this would show a larger disparity between Bob’s GISS projection trend and the OLS trend fit to modern year OHC, not a smaller one.
Again I’ve seen lots that criticises my methodology (understandably, both because I took shortcuts, and because it’s a denier blog), but nothing that shows the validity of Bob’s methods.
Yes, you came out guns a-blazing, mentioned your “substantial experience with doing OLS type analysis”, and then made a variety of incorrect statements in your criticisms that were easy to rebut. Given that your criticisms have fallen flat, you now claim you want us to “show the validity of Bob’s conclusions”.
If I simply took his graph to mean that given this dataset, there is a statistically significant difference in the trend between the post 2003 data and a slope of 0.7, I would say the conclusion is valid.
It is quite possible to argue over whether a slope of 0.7 is indicative of the projected GISS OHC trend during this period, or whether this is the ideal dataset to use, or whether there are significant errors in the measurements. But you didn’t argue that…instead, you made claims suggesting that if an OLS fitted slope does not exclude 0 as a possible slope from its 95% confidence interval, that calculated trend is somehow invalid.
Nope, bob arbitrarily shifted the intercept to go through the point with the largest residual.
No. I suggested that because this 2003 to 2010 period regression does not predict better than chance that there’s no way for Bob’s methodology to be valid, as in isolation there’s no trend to predict from. As the statistical significance of correlations is proportional to sample size, as you will know, with enough data, every correlation will become statistically significant.
I was specifically objecting to the removal of context (prior data points) and using subjective methodology that smacks of confirmation bias.
It’s fine to critique the flaws in my methodology, but I’m unhappy that you’re deliberately misrepresenting what I was trying to say.
Nope, bob arbitrarily shifted the intercept to go through the point with the largest residual.
You are not clarifying…which intercept are you accusing Bob of shifting? From previous comments, it sounded like the OLS fit on OHC data. If so, please explain what you believe the intercept should be for a linear fit of 2003-present data, what Bob’s was, and why this is important. If the GISS projection line, as I mentioned above, and Lucia has brought up, that’s a separate issue.
No. I suggested that because this 2003 to 2010 period regression does not predict better than chance that there’s no way for Bob’s methodology to be valid, as in isolation there’s no trend to predict from. As the statistical significance of correlations is proportional to sample size, as you will know, with enough data, every correlation will become statistically significant.
Predict *what* better than chance? Do you mean that “time” in this case has little to no explanatory power WRT OHC post-2003? Well of course that would be the case with a small slope! Take the example where the slope is 0. If the trend is “0”, you are not going to achieve (and *shouldn’t* achieve) an r value that suggests a statistically significant relationship between time and value, because there ISN’T a relationship between the two. So you can call the model “invalid”, but that doesn’t change the fact that the trend is 0.
Try the following code, which creates 1000 white noise points with no underlying trend. Your confidence interval will not exclude 0, so you won’t achieve a “significant” relationship between time and the variable. Does this mean we can never say that the trend is 0?
years <- 1:1000; data <- rnorm(1000)  # 1000 white-noise points, no trend
data.lm <- lm(data ~ years); confint(data.lm)
You’re not looking for “statistical significance of correlations” when determining the trend…you should be looking at the confidence intervals to see the range of possible slopes.
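For readers without R, here is a rough Python equivalent of the white-noise experiment described above, including the slope confidence interval; the variable names and seed are mine, not from the thread's scripts.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1000
years = np.arange(n)
data = rng.normal(size=n)              # pure white noise: the true slope is 0

# OLS fit, then a 95% confidence interval for the slope
res = stats.linregress(years, data)
half = stats.t.ppf(0.975, n - 2) * res.stderr
lo, hi = res.slope - half, res.slope + half
print(f"slope = {res.slope:.5f}, 95% CI = [{lo:.5f}, {hi:.5f}]")
# The interval almost always straddles zero. That does not make the fit
# "invalid"; it means the data are consistent with a trend of zero.
```

The point being illustrated: an insignificant slope is a statement about the width of the interval around the trend, not a verdict that the regression "does not predict better than chance" in any disqualifying sense.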
Yes, I understand your basic criticism is that you believe Bob is cherry-picking his start date. It sure sounded like you were trying to prove this by suggesting that because the 2003-now trend is statistically insignificant (i.e., not significantly different than 0), his method was somehow invalid:
“This shows that the regression slope and intercept are not significantly different from zero. This shows that the linear model from 2003 and beyond does not predict better than chance.”
Obviously, suggesting such a thing is nonsense. However, if I’ve misrepresented, perhaps you could clarify.
There’s no compelling argument to show that post 2003 should be treated as independent of pre-2003 data, without arbitrarily treating other parts of the series as also independent of each other. Or at least, to show that 2003 should be treated as independent, you have to demonstrate how it is clearly different from other similar sections of the time series. The fact that it’s at an end point is not relevant, except as a caution to guard against confirmation bias by taking due care.
And if you put 10,000 data points in your model, the chance of it being “statistically significant” is quite high (yes, I tested it to make sure). These correlational methods are quite low power for small samples, and can produce misleading results at high sample sizes. However, for real data with a large N (e.g. around 100+, or maybe 30+ where we can be confident about the assumptions being met), a slope of around zero is good evidence for no trend.
However related to Bob’s analysis, a slope indistinguishable from zero is strong evidence for “not enough data to draw conclusions using that methodology”.
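The sample-size effect being argued over here is easy to demonstrate directly: hold a small but genuine trend fixed and watch the p-value collapse as n grows. A quick Python sketch with illustrative numbers (not the OHC data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_slope, sigma = 0.01, 1.0      # a small but genuine trend buried in noise

pvals = {}
for n in (30, 300, 3000):
    t = np.arange(n)
    y = true_slope * t + rng.normal(scale=sigma, size=n)
    res = stats.linregress(t, y)
    pvals[n] = res.pvalue
    print(f"n={n:5d}  slope={res.slope:+.4f}  p={res.pvalue:.3g}")
# The same underlying trend goes from (usually) "insignificant" to
# overwhelmingly significant purely because n grew.
```

Significance here measures detectability, not the size or reality of the trend, which is why "the slope is not significantly different from zero" and "the model is invalid" are different claims.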
Actually in this instance with a post 2003 model, I am correct. The F statistic is not significant indicating that the model in isolation is not valid.
I don’t need to comment about the statistical issues here, but I will give you a bit of advice. Abusive epithets never help. You have a great deal to learn about presenting a constructive argument in a more-or-less public place. How exactly do you think phrases like “loony greens”, or “crazy alarmists” would go over at Real Climate?
You explained above that you are not a physical scientist. So I ask you to consider that many accomplished physical scientists have carefully considered doubts about the predicted magnitude of future GHG driven warming, and the predicted consequences of that warming. I am certain that many of these people are more capable than you of evaluating the quality of the technical arguments that are advanced in support of extreme future warming and the quality of the data that is used to support those arguments. A bit of humility in place of epithets would serve you well.
By using the word “residual” it is obvious you are confusing the GISS model trend projection with the 1993 – 2002 OHC data trendline which Tamino used. Bob has not shifted the OHC trendline to “the point with the largest residual.” He has compared the actual OHC heat accumulation with the GISS model trend projection starting at 2003. The heat accumulation at T(zero)=2003 is zero for both sides of the comparison.
Bob discusses Tamino’s confusing switch from GISS projection to OHC trend in his post above:
kdkd says: “…because it’s a denier blog.”
Since you insist on categories, I’m a lukewarmer. It’s my understanding that Lucia is also. I don’t know about the others discussing this with you. For some, their replies to you on this thread are their first comments here.
1. On residuals
This is splitting hairs, and completely missing the point, or misunderstanding statistical theory, or both. He’s still projecting the trend line starting at the point with the highest deviation between prediction and actuality. Whatever way you spin it, that smells badly of confirmation bias. Projecting the trend line from the mean residual (i.e. zero) is valid. Projecting it from a substantial number of standard deviations of residuals above the mean residual is self-evidently a violation of the assumptions of regression.
2. Still no justification as to why to treat the terminal part of the series as a special case, and not examine elsewhere in the series that the trend may have changed by a significant magnitude.
3. On ideology
Since I came across this post via a denier blog with a notoriously poor idea of free exchange of ideas (to wit, Marohassy’s appalling performance on Q&A on the Australian ABC several months ago), you’ll excuse me for not taking this statement at face value.
4. More on ideology.
And I’m sick of treating denier arguments as genuine. A contrarian attitude is fine. I’m very concerned about the failure to guard against confirmation bias in your argument.
So could you please show where the quality peer reviewed literature clearly demonstrates that these “considered doubts” are supportable. While you’re at it can you explain why current observations are tracking the upper end of the IPCC projections?
kdkd: I see you’re still complaining about the 2003 start for the short-term comparisons. If you haven’t read the post above, I explained the reasons for 2003. Start at the heading of 2003 DIDN’T ALWAYS PROVIDE THE LOWEST TREND FOR A SHORT-TERM OHC GRAPH. The discussion continues in the next topic ROGER PIELKE, SR’s LITMUS TEST FOR GLOBAL WARMING.
You choose not to accept the reality of the start year. That’s your choice, but your belaboring the point has grown tiresome.
The start year shouldn’t really make any difference. The fact that you insist that it must be 2003 strongly suggests that your argument is dependent on a cherry pick.
kdkd said: “3. On ideology
Since you insist on categories, I’m a lukewarmer.
Since I came across this post via a denier blog with a notoriously poor idea of free exchange of ideas (to wit, Marohassy’s appalling performance on Q&A on the Australian ABC several months ago), you’ll excuse me for not taking this statement at face value.”
I can only respond with: “HUH!!!” “Denier” and “Marohassy” are giveaways to the hubris under which you are operating. Are you Luke (being good) in disguise?
Very, very tiresome indeed!
kdkd says: “The start year shouldn’t really make any difference. The fact that you insist that it must be 2003 strongly suggests that your argument is dependent on a cherry pick.”
Thanks for the laugh, kdkd. You know exactly what portion of your May 18, 2011 at 5:27 am comment I was responding to. If not, I’ll give you an instant replay. You wrote, “I was specifically objecting to the removal of context (prior data points) and using subjective methodology that smacks of confirmation bias.”
Have a good day.
I give up. You seem to be deliberately trying to avoid understanding what’s wrong with your argument. Unless you can demonstrate objectively there’s some reason to treat the post 2003 data as if it’s independent of the rest of the series, then what you’ve done is not justifiable.
But that’s ok, horses, water and drinking etc.
“While you’re at it can you explain why current observations are tracking the upper end of the IPCC projections?”
Please provide some references for this bold claim. On the contrary, the analyses I have seen (for example at Lucia’s blog) suggest that realized anomalies for global temperatures are about two standard deviations below IPCC projections. See here for example: http://rankexploits.com/musings/2011/hadley-march-anomaly-0-318c-up/
If you insist on peer-reviewed studies, troposphere temperature trends (sometimes called a fingerprint of global warming) appear to be significantly lower than those projected by models: http://onlinelibrary.wiley.com/doi/10.1002/asl.290/abstract
I’m afraid that with your reliance on cheap insults and unsubstantiated claims, you are just losing more credibility.
“And I’m sick of treating denier arguments as genuine. A contrarian attitude is fine.”
You really do not get that insulting people who disagree with you is counterproductive.
It seems to me you are confusing an honest scientific skepticism with some kind of “denial”. The current IPCC projected range for equilibrium warming is ~2C to ~4.5C for 3.71 watts/M^2 of added GHG forcing (doubling of CO2 or equivalent). After reviewing a lot of publications and data, and evaluating what among all that seems most credible, my best guess is that the equilibrium warming is somewhere between 1.5C and 2C for 3.71 watts/M^2 forcing, with the most likely value being ~1.8C. Does this make me a ‘denier’ according to your definition? If so, what exactly do you think I am in denial of?
kdkd says: “I give up. You seem to be deliberately trying to avoid understanding what’s wrong with your argument. Unless you can demonstrate objectively there’s some reason to treat the post 2003 data as if it’s independent of the rest of the series, then what you’ve done is not justifiable.”
The obvious eludes you, kdkd.
Do you recall the title of the post that included the graph that you and Tamino have taken exception to? The title was “ARGO-Era NODC Ocean Heat Content Data (0-700 Meters) Through December 2010”. That post was about ARGO-era Ocean Heat Content data. “ARGO-Era” even appears in the title of Figure 1 of this post. I explained that in this post, and your failure to understand that reinforces my belief that you have not read it.
To paraphrase John Grisham in “The Confession”, you’re blinded by your tunnel vision and your fear that others might be right.
Is there any reason for you to continue commenting here?
“However related to Bob’s analysis, a slope indistinguishable from zero is strong evidence for “not enough data to draw conclusions using that methodology”.
“This shows that the regression slope and intercept are not significantly different from zero. This shows that the linear model from 2003 and beyond does not predict better than chance.”
Actually in this instance with a post 2003 model, I am correct. The F statistic is not significant indicating that the model in isolation is not valid.
No, this is nonsense. I’ve explained why both on theoretical grounds (that a lower slope means a lower correlation between time and your variable) and by giving you a script to fiddle with. That you don’t understand this simple concept suggests that indeed, your “experience with time series is somewhat [very?] limited.” If you wish, I suggest you go to Tamino himself with your following quote:
“This shows that the regression slope and intercept are not significantly different from zero. This shows that the linear model from 2003 and beyond does not predict better than chance.”
and ask him if it makes any sense. He will likely answer that Bob cherry-picked 2003 (indeed, he made that claim in his post), but if he’s honest he will point out that your specific argument is nonsense. At that point, you can call him a denier as well.
Furthermore, you keep claiming
unless you can demonstrate objectively there’s some reason to treat the post 2003 data as if it’s independent of the rest of the series
Bob has given you his reasons. Roman has given you an objective reason why 2004.5 is a valid change-point. If you want to continue claiming that Bob cherry-picked 2003 because you agree with Tamino, that’s fine, but your statistical “methods” here are nonsense and have shown nothing of the sort.
Looks like in my above post the “cite” didn’t work out right. Everything in the following quote was from kdkd:
However related to Bob’s analysis, a slope indistinguishable from zero is strong evidence for “not enough data to draw conclusions using that methodology.
This shows that the regression slope and intercept are not significantly different from zero. This shows that the linear model from 2003 and beyond does not predict better than chance.
Actually in this instance with a post 2003 model, I am correct. The F statistic is not significant indicating that the model in isolation is not valid.
Bob and kdkd mentioned possible solar influence on OHC several comments back and I decided to check it out. I used sunspot numbers for solar data. After loading the OHC and SSN data, I converted the sunspot numbers from monthly to quarterly. Here is the scatterplot of OHC vs SSN and fitted slope. According to this, SSN has a significantly negative relationship with OHC. Is anyone else besides me surprised by this?
As for musings on the impact which removing the influence of the solar minimum would have on the post 2003 slope of the residuals, it is negative, not positive.
When I get time, I will double check my work and look into lagged relationships as well.
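For what it's worth, lagged relationships of the kind mentioned above can be screened by shifting one series against the other and recomputing the correlation at each lag. A minimal Python sketch on synthetic series follows; the data, seed, and the six-step lag are all made up for illustration, not the actual OHC or SSN series.

```python
import numpy as np

rng = np.random.default_rng(3)
n, lag_true = 200, 6
driver = rng.normal(size=n + lag_true)               # stand-in driver series
y = 0.8 * driver[:n] + rng.normal(scale=0.5, size=n) # responds with a lag
x = driver[lag_true:]                                # driver as observed

def lagged_corr(x, y, lag):
    """Correlation of y[t] with x[t - lag] (positive lag means x leads y)."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

lags = range(13)
corrs = [lagged_corr(x, y, k) for k in lags]
best = max(lags, key=lambda k: abs(corrs[k]))
print(f"strongest relationship at lag {best}: r = {corrs[best]:.2f}")
```

With autocorrelated series like OHC and SSN, spurious lagged correlations are common, so any peak found this way still needs a significance check that accounts for the autocorrelation.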
Layman Lurker says: “According to this, SSN has a significantly negative relationship with OHC. Is anyone else besides me surprised by this?”
Layman Lurker: Sorry for the brief answer. I got called away for a little while.
Hansen’s heat uptake graph from the recent draft of his paper was revealing. It’s actually a graph of running 6-year trends of global OHC. If we compare the global OHC heat uptake to Tropical Pacific heat uptake calculated the same way, we can see that global OHC uptake appears to be a function of Tropical Pacific OHC uptake.
Refer to the post:
We know that tropical Pacific OHC is a function of ENSO. So that’s the basis for my earlier reply.
Are you trying to convince us that you are a MBBD (Meaner, Bigger, Badder Bulldog) than Tamino? If so, IMO, your “mean” argumentative points won/loss ratio (comments agreeing with you / total number of comments disagreeing with you) is statistically zero. Maybe it’s time to consider picking up your bone and going home.
Yeah you can argue that the F statistic is misleading. However I’m inclined not to, as the sample size is so low (and substituting quarterly data is a big cheat).
However, 2003 is certainly a cherry pick. To demonstrate that there’s a real effect, Bob would need to demonstrate that there’s something special about this date, to wit that there’s something unique in the data set for the short-term post-2003 range of dates compared to other short-range changes during the time series. Treating the end of the series as a special case without looking at its context with the rest of the data is totally invalid. There’s a reasonably simple remedy for this, but it seems that you’re collectively reluctant to address it.
Anyway this is going around in circles, and I’m left with the usual impression of agreement with a sloppy argument, and fiddling around the edges to cast doubt on the criticism, but not dealing with the substance of the problem with the original argument.
There’s really no point in engaging with people who a priori reject the IPCC documents and process as radical alarmist documents, when it’s very clear that they’re extremely conservative and hobbled by political constraints. So I’m sick of this foray into the denier’s dens. Standards here are better than elsewhere, but it’s interesting the various ways that groupthink is maintained, from total irrationality and shared paranoia (elsewhere) to highly selective criticism of others’ methods while avoiding the main problems (here).
kdkd says, “Bye all.”
As a farewell note, I had read your May 14, 2011 at 9:24 am comment on Tamino’s post that was the subject of this thread.
The one in which you wrote the following, assumedly with respect to my post:
“clever or incompetent.
“Based on his exaltation of Excel as the arbiter of all linear slopes, I suspect incompetent.”
Based on it, I had considered deleting your comment when you first arrived here and noting why I had. I’m glad I didn’t. You have successfully highlighted for all who read this thread in the months and years to come your inability to grasp the topic of conversation (ARGO-era data), your need to use derogatory terms for those who oppose your point of view, which is a sign of the weaknesses of your error-filled arguments, and your willingness to dig yourself deeper into a hole with each reply.
Damn, I thought we had seen the last of him. If he wasn’t so completely incompetent in statistics, he would be dangerous. 🙂
Anyway, his penultimate (we can hope!) comment requires a response:
Never mind that he cannot fulfill my request to point out similar “changes” elsewhere in the data either because of his limited capability or because there aren’t so many.
Bob has explained pretty clearly that his post had to do with the way the GISS model related to the OHC as pictured in the RC graph shown in his head post. If kd had bothered to look at the original graph location, he would have been able to read the following [bold mine]:
Now even though I find it fascinating that someone can baseline something on two different criteria over non-overlapping time intervals without affecting the trends (or can they? – maybe our “expert” commenter kd can explain exactly how that was done), he would notice that in fact manipulation had taken place which apparently tied together the various processes during the 1993 to 2003 period. This would mean that the GISS model and the OHC had been matched together to be approximately equal at 2003.
Now, I would ask kd, just where would one reasonably start looking at a comparison for the behaviour of the model versus the OHC measurements? In 1993, where they seem to have been matched, or might you not begin at the point where they could possibly begin diverging – 2003? Now you see, I personally would consider that there was “something special about this date”, but hey, I seem to have been relegated to denier status, so WTF do I know. Presumably, that is why we have people like kd out there who are kind enough to correct the error of our ways due to our “total irrationality and shared paranoia.”
From the last phrase, kd, my guess is that you are probably as poor a psychologist as you are a statistician – and I should know what a good one looks like – I’ve been married to her for a long time.
Sayonara, kd. I don’t think your abusive drivel will be missed.
Amber Heald, thanks for the kind words. Unfortunately, I inadvertently deleted your comment.
Pingback: ARGO-Era Start Year: 2003 vs 2004 | Bob Tisdale – Climate Observations
Your argument only works if you ignore statistical significance. There are clearly many points on the OHC chart where the slope changes are of similar magnitude and duration (around 1960, 1970, late 80s and early 90s from visual inspection – feel free to confirm that formally). You’re special casing the terminal part of the graph for no good reason. Instrumental error is a fine hypothesis, but this technique will in no way resolve that question. And I still see no good treatment of the uncertainty of Bob’s revised slope versus the original extrapolation.
Anyway I can see how this type of dialogue helps people silo into their own communities that assist the development of groupthink and confirmation bias. The way I’ve handled this discussion doesn’t help, but I think the correct way to deal with denier arguments is a little more indirect than I have been, and more in the enough rope mode. Unfortunately quite a lot more work. We will see how the figures evolve over the coming years, which is the only real way to resolve this current impasse.
And now I am gone.
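The formal check kdkd suggests above (“feel free to confirm that formally”) can be sketched as a rolling-trend comparison. The sketch below is an illustration only: the series is made-up, OHC-like synthetic data (an assumption, not the NODC record). It computes the OLS slope over every 8-year window and reports where the terminal slope ranks among them.

```python
import numpy as np

def rolling_slopes(y, window):
    """OLS slope over every contiguous window of the series (units per step)."""
    x = np.arange(window)
    return np.array([np.polyfit(x, y[i:i + window], 1)[0]
                     for i in range(len(y) - window + 1)])

# Hypothetical stand-in for an annual OHC-like series: a steady rise plus noise.
rng = np.random.default_rng(0)
years = np.arange(1955, 2011)
series = 0.5 * (years - years[0]) + rng.normal(0, 1.5, years.size)

slopes = rolling_slopes(series, window=8)   # every 8-year trend in the series
terminal = slopes[-1]                       # the final window (e.g. 2003-2010)
# How unusual is the terminal slope relative to all other 8-year windows?
rank = (slopes < terminal).mean()
print(f"terminal 8-yr slope: {terminal:.2f}, percentile among windows: {100 * rank:.0f}%")
```

If the terminal slope sits well inside the distribution of all the other window slopes, the “nothing special about the end of the series” argument holds; if it sits in an extreme tail, it does not.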
kdkd: This follow-up post is for you:
kd, you’re back and still have nothing substantial to offer.
So you can’t do it yourself as I requested. Show me the money! Don’t ever claim anything that you can’t back up with proper evidence. It only serves to destroy your credibility.
“Instrumental error” is not a “hypothesis” having anything to do with this situation. You side-stepped addressing the specific reason that I gave for why 2003 is a perfectly legitimate reason for starting a comparison.
Then you were incapable of understanding the analysis that I presented to you originally in my first comment. Again, insufficient statistical understanding to back up your earlier arrogant claims of “substantial experience with doing OLS type analysis”.
As my mother once said, “If you don’t have the horses, you can’t pull the wagon”.
You should really take my advice and stop calling people demeaning names. Psychologists consider that to be a sign of immaturity.
No, I understand your analysis perfectly well. And if you do the analysis, you'll likely find similar magnitudes of slope change elsewhere in the series. That's actually quite evident from visual inspection.
It so happens that there are some graphics in this poster such that, if the post-2003 data were spliced onto the end, it would fall clearly within normal error for any of the lines of best fit that have been presented in this discussion.
In other words, let me be quite clear: special-casing the terminal part of the series, treating it as if it were independent of the earlier part of the series, is not valid and has very high error, as well as violating other assumptions (your original post did not do this, except in the instance of not looking at other similar changes on the graph). Pretending otherwise is a terminal problem with the argument people are trying to support on this blog.
It is funny that you regularly refer to confirmation bias in your rather ill-informed posts. RomanM speculates that you are a psychologist. If that is the case, you will know what projection is.
You clearly believed the faulty premise that “…current observations are tracking the upper end of the IPCC projections”. When it was pointed out to you that the opposite is true, you refused to accept the facts and simply said there was no point in engaging. You did not defend your erroneous position, you did not admit your error, and you clearly did not change your thinking. It is quite ironic that you call others deniers.
Yes, quite. I see denialists projecting all the time. No I’m not that kind of psychologist (or any kind of psychologist at all these days as it happens, and for the most part I’ve not been doing any serious statistics for a couple of years either, but I miss that work and intend to return to it in due course).
The Copenhagen Diagnosis clearly shows a number of indicators (not just quantitative metrics of temperature or heat, which is what the people here seem to have latched on to) tracking the upper bounds of the IPCC’s projections. A denialist blog post on what is effectively a single point estimate’s predicted accuracy from a noisy data set, and a methodology paper co-authored by a prominent denialist, are not strong evidence to base an argument on.
With the blog post, I’ll reject anything that relies on mere months of data to pose and answer climate questions. For the same reason I’m dubious about the sub-decadal time scales that Bob uses.
Regarding the Atmospheric Science paper, I lack the expertise to fully assess its validity, although I see it’s an analysis of tropical temperatures, which in anthropogenic climate change scenarios is clearly one of the least sensitive parts of the system, at least in the early stages (you have to think in decadal, centennial and millennial time scales to fully appreciate the nature and scale of the climate change problem, something else the deniers are omitting from their arguments). How about they replicate the same methods with arctic temperature and get back to us? And maybe use some paleoclimate measures for an approximation of an independent validation. Are you claiming this is some kind of smoking-gun paper? There’s a lot more validation required for that to be a supportable argument. My suspicion is that rather than a temporal cherry pick, which is the usual denier’s trick, this is a geographic cherry pick. But if there are subsequent papers that show independent validation of their findings, then I’m happy to be corrected.
Please deprive me of the oxygen of replying to my posts and I’ll be happy to leave you in peace.
kdkd says: “Please deprive me of the oxygen of replying to my posts and I’ll be happy to leave you in peace.”
There’s nothing that requires you to respond when others reply to you. It’s your choice. After your two previous promises to leave, you appear to be parroting dogma now anyway. In all likelihood, your recent preachings will be overlooked by most who read the comments here. Those who do bother to read them will probably be looking for an excuse to chuckle. So you may want to save your time, go back to Tamino’s, and claim victory over those you envision to be the forces of evil.
kdkd presents a fascinating case study in trolling. S/he has added nothing of value to the conversations. Most of what s/he provides is off base, and in her/his most recent example has no comparative value. S/he says: “It so happens that there are some graphics in this poster that if the post 2003 data was spliced onto the end it would be clearly normal error for any of the lines of best fit that have been presented in this discussion.”
Going to the reference, even the charts have no value in the discussion. Onto which chart would it make sense to “splice” the Argo data? The core of the discussion concerned “The GISS divergence problem: Ocean Heat Content,” using the Argo data starting in 2003.
Now if you are a “true believer” in AGW, then splicing different data sources does not present a problem and clearly tells the appropriate story, a hockey stick. When it does not, then we get into silly arguments about error bands and cherry picking to find even another “trick” to “hide the decline” in rate of change when compared to the models.
You’ve been proven wrong from your first entry, but you persist in throwing more dung against the wall in hopes that some might stick. None has as yet, and your latest is the most egregious attempt at obfuscating the issue.
I find your logic quite astounding. You denigrate people for “latching onto … quantitative metrics of temperature or heat.” We are talking global warming. Warming. If temperatures are not the key metric, what else is? You refer to the Copenhagen Diagnosis which points to metrics like ice-sheets and sea-ice melt. It is well known that these metrics are significantly impacted by other factors – in particular wind patterns. Not that global warming has no effect, but this is clearly a secondary metric compared to temperatures. If you were honest, you would accept that.
I am not sure what you mean by describing Lucia’s analysis as “effectively a single point estimate’s predicted accuracy from a noisy data set” and “mere months of data”. Lucia is more than capable of defending herself (as you will have seen from her evisceration of your earlier analysis), but her posting is a logical and straight-forward test of the trend based on ten years of data. You made the point of “current observations are tracking the upper end of the IPCC projections.” How can this be tested except with the data available since the date of the projections? If you don’t like Lucia’s test of the IPCC AR4 model projections, perhaps you can suggest an alternate methodology.
You should also note that Lucia is not claiming no warming. Luke-warmers tend to all agree that the data sets unequivocally show a warming trend, but that the sensitivity is at the lower end of IPCC ranges. You will of course call these people deniers.
You then state: “Regarding the Atmospheric Science paper, I lack the expertise to fully assess it’s validity, although I see it’s an analysis of tropical temperatures, which it’s clear in anthropogenic climate change scenarios one of the least sensitive parts of the system.”
This is hopeless. It is an analysis of the tropical troposphere. Troposphere. (Did you miss the troposphere part?) If you read section 9.2.2 of the WG1 report for IPCC AR4, you will see that a tropical tropospheric hot spot is a direct expected result of warming temperatures. This is sometimes called a fingerprint of global warming (including by the IPCC in chapter 9).
Last response from me. To summarize your technical contributions, you
1) Demanded that confidence intervals be calculated (pointlessly for intercept as well, but we’ll ignore that for now).
This was done, but all it showed was that the slope of .7 was easily excluded.
2) Added your most extensive argument, the script post. In there you claimed that a) “There’s no significant difference between the regression model with only 1993-2002 data compared to the model with 1993-2010 data”, and b) we can’t look at the post 2003 data “in isolation” because the trend is not significantly different from 0 [???].
Both points were shown to be complete and utter nonsense. It was recommended you even check with Tamino regarding if these statements had any validity.
3) You claimed we can’t use quarterly data because of the annual cycle and autocorrelation.
It was mentioned that we use monthly anomalies, and autocorrelation was easily taken into account in the confidence intervals given above.
4) You shifted to your fascination with the intercept, but got confused with respect to whether it was the intercept of the OLS fit to the OHC data vs. the GISS model projections. You refused to clarify when asked. Finally, when this was pointed out, you suggested it was “splitting hairs”.
It’s hard to figure out which points you concede, mentioning that you “took shortcuts”, that your “experience with time series is somewhat limited”, that making a proper argument would be “quite a lot more work”, etc. Other points you simply seem to ignore, or to bring up several posts later with increasingly vague language. Rather than getting more specific or technical when asked, or responding to actual points, you have continued more and more towards just parroting your previous claims. You haven’t even clarified what you feel Bob’s conclusions/hypothesis was that needed to be upheld.
I understand that you feel you *know* Bob has cherrypicked 2003. This is why you feel every rebuttal to your invalid technical arguments is simply “fiddling around the edges to cast doubt on the criticism”. You are using the “Yeah but still” form of argument:
Person A, “X is true because of Y.”
Person B, “Umm, Y is wrong.”
Person A, “Yeah, but still…”
Heck, X may be true. Perhaps this flat line in OHC is due to errors in the data, perhaps it is only a temporarily blip, perhaps the actual GISS projections when we get them will show they predicted this smaller trend. But the technical arguments you’ve given here are simply wrong, and yet you continue to repeat your claims in increasingly vague terms.
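The autocorrelation correction mentioned in point 3 above is, in outline, the usual effective-sample-size adjustment for serially correlated residuals. Here is a minimal sketch, assuming a lag-1 (AR(1)) error model and entirely made-up monthly anomalies (the numbers are illustrative, not the NODC data):

```python
import numpy as np

def trend_with_ar1_ci(y, z=1.96):
    """OLS slope per step, with the confidence interval widened for lag-1
    autocorrelation via an effective-sample-size correction."""
    n = len(y)
    x = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    n_eff = n * (1 - r1) / (1 + r1)                 # effective sample size
    # Standard error of the slope, with n replaced by n_eff in the dof term
    se = np.sqrt(np.sum(resid**2) / (n_eff - 2) / np.sum((x - x.mean())**2))
    return slope, slope - z * se, slope + z * se

# Hypothetical monthly anomalies: flat trend plus AR(1) noise.
rng = np.random.default_rng(1)
noise = np.zeros(96)
for t in range(1, 96):
    noise[t] = 0.6 * noise[t - 1] + rng.normal(0, 0.1)

slope, lo, hi = trend_with_ar1_ci(noise)
print(f"slope {slope:.4f} per month, 95% CI [{lo:.4f}, {hi:.4f}]")
```

With positively autocorrelated residuals, the effective sample size is smaller than the raw count, so the interval is wider than the naive OLS interval; that is the sense in which autocorrelation was “taken into account in the confidence intervals.”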
Statistically speaking, you have put your foot in your mouth several times in this thread. Take a closer look and compare the variability in each of the time segments you spoke of with the 2003 – 2010 segment. Are you sure you really want to make this claim?
An arbitrary increase in power because of an increasing N when, in truth, the amount of data is the same. It’s no better than copying and pasting the same series again and again.
Why don’t you test it?
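The duplication point is easy to test: pasting the same observations in a second time shrinks the naive OLS standard error even though no new information was added. A small sketch with made-up data:

```python
import numpy as np

def slope_se(x, y):
    """Naive OLS standard error of the slope (assumes independent errors)."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    s2 = np.sum(resid**2) / (len(y) - 2)
    return np.sqrt(s2 / np.sum((x - x.mean())**2))

rng = np.random.default_rng(2)
x = np.arange(32, dtype=float)
y = 0.1 * x + rng.normal(0, 1, x.size)

se_once = slope_se(x, y)
# "Doubling N" by pasting the same observations in again: no new information,
# but the naive standard error shrinks as if there were.
se_twice = slope_se(np.tile(x, 2), np.tile(y, 2))
print(f"SE with real data: {se_once:.4f}, SE after duplication: {se_twice:.4f}")
```

The slope estimate is identical in both fits; only the apparent precision changes, which is exactly the spurious gain in power being described.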
The crux of this point is that with climate data it’s pretty meaningless to examine data on sub-decadal time scales. And that doesn’t mean just one decade. Anyway, I give up. My presence here, discussing a complex subject which I, and many of you, are not fully competent to discuss, is unhelpful.
kdkd says: “My presence here, discussing a complex subject which I, and many of you, are not fully competent to discuss, is unhelpful.”
Since you’ve already proven you lack the competence for this discussion, why do you continue to comment?
Bob, I can also truncate axes, produce incomplete information to assess the presented conclusions etc. I also lack the skill to deal with the wiggle room the commenters here afford themselves. However it still stands that your sub-decadal analysis is insufficient to support your point. You are looking at weather, and instrumental error, not climate.
Have a look at this graph. You can see clearly a number of deviations from the mean slope of similar magnitude to the one at the end of the series (1960 ish, 1970-ish, mid-80s and mid-90s), given the assumption that the dataset you are using is noisy rather than approaching 100% signal.
Please stop replying to my posts and I’ll be out of here 🙂
kdkd says: “Please stop replying to my posts and I’ll be out of here.”
Are you telling me that your continued nonsensical comments here are an attempt to get the last word? That’s funny. Thanks. I can always use a laugh.
This is not a reply to kdkd. I don’t know about his/her competencies but kdkd stated: “Please stop replying to my posts and I’ll be out of here.” Hopefully, he/she meant and will stand behind that statement.
Mea culpa again. As there’s some evidence to suggest that this hotspot is a short-term phenomenon, my point stands: for that to be significant enough to cast the doubt on anthropogenic climate change that you want to see, you’d have to expect to be able to replicate the same pattern of temperature change elsewhere in the climate system. I wonder if that work is in progress?
kdkd says: “However it still stands that your sub-decadal analysis is insufficient to support your point. You are looking at weather, and instrumental error, not climate.”
Once again you’ve forgotten the topic of conversation. We’re discussing Ocean Heat Content on this thread, not land surface temperature, not sea surface temperature. Ocean Heat Content is not like surface temperature observations. With ARGO, the measurement of temperature and salinity at depth is spatially complete and provides a reasonable estimate of Ocean Heat Content. That’s why it was conceived and installed. Land surface temperature measurement, on the other hand, is still spatially incomplete to this day. Ocean Heat Content is mass weighted when it is presented in Joules. There are no lags involved. There is no reason to present the data on a decadal basis. If the models project that ocean heat content should be rising and it’s not rising, then the question needs to be asked, why isn’t it rising? The failure of OHC to rise in agreement with the models also calls into question the forcings used by the modelers to recreate the past rise in Ocean Heat Content.
Do the GISS OHC models include natural modes of ocean variability? The Model-ER used in Hansen et al (2005) does not model ENSO, that’s stated in the paper, yet ENSO is responsible for much of the rise in Global Ocean Heat Content. That’s plainly visible when you take the time to divide the oceans into logical subsets of the ocean basins. (But you’ve nonsensically referred to looking at subsets as geographic cherry picking.) There is no mention of Atlantic Meridional Overturning Circulation or the Atlantic Multidecadal Oscillation in Hansen et al (2005), and a quick look at the GISS Model E available through the KNMI Climate Explorer as part of the IPCC data is all it takes to observe that the AMO is not included. Yet more than 30% of the rise in global Ocean Heat Content until 2005 comes from the North Atlantic, which represents only about 12% of the surface area of the global oceans. North Atlantic OHC has been dropping like a stone since 2005. If, and I recognize the significance of the word if…if the drop in North Atlantic OHC is associated with the multidecadal variability of the North Atlantic, it will continue to drop for a couple of decades, and that decline will help to keep the global OHC flat during that epoch. I have noted that in past OHC updates and in posts about North Atlantic OHC, using the word if, and calling attention to the significance of the word if.
The reason for the short-term graph that’s the ultimate topic of this post is to show that Global OHC is not rising according to model projections and, hopefully, to get people to ask, how many more years do the OHC observations have to remain flat before the Models can be rejected. You can read more into that graph if you like and continue to squawk about it, but that’s all the graph is intended to present.
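The basin-share arithmetic above (roughly 12% of the ocean area supplying more than 30% of the rise) is straightforward to lay out. The numbers below are a toy illustration of the calculation only, not actual NODC values: a basin's share of the global OHC rise in Joules can far exceed its share of area, because OHC anomalies are absolute heat, not area-normalized.

```python
# Illustrative (made-up) basin contributions to a global OHC rise.
basins = {
    "North Atlantic": {"area_frac": 0.12, "ohc_rise_1e22J": 4.5},
    "Rest of oceans": {"area_frac": 0.88, "ohc_rise_1e22J": 9.5},
}

total_rise = sum(b["ohc_rise_1e22J"] for b in basins.values())
for name, b in basins.items():
    share = b["ohc_rise_1e22J"] / total_rise
    print(f"{name}: {100 * b['area_frac']:.0f}% of area, {100 * share:.0f}% of OHC rise")
```

With these hypothetical figures the North Atlantic supplies about a third of the global rise from an eighth of the area, which is the shape of the disparity being described.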
Hank McCard says: “Hopefully, he/she meant and will stand behind that statement.”
kdkd’s nonsense may appear to be realistic to some, so it’s tough not to reply. Hopefully, my last reply to kdkd will be my last reply to kdkd.
Yes, it appears that there are enough wiggle watchers here who will:
1. Accept subdecadal trends as climate indicators.
2. Accept the idea that climate data are fundamentally not noisy.
3. Tend to support the idea that climate indicators should be monotonic.
4. Will hit many tangents in order to avoid addressing the substantive concerns with their analyses.
If you can’t accept that truncating the series to start at 2003/4 is invalid; that there are many other points in the series which show a similar change in gradient; and that the various model projections show that there’s nothing unusual about that magnitude of a change in gradient (unless it’s shown to be permanent in subsequent years for which there are not yet observations), then nobody’s going to be able to persuade you about the flaws in your analysis.
However, I accept that I’m not in a very good position to engage in the detail of this argument. The trouble is, I don’t think that you are either.
kdkd: You have again rewritten your opinions about a graph. I have responded to your concerns in the post and in comments. It is obvious we disagree on the need for and purpose of that graph.
kdkd: You had reached nuisance level days ago. Your May 21, 2011 at 8:01 am comment offered nothing new. It and you are now treated as spam. You’re no longer welcome on this thread. If you feel you can offer something of substance to a future thread, feel free to comment.
Jack Greer wrote above that “A most elementary tenant of graphical presentation is that a graph must stand on its own in conveying ideas. It shouldn’t depend on external text telling people what scale applies to what graphic element; what to glean and what to ignore”
That’s not right; there is nothing wrong with explanations. “Measurements are not to provide numbers but insight.” – Ingrid Bucher.
Pingback: GISS OHC Model Trends: One Question Answered, Another Uncovered | Bob Tisdale – Climate Observations
Pingback: January to March 2011 NODC Ocean Heat Content (0-700Meters) Update and Comments | Bob Tisdale – Climate Observations
Pingback: Tisdale on 2011 ocean heat content and the GISS-Miss | Watts Up With That?
Pingback: 2nd Quarter 2011 NODC Global OHC Anomalies | Bob Tisdale – Climate Observations
Pingback: Global Ocean Heat Content Is Still Flat | Watts Up With That?
Pingback: April to June 2011 NODC Ocean Heat Content Anomalies (0-700Meters) Update and Comments | Bob Tisdale – Climate Observations
Pingback: Tisdale on Ocean Heat Content Anomalies | Watts Up With That?
Bob, thank you for allowing kdkd to post, and thank you for making the decision to cut him/her off after s/he stopped providing any substantial new ideas. It has been a very informative discussion, both on the Science and on the Psychology of a Warmist.
I think it would have been appropriate to cut him/her off when s/he made this comment:
“And I’m sick of treating denier arguments as genuine.”
At that point, you have to ask yourself…why is s/he here? It’s not for discussion; it’s to fight. It’s to make a sanctimonious public announcement that, by golly, right or wrong, “denier” arguments are not genuine.
My absolute favorite quote:
“Again I’ve seen lots that criticises my methodology (understandably, both because I took shortcuts, and because it’s a denier blog”
Well, yes. If kdkd “took shortcuts” (which is as close as s/he can come to saying they were just wrong, and wrong again, and wrong yet again) while saying warmist-supporting things on a warmist blog, I agree that no one would have pointed out the problems.
That’s why I don’t visit a lot of warmist blogs.
I thought, though, that there was a policy on the word “denier”?
Pingback: GISS-ER and Ocean Heat Content « Troy's Scratchpad
Pingback: July to September 2011 NODC Ocean Heat Content Anomalies (0-700Meters) Update and Comments | Bob Tisdale – Climate Observations
I have seen it all. The groupthink denial moved in like a rain depression. What we have in reality, after reading this, is “I was a climate denier until…” Yes, be very skeptical, indeed very skeptical, of this blog.
After investigating what kdkd has written, this is a testament to his truthfulness. White becoming black, or should I say murky. Yes, folks, this is the murky world of climate denial at its best. One man – yes sir – exposing the whole as a fraud. One man presents a misleading graph on Watts Up and is believed without any logical questioning. Those who understand statistics should hang their heads in shame. Our world may well change forever* over the next 50 years.
* The great warming will have started to have huge impacts by 2090 and will last for up to 500 to 3000 years. New evidence has emerged in a newly released paper.
This new finding overturns a former paper that misread and miscalculated the historical geological record and the tipping of CO2 ppm levels on ice bound or ice free Antarctica.
Received for publication 7 February 2011.
Accepted for publication 13 October 2011.
The Pagani et al (2011) reconstruction suggests that a significant and rapid episode of CO2 drawdown occurred just before and during the cooling that led to the onset of Antarctic glaciation, and the drawdown took CO2 levels to 600-700ppm – below the modelled threshold value for the initiation of Antarctic glaciation. The converse of this is that, in an ice-free world, atmospheric CO2 levels much above 600-700ppm would not favour temperatures low enough for the development of glaciers in that continent!
advisor: If you understood the subject matter, you would not have provided the off-topic link. You’ve furnished nothing that supported kdkd. Those who read your comment will realize that you find kdkd credible simply because you are like minded in your belief about anthropogenic global warming.
Have a nice day.
Bob, just an fyi. Knox and Douglass also noted in their recent paper in a peer-reviewed journal that the Argo floats showed cooling … and they used a chart based on data from 2003-2009.
“Recent energy balance of Earth”, R. S. Knox and D. H. Douglass, International Journal of Geosciences, 2010, vol. 1, no. 3
One wonders at times like this if climate warming pushers are not the real deniers.
wmb says: “Bob, just an fyi. Knox and Douglass also noted in their recent paper in a peer reviewed journal that the Argo floats showed cooling…”
Thanks, wmb. But the NODC updated their ARGO-era OHC data toward the end of 2010 and that eliminated the negative trend since 2003. See:
Pingback: October to December 2011 NODC Ocean Heat Content Anomalies (0-700Meters) Update and Comments | Bob Tisdale – Climate Observations
Pingback: October to December 2011 NODC Ocean Heat Content Anomalies (0-700Meters) Update and Comments | Watts Up With That?
Pingback: October to December 2011 NODC Ocean Heat Content Anomalies (0-700Meters) Update and Comments | TaJnB | TheAverageJoeNewsBlogg
Pingback: October to December 2011 NODC Ocean Heat Content Anomalies (0-700Meters) Update and Comments | My Blog
Pingback: Tamino Once Again Misleads His Followers | Bob Tisdale – Climate Observations
Pingback: Tamino Once Again Misleads His Followers | Watts Up With That?
Pingback: Part 2 of Tamino Once Again Misleads His Disciples | Bob Tisdale – Climate Observations
Pingback: Part 2 of Tamino Once Again Misleads His Disciples | Watts Up With That?
Pingback: Corrections to the RealClimate Presentation of Modeled Global Ocean Heat Content | Bob Tisdale – Climate Observations
Pingback: Gavin Schmidt issues corrections to the RealClimate Presentation of Modeled Global Ocean Heat Content | Watts Up With That?
Pingback: Dana1981 at SkepticalScience Tries to Mislead His Readers | Bob Tisdale – Climate Observations
Pingback: Dana Nuticelli’s Skeptical Science OHC grapple – down for the count | Watts Up With That?
Pingback: Part 2 of “On Sallenger et al (2012) – Hotspot of Accelerated Sea Level Rise on the Atlantic Coast of North America” | Bob Tisdale – Climate Observations
Pingback: Argo, profiled … Tisdale corrects | pindanpost
Pingback: Is Ocean Heat Content Data All It’s Stacked Up to Be? | Bob Tisdale – Climate Observations
Pingback: Is Ocean Heat Content Data All It’s Stacked Up to Be? | Watts Up With That?
Pingback: On Sallenger et al (2012) – Hotspot of Accelerated Sea Level Rise on the Atlantic Coast of North America | Bob Tisdale – Climate Observations
Pingback: Tamino Resorts to Childish Attempts at Humor But Offers Nothing of Value | Bob Tisdale – Climate Observations
Pingback: Tamino Resorts to Childish Attempts at Humor But Offers Nothing of Value | Watts Up With That?