Introduction To The NODC Ocean Heat Content Anomaly Data For Depths Of 0-2000 Meters

UPDATE:  I changed the color scheme of the first two illustrations at the request of a reader.

The National Oceanographic Data Center (NODC) recently posted a new Ocean Heat Content (OHC) anomaly dataset on its website. It is available on annual and quarterly bases, alongside the data for its standard, documented dataset covering depths of 0-700 meters. I looked for, but was unable to find, any papers (in any state of publication) supporting the new 0-2000 meter OHC data. We’ll just have to wait and see how the NODC intends to present this dataset.

The data for the depths of 0-700 meters is, of course, documented in the paper Levitus et al (2009) “Global ocean heat content (1955-2008) in light of recent instrumentation problems”. Refer to the manuscript. The dataset was revised in 2010, as noted in the October 18, 2010 post Update And Changes To NODC Ocean Heat Content Data. As described in the NODC’s explanation of the ocean heat content (OHC) data changes, the revisions result from “data additions and data quality control,” from a switch in base climatology, and from revised Expendable Bathythermograph (XBT) bias calculations.

COMPARISON OF GLOBAL OHC ANOMALIES: 0-700 METERS VERSUS 0-2000 METERS

Figure 1 compares the quarterly NODC OHC anomaly data for the depths of 0-700 meters and 0-2000 meters on a global basis. As noted on the illustration, the most obvious divergence between the two datasets occurs during the ARGO era. This is the period when ARGO floats became the dominant means of sampling ocean temperature and salinity at depth.

Figure 1

If we limit the comparison to the period from 1970 to 1999 (Figure 2), we can see that there is basically no difference in the linear trends. There are minor differences from year to year, but the two datasets appear to be essentially the same. Why?

Figure 2
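
For readers who want to reproduce this sort of trend comparison themselves, here is a minimal sketch in Python. The file names, column names, and index layout are hypothetical placeholders for data saved from the NODC website (which publishes OHC anomalies in units of 10^22 Joules), so adjust them to match whatever you download.

```python
# Minimal sketch: compare least-squares linear trends of the quarterly global
# OHC anomaly series (0-700 m and 0-2000 m) over 1970-1999. The file names and
# column names are hypothetical placeholders for data saved from the NODC site.
import numpy as np
import pandas as pd

def linear_trend(series):
    """Least-squares slope (series units per year) for data indexed by decimal year."""
    slope, _intercept = np.polyfit(series.index.values.astype(float), series.values, 1)
    return slope

def subperiod(series, start, end):
    """Restrict a decimal-year-indexed series to start <= year < end."""
    return series[(series.index >= start) & (series.index < end)]

# Quarterly global OHC anomalies (10^22 J), indexed by decimal year (1955.125, 1955.375, ...)
ohc_700 = pd.read_csv("nodc_ohc_global_0-700m_quarterly.csv", index_col="year")["ohc"]
ohc_2000 = pd.read_csv("nodc_ohc_global_0-2000m_quarterly.csv", index_col="year")["ohc"]

print("0-700 m trend, 1970-1999:  %+.4f x 10^22 J/yr" % linear_trend(subperiod(ohc_700, 1970, 2000)))
print("0-2000 m trend, 1970-1999: %+.4f x 10^22 J/yr" % linear_trend(subperiod(ohc_2000, 1970, 2000)))
```

If the two datasets behave as described above, the two slopes should come out nearly identical for this period.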

There are extremely few observations prior to the year 2000 at depths greater than 1000 meters. This is illustrated in Figure 3. (Note that the NOAA Climate Prediction Center Data Distribution webpage breaks down the temperature profiles into depths of 0-250 meters, 250-500 meters, 500-1000 meters and 1000-5000 meters. Those depth ranges don’t agree with the depth ranges presented by the NODC for its Ocean Heat Content anomaly data.)

Figure 3

And Animation 1 shows a series of annual maps of the locations of temperature profiles from 1979 to 2005 for the depths 1000-5000 meters. As illustrated, there is also very little spatial coverage at these depths until the introduction of the ARGO floats.

Animation 1

As a reference, Figure 4 shows the number of temperature profiles for depths of 250 to 500 meters. There were between 2,000 and 5,000 temperature profiles per month at these depths between the late 1970s and the late 1990s, before the ARGO floats were deployed. Note that the TAO/TRITON project (red curve) shows temperature profiles that were initially limited to the equatorial Pacific (coordinates approximately 8S-9N, 137E-95W). Those buoys were deployed for the study of El Niño and La Niña events. The locations were later expanded to include portions of the Tropical Atlantic and Indian Oceans under the PIRATA and RAMA projects. Refer to the TAO Project Global Array webpage. So while there are a good number of temperature profiles for the TAO project, they are limited in location.

Figure 4

Figure 5 illustrates the difference between the two NODC Global Ocean Heat Content (OHC) datasets, where the 0-700 meter data has been subtracted from the 0-2000 meter data. Referring back to Figure 3, the difference between the two datasets appears to increase in concert with the number of temperature samples at depths greater than 1000 meters. It appears as though the divergence of the 0-2000 meter dataset from the 0-700 meter data since around 2000 could be caused by the increased number of samples at depth and by the increased spatial coverage of the ARGO floats, as shown in the animations. The impacts on short-term and long-term trends of the increased number of samples at depths greater than 700 meters, and of the increased area of observations, should be determined. (A study such as that is well beyond my capabilities.) Maybe it will be documented in the NODC paper that accompanies the 0-2000 meter dataset.

Figure 5
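
The difference series in Figure 5 is simple to construct once the two quarterly series are aligned. Continuing the sketch shown after Figure 2 (same hypothetical file layout and helper functions), the residual approximates the contribution of the 700-2000 meter layer, at least for the period when that layer is actually being sampled.

```python
# Sketch: difference the two quarterly series to isolate the implied
# 700-2000 m contribution, then check its ARGO-era trend.
# Reuses ohc_700, ohc_2000 and linear_trend() from the earlier sketch.
diff_700_2000 = (ohc_2000 - ohc_700).dropna()  # align on common quarters

argo_era = diff_700_2000[diff_700_2000.index >= 2003]
print("Implied 700-2000 m trend, 2003 onward: %+.4f x 10^22 J/yr" % linear_trend(argo_era))
```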

Keep in mind, before the ARGO era, there were very few ocean temperature observations at any depth in the Southern Hemisphere south of about 40S. For example, Animation 2 is a gif animation of maps that illustrate the locations of temperature profiles for depths of 0-250 meters, 250-500 meters, 500-1000 meters, and 1000-5000 meters for the year 1995.

Animation 2

And Animation 3 shows the same series of temperature profile maps but for the year 2005.

Animation 3

A COUPLE OF QUESTIONS FOR READERS

Were the Expendable Bathythermograph (XBT) probes with wire lengths of 760 meters the most commonly used XBT probes before the ARGO era? Is this the reason the NODC originally limited the depth of its Ocean Heat Content anomaly data to 700 meters? Does anyone recall a paper that presents this? I had always assumed the 700 meter depth was selected due to the number and locations of observations, but I have never seen it stated in a paper.

LONG-TERM TRENDS PER OCEAN BASIN

When I originally prepared the graphs for this post, I could find no reason to present the long-term trends for the individual ocean basins of the 0-2000 meter data. The reason: in some respects, the NODC OHC data for 0-2000 meters appears to me to be simply a 0-2000 meter OHC dataset spliced onto a 0-700 meter dataset. But on further thought, my failure to present the data might be taken by some as an attempt on my part to hide something. So Figure 6 (0-2000 meters) and Figure 7 (0-700 meters) are long-term trend comparisons of the Ocean Heat Content anomalies for the individual ocean basins as presented by the NODC. The most obvious similarity is that the long-term trends of North Atlantic Ocean Heat Content are significantly higher than those of the other ocean basins in both datasets, and in both, the North Atlantic Ocean Heat Content peaked in 2004. After that, there are significant declines. One would think this would lead researchers to examine the effects of the Atlantic Multidecadal Oscillation and the Meridional Overturning Circulation on North Atlantic Ocean Heat Content observational data, yet, as far as I know, this is an area unexplored by climate scientists.

Figure 6

#########################

Figure 7

ARGO-ERA TRENDS PER OCEAN BASIN

The ARGO-era (2003 to present) linear trends per ocean basin for the depths of 0-2000 meters and 0-700 meters are shown in Figures 8 and 9, respectively. As with the 0-700 meter data, the South Atlantic and Indian Oceans are the only basins with significantly positive linear trends in the 0-2000 meter Ocean Heat Content data. And, also as with the 0-700 meter data, the linear trends of the 0-2000 meter Ocean Heat Content anomalies in the North Atlantic and South Pacific are negative. The linear trends for those two ocean basins are less negative for the 0-2000 meter depths than for the 0-700 meter depths, indicating that the declines at depths of 0-700 meters are greater than the increases at the 700-2000 meter depths. Considering there is less than a decade of ARGO-era data with “full” coverage, there is no need to speculate about the cause. Note also that the trend for the North Pacific OHC anomalies is basically flat in the 0-2000 meter data, and that the same holds true for the 0-700 meter data.

Figure 8

#########################

Figure 9
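
As a side note, because a linear trend is a linear operation, the implied ARGO-era trend of the 700-2000 meter layer in each basin is simply the 0-2000 meter trend minus the 0-700 meter trend (provided both series cover the same quarters). A minimal sketch, again with hypothetical file names and reusing the helpers from the earlier sketches, would loop over the basin subsets.

```python
# Sketch: ARGO-era (2003 onward) trends per basin for both layers, plus the
# implied 700-2000 m layer trend as their difference. File names are
# hypothetical placeholders; reuses pd and linear_trend() from the earlier sketch.
basins = ["north_atlantic", "south_atlantic", "north_pacific", "south_pacific", "indian"]

for basin in basins:
    s700 = pd.read_csv("nodc_ohc_%s_0-700m_quarterly.csv" % basin, index_col="year")["ohc"]
    s2000 = pd.read_csv("nodc_ohc_%s_0-2000m_quarterly.csv" % basin, index_col="year")["ohc"]
    t700 = linear_trend(s700[s700.index >= 2003])
    t2000 = linear_trend(s2000[s2000.index >= 2003])
    print("%-15s 0-700 m: %+.4f   0-2000 m: %+.4f   implied 700-2000 m: %+.4f (10^22 J/yr)"
          % (basin, t700, t2000, t2000 - t700))
```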

CLOSING COMMENTS

The undocumented (as of this writing) NODC 0-2000 meter Ocean Heat Content dataset appears to have been prepared to show that Global Ocean Heat Content has continued to rise during the ARGO era, and to counter the argument that Global Ocean Heat Content has flattened during the ARGO era, as shown in the NODC 0-700 meter dataset.

Due to the extremely limited number of observations at depths of 1000-5000 meters (shown in Figure 3 and in the animations), the 0-2000 meter Ocean Heat Content dataset should be used with great caution. It appears to me to be an ARGO-era 0-2000 meter Ocean Heat Content dataset spliced onto a long-term 0-700 meter dataset. For this reason, I, personally, would not expend the effort to analyze the long-term (pre-ARGO era) 0-2000 meter NODC OHC data beyond what has been presented in this post.

Each time I see the claim (based on many assumptions) by scientists who are proponents of anthropogenic global warming that the rise in ocean heat content at depth “will come back to haunt us,” I wonder why those same scientists have not attempted to document how much of the rise in 0-700 meter OHC from the 1970s to the early 2000s was caused by the deep oceans upwelling warmer anomalies from past decades, other than the fact that there are no data for them to do so. Could they believe that multidecadal variability is limited to Sea Surface Temperatures and does not impact temperatures at depth? Or is their intent to have the unsuspecting public believe it?


About Bob Tisdale

Research interest: the long-term aftereffects of El Niño and La Niña events on global sea surface temperature and ocean heat content. Author of the ebook Who Turned on the Heat? and regular contributor at WattsUpWithThat.

24 Responses to Introduction To The NODC Ocean Heat Content Anomaly Data For Depths Of 0-2000 Meters

  1. Sundance says:

    @ – “Due to the extremely limited number of observations at depths of 1000-5000 meters (shown in Figure 3 and in the animations), the 0-2000 meter Ocean Heat Content dataset should be used with great caution.”

    As always you do an excellent job in your analysis and ask great questions in challenging the NODC dataset. Will they draw attention to the limited pre-2000 data and note that the 0-2000 meter OHC dataset should be used with caution? Call me cynical, but I see caution becoming a victim of political expediency that necessitates that the “missing heat” be found, even if said heat is an artifact of poor and poorly merged OHC data. 🙂 I’m willing to bet money that the 0-2000 meter OHC dataset will be touted as the smoking gun for the discovery of the “missing heat,” because it is politics rather than science that drives climate science now, and it is now more important politically to promote the IPCC consensus than it is to question data integrity and ensure the science is robust. I hope I’m wrong.

  2. cthulhu says:

    Surely the 0-2000m data show an increase in OHC since 2003, irrespective of whether the record is accurate pre-2000 (I don’t think it is; I reckon it has hardly any 700-2000m data).

    I am far from confident that 2003-present is long enough a time period to analyze trends, especially as the period covers an impressive solar cycle decline and ENSO events haven’t been distributed evenly since 2003.

    But if 2003-present is long enough to conclude 0-700m has no increase in heat, then it seems to me that it would have to be long enough to conclude 0-2000m has had an increase in heat.

    I don’t buy the idea that the 0-2000m has been faked by scientists to show heat gain. If they wanted that so bad why didn’t/don’t they just fake heat gain in the 0-700m data?

  3. Bob Tisdale says:

    cthulhu says: “I don’t buy the idea that the 0-2000m has been faked by scientists to show heat gain. If they wanted that so bad why didn’t/don’t they just fake heat gain in the 0-700m data?”

    I don’t believe I have ever stated or implied that scientists falsify Ocean Heat Content data to show a heat gain.

  4. Sean says:

    In response to Roger Pielke Sr.’s supposition that total ocean heat content is the best measure of how the planet is heating up, Gavin Schmidt said that looking at the increase in sea level is probably a better indicator since the steric heating accounts for much of the change as well as melting at the poles. How does one reconcile the sea level data with the 0-2000 meter heat content data if global sea level changes have been slowing?

  5. Pingback: Hiding declines…or clutching at straws | pindanpost

  6. Bob Tisdale says:

    Sean says: “How does one reconcile the sea level data with the 0-2000 meter heat content data if global sea level changes have been slowing?”

    Assuming your question was not rhetorical: We can’t reconcile it. There is little to no OHC data at depths of 1000-2000 meters before the ARGO era, so we don’t know whether the 0-2000 meter OHC data has also slowed. (For a rough sense of the scale involved, a back-of-the-envelope conversion from heat content to thermosteric sea level is sketched after the comments.)

  7. Michael Twomey says:

    Bob,

    First: Thanks for your work on this website. It is a great educational service and much appreciated.

    Second, a question: Is the reconstruction of the NODC OHC data in the pre-ARGO era in any way dependent upon the NCDC atmospheric data? I realize that the former is oceanic and the latter is atmospheric, and I realize that ocean temperatures should be a significant driver of atmospheric temperatures and not vice versa. Nonetheless, I ask because I wonder whether NODC has reconstructed the OHC to be consistent with the atmospheric data. (I’m not suggesting bad faith on their part, just wondering if the datasets are truly independent.)

    Michael Twomey

  8. Bob Tisdale says:

    Michael Twomey says: “Is the reconstruction of the NODC OHC data in the pre-ARGO era in any way dependent upon the NCDC atmospheric data?”

    Nope. The OHC data is based on surface and subsurface temperature and salinity observations.

  9. Bob says:

    You know how they have volume vs. extent for sea ice? It would be interesting to do the same for SSTAs. Like having a single contour line separating above and below average, then calculating the area. Then have a time series going back to ’79.

    Then it’s possible you might have net warming, even though there is a greater area of cooler water.
    Have you done this, Bob?

  10. Bob Tisdale says:

    Bob, I’m not sure what you’re asking. You’re discussing Sea Surface Temperature anomalies (ssta) on an Ocean Heat Content thread, which causes some of my confusion. And I primarily use a spatially complete, satellite-based Sea Surface Temperature dataset (Reynolds OI.v2) for most of my sea surface temperature anomaly posts.

    Please clarify your question.

  11. Bob Tisdale says:

    Bob, on the following link, select “monoiv2.ctl Monthly OIv2 SST” and “time series”, then click on “next page”:
    http://nomad3.ncep.noaa.gov/cgi-bin/pdisp_sst.sh?lite=
    In the first drop-down menu, select “ssta OIv2 SST monthly anomaly (C) rel to 1971-2000”. The default coordinates are set for global data. So select “Plot”. Your output is a time series graph:
    http://nomad3.ncep.noaa.gov/cgi-bin/pdisp_sst.sh?ctlfile=monoiv2.ctl&ptype=ts&var=ssta&level=1&month=nov&year=1981&fmonth=sep&fyear=2011&lat0=-90&lat1=90&lon0=-180&lon1=180&plotsize=800×600&dir=

    Is that what you were looking for?

  12. Bob says:

    No, I’ll try again (please note, what I’m looking for is probably a stupid idea!)

    Take for example this plot

    http://nomad3.ncep.noaa.gov/cgi-bin/pdisp_sst.sh?ctlfile=monoiv2.ctl&ptype=map&var=ssta&level=1&op1=none&op2=none&month=sep&year=2011&proj=default&lon0=280&dlon=50&lat0=-60&dlat=60&type=shaded_contour&cint=10&white=def&plotsize=800×600&title=&dir=

    What is the sum of the blue and red areas? Or what is the magnitude of the red area?
    For example, let’s say all the red equals 60% of the sea surface, and the blue then must equal 40%. Therefore, for September, the ocean area anomaly would be positive 10%.

    So I don’t care if an area is .1 or .2 or even 5 degrees above average in different parts of the plot, I just want the total area of anything above zero……in a time series.

    It’s cool if you can’t help me, and if it cannot be done easily then I wouldn’t bother trying it myself. It would be interesting to know what would happen to the area of (ssta > 0 deg C) during ENSO.

    Is the area of warm ocean increasing? Probably! but would be nice to have data rather than just guessing.

  13. Bob Tisdale says:

    Thanks for the clarification, Bob. I have no way of performing the analysis you’re discussing. And the problem I foresee is where to set the zero for anomalies. Graphically, changing the base years for temperature anomalies only shifts where the data intersects with zero. The curve is basically the same regardless of base years. But wouldn’t the perspective of what you’re trying to do be skewed by where you elect to set the zero? (For anyone who wants to try the calculation with gridded data, a rough sketch is included after the comments.)

  14. Bob says:

    Yes the curve is the same, like this:

    So you would just set it to “ssta OIv2 SST monthly anomaly (C) rel to 1971-2000”

    It is a massive filter and no doubt it would destroy the data. But what is the underlying structure of the warming area? Just because the monthly anomaly is positive (for example, last month, September, equals 0.13), does that mean there is more ‘warm water surface area’? Maybe!
    Yes, there is more ‘warm water volume’, but it is not the same.

    You never know, maybe we’ll discover that warm water area is decreasing!

    Although if you compare September 1982

    http://nomad3.ncep.noaa.gov/cgi-bin/pdisp_sst.sh?ctlfile=monoiv2.ctl&ptype=map&var=ssta&level=1&op1=none&op2=none&month=sep&year=1982&proj=default&lon0=280&dlon=50&lat0=-60&dlat=60&type=shaded&cint=100&white=def&plotsize=800×600&title=&dir=

    with 2002

    http://nomad3.ncep.noaa.gov/cgi-bin/pdisp_sst.sh?ctlfile=monoiv2.ctl&ptype=map&var=ssta&level=1&op1=none&op2=none&month=sep&year=2002&proj=default&lon0=280&dlon=50&lat0=-60&dlat=60&type=shaded&cint=100&white=def&plotsize=800×600&title=&dir=

    It’s likely to be a positive trend…

  15. George says:

    If the 2000 meter data are increasing in temperature and the 700 meter data are decreasing, does this mean the ocean is being heated from below?

    (just sort of kidding but the thing is, I can’t think of an atmospheric condition that would cause the oceans from 700-2000 meters to heat while the ocean from 0-700 cooled unless we are seeing some sort of churning going on here that we haven’t seen before.)

  16. Bob Tisdale says:

    George: The process that comes to mind is Meridional Overturning Circulation, in which waters nearer the poles are subducted then upwell decades later toward the equator. I assume, therefore, for these conditions to exist we’re seeing warmer-than-normal waters being subducted to the 700-2000 meter depths faster than they are being replaced in the 0-700 meter depths.

  17. Pingback: E-Mail Exchange With Josh Willis On The Ability To Monitor The Transfer Of Heat In The Oceans To Levels Below 700m | Climate Science: Roger Pielke Sr.

  18. Pingback: Missing gif Animations | Bob Tisdale – Climate Observations

  19. tallbloke says:

    Hi Bob, why no southern ocean ARGO data?

    Thanks

    TB

  20. Bob Tisdale says:

    tallbloke: This data used to be available from the NODC website. They didn’t have a separate breakout for the Southern Ocean. Presumably it is included as part of the other Southern Hemisphere data.

  21. tallbloke says:

    Thanks Bob, so it’s subsumed in south pacific and south atlantic?

  22. Pingback: NODC’s Pentadal Ocean Heat Content 0 to 2000m Creates Warming That Doesn’t Exist in the Annual Data – A Lot of Warming | Bob Tisdale – Climate Observations

  23. Pingback: NODC’s Pentadal Ocean Heat Content (0 to 2000m) Creates Warming That Doesn’t Exist in the Annual Data – A Lot of Warming | Watts Up With That?
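
Regarding Sean’s question about reconciling sea level with the 0-2000 meter heat content data (comments 4 and 6): while the two records can’t really be reconciled given the sampling problems discussed above, a very rough back-of-the-envelope conversion shows the scale of the thermosteric (thermal expansion) contribution implied by a given heat content change. The coefficient values below are round-number assumptions (the thermal expansion coefficient in particular varies strongly with temperature and pressure), so treat the result as an order-of-magnitude illustration only.

```python
# Rough order-of-magnitude sketch: thermosteric sea level contribution implied
# by a given change in ocean heat content. All coefficients are round-number
# assumptions, not measured values.
ALPHA = 1.5e-4       # thermal expansion coefficient of seawater, 1/K (varies with T, S, p)
RHO = 1025.0         # seawater density, kg/m^3
CP = 3990.0          # specific heat of seawater, J/(kg K)
OCEAN_AREA = 3.6e14  # global ocean surface area, m^2

def thermosteric_rise_mm(delta_ohc_joules):
    """Sea level change (mm) from thermal expansion of the water that absorbed the heat."""
    return ALPHA * delta_ohc_joules / (RHO * CP * OCEAN_AREA) * 1000.0

# Example: 1 x 10^22 J added to the ocean
print("%.2f mm per 10^22 J" % thermosteric_rise_mm(1e22))  # roughly 1 mm with these assumptions
```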
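
Regarding the “warm water area” calculation Bob asked about in comments 9 through 14: it is straightforward if you have the gridded monthly anomaly field itself rather than the web-plotter output. Here is a minimal sketch, assuming the Reynolds OI.v2-style anomalies have been downloaded as a NetCDF file (the file name and variable names are placeholders); the key detail is weighting each grid cell by the cosine of its latitude so that high-latitude cells are not over-counted.

```python
# Sketch: fraction of ocean surface area with SST anomaly > 0, per month.
# The file name and variable names are placeholders for a downloaded gridded
# monthly SST anomaly field (land / missing cells assumed to be NaN).
import numpy as np
import xarray as xr

ds = xr.open_dataset("oiv2_sst_anomaly_monthly.nc")  # hypothetical file name
anom = ds["anom"]                                    # hypothetical variable; dims (time, lat, lon)

weights = np.cos(np.deg2rad(ds["lat"]))              # area weights ~ cos(latitude)

ocean = anom.notnull()                               # ocean cells with data
warm = (anom > 0) & ocean                            # cells above the anomaly baseline

warm_fraction = (warm * weights).sum(dim=("lat", "lon")) / (ocean * weights).sum(dim=("lat", "lon"))

# warm_fraction is a monthly time series between 0 and 1
print(warm_fraction.to_series().round(3))
```

Whether the result is physically meaningful is another matter, since, as noted in the exchange above, it depends entirely on where the anomaly baseline (the base climatology years) is set.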
