Why I Remain Uncertain about Global Warming and Climate Change — Part 3 of 4

This is Part 3 of 4 parts, continuing my elaboration of why I remain uncertain about the science behind the theory of global warming and climate change.

What are the warming metrics for water vapor?

Good question. I couldn’t find them.

I suspect that one reason they aren’t given is that the lifetime of water vapor is considered so short, as in days, that the calculated values of GWP and GTP would be zero or very small.

But that completely fails to take into account that the raw amount of water vapor in the atmosphere, the 0.4% fraction, continuously contributes to the greenhouse effect.

Or maybe that’s the point: the global warming metrics attempt to measure only the impact beyond the natural greenhouse effect, capturing just the anthropogenic contribution.

But even then, the simple (and accepted) fact is that if any other greenhouse gas does cause global temperature to rise, more water vapor will be induced into the atmosphere, causing additional warming which is due to… water vapor.

Whatever, the scientists have done a very poor job of articulating this aspect of the greenhouse effect and global warming.

I’ll need to see a significant improvement in the scientific coverage of this aspect of global warming potential, especially from the IPCC, before I could begin to have significant confidence in the theory of global warming and climate change.

No precise measurements of water vapor in the atmosphere?

As far as I can tell, there is no precise measurement or even vaguely accurate estimate of how much water vapor is in the atmosphere.

Admittedly, a key problem is that water vapor is so highly variable based on location, humidity, precipitation, and evaporation.

The closest I can find to even a rough estimate is 0.4% from Wikipedia:

By volume, dry air contains 78.09% nitrogen, 20.95% oxygen, 0.93% argon, 0.04% carbon dioxide, and small amounts of other gases. Air also contains a variable amount of water vapor, on average around 1% at sea level, and 0.4% over the entire atmosphere.

Wikipedia also gives a range of 10 to 50,000 parts per million, noting that “Water vapor strongly varies locally.” 50,000 parts per million would be 5%.

It is disappointing that neither the IPCC nor NOAA provides this data.

Especially since the scientists have already told us that water vapor is not just a greenhouse gas but the dominant greenhouse gas, driving the bulk of the greenhouse effect, contributing two and a half to three times as much as carbon dioxide.

I need my concern addressed before I can have significant confidence in the theory of global warming and climate change.

Why doesn’t NOAA measure water vapor at Mauna Loa?

The NOAA Earth System Research Laboratory at Mauna Loa volcano in Hawaii measures carbon dioxide in the air, but curiously they don’t measure water vapor at the same time.

In fact, their procedure for measuring carbon dioxide first removes all of the water vapor so that the carbon dioxide measurement is for dry air. As their web site says:

Data are reported as a dry air mole fraction defined as the number of molecules of carbon dioxide divided by the number of all molecules in air, including CO2 itself, after water vapor has been removed. The mole fraction is expressed as parts per million (ppm). Example: 0.000400 is expressed as 400 ppm.

It seems perfectly reasonable to me that they should report the water vapor mole fraction at the same time. Granted, it may be noisy data, but it might tell us something, at least over an extended period of time.
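
To illustrate what I have in mind, here is a minimal sketch (in Python, with purely hypothetical molecule counts; this is not NOAA’s actual procedure) of how a dry-air CO2 mole fraction and a companion water vapor mole fraction could be reported side by side:

    # Minimal sketch, not NOAA's procedure: report a dry-air CO2 mole fraction
    # alongside a water vapor mole fraction. All counts below are hypothetical.

    def dry_air_co2_ppm(co2_molecules, total_molecules, h2o_molecules):
        """CO2 mole fraction in ppm after water vapor is removed from the total."""
        dry_total = total_molecules - h2o_molecules
        return co2_molecules / dry_total * 1e6

    def water_vapor_ppm(h2o_molecules, total_molecules):
        """Water vapor mole fraction in ppm of the whole (moist) air sample."""
        return h2o_molecules / total_molecules * 1e6

    # Hypothetical sample of one million molecules of moist air.
    total = 1_000_000
    h2o = 4_000    # roughly the 0.4% global average cited earlier
    co2 = 400      # roughly 400 ppm of the moist sample

    print(f"CO2 (dry air): {dry_air_co2_ppm(co2, total, h2o):.1f} ppm")
    print(f"H2O (moist air): {water_vapor_ppm(h2o, total):.0f} ppm")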

If there is some great reason not to do this, I would like to hear it.

Either way, I need my concern addressed before I can have significant confidence in the theory of global warming and climate change.

What is the global warming impact of carbon dioxide vs. water vapor?

I haven’t been able to find a clear-cut answer to the question of the temperature impact of water vapor relative to carbon dioxide.

GWP and GTP are relative to carbon dioxide, but the tables don’t list water vapor since it isn’t considered an anthropogenic greenhouse gas, even though it is the dominant greenhouse gas.

There are two distinct questions:

  1. What is the total, overall greenhouse effect for a given greenhouse gas?
  2. What is the incremental greenhouse effect for a molecule (or mole or other unit) of a given greenhouse gas?

The former is the focus here.

From Wikipedia, I get the overall fraction of the greenhouse effect for various greenhouse gases:

  1. Water vapor and clouds: 36–72%
  2. Carbon dioxide: 9–26%
  3. Methane: 4–9%
  4. Ozone: 3–7%

A box on that Wikipedia page gives this analysis as well, from a linked paper:

Schmidt et al. (2010) analysed how individual components of the atmosphere contribute to the total greenhouse effect. They estimated that water vapor accounts for about 50% of Earth’s greenhouse effect, with clouds contributing 25%, carbon dioxide 20%, and the minor greenhouse gases and aerosols accounting for the remaining 5%. In the study, the reference model atmosphere is for 1980 conditions.

The midpoint of 36–72% for water vapor and clouds is 54%.

20% for carbon dioxide is 40% of the 50% for water vapor.

Alternatively, the 50% for water vapor is 2.5 times the 20% for carbon dioxide.
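
A quick arithmetic check of those ratios, using only the figures quoted above (a small Python sketch for convenience):

    water_vapor = 0.50   # Schmidt et al. (2010): water vapor share of the greenhouse effect
    co2 = 0.20           # Schmidt et al. (2010): carbon dioxide share

    print(co2 / water_vapor)     # 0.4  -> CO2 is 40% of the water vapor share
    print(water_vapor / co2)     # 2.5  -> water vapor is 2.5 times the CO2 share
    print((0.36 + 0.72) / 2)     # 0.54 -> midpoint of the Wikipedia 36-72% range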

That’s for the total, overall impact, but none of this tells us what the relative, instantaneous global warming impact is of a water molecule compared to a molecule of carbon dioxide, methane, etc.

The 2.5 times factor is consistent with what the IPCC says in FAQ 8.1 in Chapter 8 of the AR5 Physical Science Basis assessment report:

although CO2 is the main anthropogenic control knob on climate, water vapour is a strong and fast feedback that amplifies any initial forcing by a typical factor between two and three. Water vapour is not a significant initial forcing, but is nevertheless a fundamental agent of climate change.

That’s a curious distinction that the IPCC feels a need to draw for the role of water vapor:

  • significant initial forcing vs.
  • fundamental agent of climate change.

I find it so curious that they take the trouble to explain the important role of water vapor but simultaneously struggle to avoid giving that role top billing.

The net effect is to severely constrain my own ability to give any significant credibility to the theory of global warming and climate change. I need to see significant improvement in the science on this front.

Fraction of atmosphere for water vapor vs. carbon dioxide

As per Wikipedia, carbon dioxide comprises about 0.04% of the atmosphere — roughly 400 parts per million.

Water vapor is much more dynamic, varying from near zero in arid regions to 5% in very humid locales, but with a global average of about 0.4%.

So, water vapor is about ten times more prevalent in the atmosphere than carbon dioxide.

Clouds

Clouds are not water vapor per se, but condensed water vapor which forms small droplets of liquid water.

These droplets are what is known as an aerosol.

You can see the aerosol droplets, which is why we can see clouds.

Water vapor will also be present in clouds, but in the form of a gas that cannot be seen with the unaided human eye.

Do clouds make global warming better or worse?

Do clouds reflect sunlight and cool the planet, or do they trap heat and warm the planet? The various answers — from science:

  1. Sometimes they cool.
  2. Sometimes they warm.
  3. Unclear.
  4. Net impact is unknown.

A sample of scientific opinion as to the uncertainty over the net impact of clouds on global warming:

  1. https://www.nsf.gov/news/special_reports/clouds/question.jsp
  2. https://isccp.giss.nasa.gov/role.html
  3. https://www.giss.nasa.gov/research/briefs/delgenio_03/
  4. https://www.skepticalscience.com/clouds-negative-feedback.htm
  5. http://e360.yale.edu/features/investigating-the-enigma-of-clouds-and-climate-change
  6. https://en.wikipedia.org/wiki/Cloud_feedback

Here’s a summary of the “complicated system of climate feedbacks in which clouds modulate Earth’s radiation and water balances” from NASA’s Goddard Institute for Space Studies (GISS):

  • Clouds cool Earth’s surface by reflecting incoming sunlight.
  • Clouds warm Earth’s surface by absorbing heat emitted from the surface and re-radiating it back down toward the surface.
  • Clouds warm or cool Earth’s atmosphere by absorbing heat emitted from the surface and radiating it to space.
  • Clouds warm and dry Earth’s atmosphere and supply water to the surface by forming precipitation.
  • Clouds are themselves created by the motions of the atmosphere that are caused by the warming or cooling of radiation and precipitation.
  • https://isccp.giss.nasa.gov/role.html

The GISS folks go on to say:

If the climate should change, then clouds would also change, altering all of the effects listed above. What is important is the sum of all these separate effects, the net radiative cooling or warming effect of all clouds on Earth. For example, if Earth’s climate should warm due to the greenhouse effect, the weather patterns and the associated clouds would change; but it is not known whether the resulting cloud changes would diminish the warming (a negative feedback) or enhance the warming (a positive feedback). Moreover, it is not known whether these cloud changes would involve increased or decreased precipitation and water supplies in particular regions. Improving our understanding of the role of clouds in climate is crucial to understanding the effects of global warming.

In short, the atmosphere, land, and bodies of water form a complex adaptive system (CAS) with feedback loops so intricate that even the best scientists are unable to provide a clear answer as to whether clouds are a help or a hindrance on the global warming front.

My own read: Clouds are a key part of the planet’s system of checks and balances, which enables us to thrive despite dramatic changes in climate. But that’s just my opinion.

And this comment from the GISS folks is quite disheartening:

The global climate is such a complex system that no one knows how even a small increase in temperature will alter other aspects of climate or how such alterations will influence the rate of warming. Moreover, changes in any of these climatic features may also affect the distribution and properties of clouds, but the understanding of clouds is so rudimentary that no one knows whether climate feedbacks involving clouds will dampen or amplify a warming trend. The possibility that clouds might accelerate global warming brings a special urgency to the ancient problem of understanding the climatic importance of clouds.

Those two statements are worth special emphasis:

  1. The global climate is such a complex system that no one knows how even a small increase in temperature will alter other aspects of climate or how such alterations will influence the rate of warming.
  2. the understanding of clouds is so rudimentary that no one knows whether climate feedbacks involving clouds will dampen or amplify a warming trend.

The folks at NASA, NOAA, and IPCC have some real explaining to do if they want to give me any significant confidence in the theory of global warming and climate change.

The article listed above, entitled “Investigating the Enigma of Clouds and Climate Change” and subtitled “As climate scientists attempt to forecast the future pace of global warming, one of the more complex variables has proven to be the interrelation of clouds and climate change. In an interview with Yale Environment 360, physicist Kate Marvel discusses the double-edged effect clouds have on rising temperatures,” leads with the following disappointing lack of clear answers:

Clouds perform an important function in cooling the planet as they reflect solar energy back into space. Yet clouds also intensify warming by trapping the planet’s heat and radiating it back to earth. As fossil fuel emissions continue to warm the planet, how will this dual role played by clouds change, and will clouds ultimately exacerbate or moderate global warming?

Kate Marvel, a physicist at Columbia University and a researcher at NASA’s Goddard Institute for Space Studies, is investigating the mysteries of clouds and climate change. And while she and her colleagues would like to offer definitive answers on this subject, the fact is that few now exist.

How anybody can be so confident about any theory of global warming and climate change in the face of such dramatic uncertainty in such an important area is a deep mystery to me.

Concrete (cement) production

Not many people realize it, but production of concrete (cement) is a significant source of global carbon dioxide emissions. Beyond simply being energy intensive, cement production involves a chemical reaction which directly releases carbon dioxide. This is distinct from the chemical reaction that occurs when wet concrete sets (hydration and curing).

Section 6.3.2.1 on page 489 of Chapter 6 of the IPCC AR5 Physical Science Basis assessment report gives a figure of 4% for carbon dioxide emissions from cement production for 2000 through 2009. Besides being energy-intensive (heat is needed), production of cement results in emission of carbon dioxide through the calcination of limestone, according to the Intergovernmental Panel on Climate Change (IPCC), the official UN body pursuing global warming and climate change. So, even if fossil-fuel usage for energy were completely eliminated, cement production would still be a concern.

To be fair, the IPCC assessment reports do in fact acknowledge cement production as being a contributor to global atmospheric carbon dioxide in addition to fossil fuel combustion.

Again, even 100% elimination of fossil fuel combustion for energy will not eliminate all anthropogenic sources of carbon dioxide.

Land use

I tentatively accept that land use can impact the amount of carbon dioxide that remains in the atmosphere.

Plants absorb carbon dioxide through photosynthesis, so any land use practice that reduces the global net amount of land covered by plants will reduce the capacity of the planet to absorb the carbon dioxide emitted into the air from combustion of fossil fuels, biofuels, and other hydrocarbons, as well as from cement production and natural gas flaring. The effect is a net increase of greenhouse gases in the atmosphere.

I won’t discuss land use here in any greater detail, not because it doesn’t matter, but because it doesn’t relate to the central matter of this paper which is the extent to which human-generated carbon dioxide is the dominant cause of global warming.

Ocean absorption of carbon dioxide

I tentatively accept that the oceans absorb some amount of carbon dioxide from the atmosphere.

Exactly how much the oceans absorb, at what rate, and what exactly happens to it chemically in the seawater is a more complex matter for which I do not possess either deep knowledge or a strong view.

Nor do I have a position on the degree to which carbon dioxide might reenter the atmosphere from the oceans.

Where exactly all of this leaves us with regard to what fraction of carbon dioxide emissions contribute specifically to the greenhouse effect is a matter beyond my current knowledge.

My net position is that the climate scientists haven’t articulated a clear enough model of what is really going on, the IPCC Physical Science Basis assessment reports notwithstanding. There are still too many gaps and uncertainties. And still far too much speculative hand-waving.

I am not claiming that they are wrong, but I cannot claim that I am persuaded by their narrative.

In short, maybe they have a valid theory, but maybe not.

I cannot accept any belief for which I cannot feel comfortable with the evidence, reasoning, or rationale that is either available and accessible to me or provided to me.

No reliable global temperature record

I’ll elaborate my concerns in the sections that follow, but to make a long story short, my single biggest concern over the theory of global warming and climate change is the simple fact that we don’t have a reliable record for global temperature. Without a reliable record for global temperature, any theory that depends on global temperature will be of dubious quality at best.

Scientists gather discrete temperature measurements from a variety of sources and cobble them together in a veritable crazy patchwork quilt, otherwise called a model, and after significant massaging of the many data series they manage to come up with a single number purporting to be the global temperature of the entire planet.

I am skeptical of this process, to say the least.

Who knows, maybe in the final analysis it all works out, but from where I sit, based on publicly-accessible data and reports, I don’t have a high confidence in the resulting data — or any theory based on it.

That doesn’t mean that the theory is necessarily false or that I personally believe it is false, but simply that I cannot have very high confidence in it at the present time.

Global temperature anomalies

Climate scientists prefer to speak of temperature anomalies rather than absolute temperatures. Mostly this seems to be so that temperature increases around the world can be compared, regardless of the differences in temperature of different regions of the world, or even the differences between valleys and mountains in a specific locale.

They typically choose a baseline period of time, like the 20th century, the past 30 years, or the 30 years between two designated decades, like 1961–1990, 1971–2000, or 1981–2010. Then they calculate the average temperature for each location over that baseline period.

Temperature anomalies are then calculated as the difference between actual, absolute temperature at each location at a particular time minus the baseline temperature for that location.

That’s my understanding, which I have synthesized from my readings. I wasn’t able to find a specific reference in the IPCC assessment reports that details the process as I have described it here.
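
As a concrete illustration of that understanding (my own synthesis, not an official NOAA or IPCC algorithm), here is a minimal sketch with a hypothetical station record:

    # Anomaly = observed absolute temperature minus the station's baseline average.
    # The station record below is hypothetical, for illustration only.

    def baseline_mean(temps_by_year, start, end):
        """Average temperature at one station over the baseline years [start, end]."""
        years = [y for y in temps_by_year if start <= y <= end]
        return sum(temps_by_year[y] for y in years) / len(years)

    # Hypothetical annual mean temperatures (degrees C) at one station.
    station = {year: 14.0 + 0.01 * (year - 1961) for year in range(1961, 2017)}

    base = baseline_mean(station, 1961, 1990)   # e.g. a 1961-1990 baseline
    anomaly_2016 = station[2016] - base         # anomaly for 2016 vs that baseline
    print(round(anomaly_2016, 2))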

NOAA has a web page for Anomalies vs. Temperature, which says:

In climate change studies, temperature anomalies are more important than absolute temperature. A temperature anomaly is the difference from an average, or baseline, temperature. The baseline temperature is typically computed by averaging 30 or more years of temperature data. A positive anomaly indicates the observed temperature was warmer than the baseline, while a negative anomaly indicates the observed temperature was cooler than the baseline. When calculating an average of absolute temperatures, things like station location or elevation will have an effect on the data (ex. higher elevations tend to be cooler than lower elevations and urban areas tend to be warmer than rural areas). However, when looking at anomalies, those factors are less critical. For example, a summer month over an area may be cooler than average, both at a mountain top and in a nearby valley, but the absolute temperatures will be quite different at the two locations.

Using anomalies also helps minimize problems when stations are added, removed, or missing from the monitoring network. The above diagram shows absolute temperatures (lines) for five neighboring stations, with the 2008 anomalies as symbols. Notice how all of the anomalies fit into a tiny range when compared to the absolute temperatures. Even if one station were removed from the record, the average anomaly would not change significantly, but the overall average temperature could change significantly depending on which station dropped out of the record. For example, if the coolest station (Mt. Mitchell) were removed from the record, the average absolute temperature would become significantly warmer. However, because its anomaly is similar to the neighboring stations, the average anomaly would change much less.
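
A small sketch of the point NOAA is making, with made-up station values: dropping the coolest station shifts the average absolute temperature substantially, while the average anomaly barely moves.

    # Hypothetical stations: (absolute temperature for the month, baseline average), in C.
    stations = {
        "valley A": (16.2, 15.8),
        "valley B": (15.9, 15.5),
        "ridge":    (12.1, 11.8),
        "summit":   (5.3, 5.0),    # much cooler site, but a similar anomaly
    }

    def averages(data):
        absolutes = [t for t, _ in data.values()]
        anomalies = [t - b for t, b in data.values()]
        return sum(absolutes) / len(absolutes), sum(anomalies) / len(anomalies)

    print(averages(stations))                                              # all four stations
    print(averages({k: v for k, v in stations.items() if k != "summit"}))  # coolest removed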

The NOAA State of the Climate monthly reports use the term 20th century average, but don’t precisely define the term.

The NOAA temperature Time Series web page, which allows you to display temperature data from any specified period of years, says this:

Please note, Global and hemispheric anomalies are with respect to the 20th century average. Continental anomalies are with respect to the 1910 to 2000 average.

So, my personal inference is that 20th century average refers to 1900 to 2000, or possibly 1901 to 2000, with the exception, as detailed above, that “Continental anomalies are with respect to the 1910 to 2000 average.” Although I wonder whether the starting year there is truly 1910 or 1911. It’s hard to say. I have a strong aversion to a lack of specificity, precision, and detail in general.

More detail and precision will be needed in order for me to have a greater sense of confidence in global temperature data.

Global mean surface temperature (GMST)

Although scientists prefer to report temperature anomalies, the IPCC refers frequently to global mean surface temperature (GMST). As far as I can tell, this is more of an abstract, conceptual reference rather than a reference to actual temperature data. When the actual data is referenced, it is always as a temperature anomaly.

From the IPCC glossary:

Global mean surface temperature — An estimate of the global mean surface air temperature. However, for changes over time, only anomalies, as departures from a climatology, are used, most commonly based on the area-weighted global average of the sea surface temperature anomaly and land surface air temperature anomaly.

There is a distinct lack of clarity as to what the mean is the mean of. Is it highest daily temperature, average daily temperature, mean of temperature as measured periodically during the day, or what?

More specificity will be needed in order for me to have a greater sense of confidence in global temperature data.

What is the accuracy of the baseline average used for global temperature anomalies?

NOAA reports the margin of error for global temperature anomalies, but they don’t report the accuracy of the baseline temperature average that is used when calculating the anomalies.

I would like to see more specificity as to all three margins of error:

  1. The margin of error for the baseline average. Including math for how that margin was calculated from all of the individual annual (or monthly?) margins of error over the baseline period.
  2. The margin of error for the absolute temperatures of the year or month or period being reported.
  3. The combined margin of error for the anomaly of the year or month or period being reported.

I need to see all three error bars.
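
For what it’s worth, a common way to combine two independent margins of error is to add them in quadrature. I don’t know whether NOAA does this, so the sketch below is only an assumption on my part, with hypothetical numbers:

    import math

    # Assumption, not a documented NOAA method: if the baseline average and the
    # reported period each have independent margins of error, the anomaly's
    # combined margin of error could be estimated as the root-sum-square.

    def combined_margin(baseline_moe, period_moe):
        return math.sqrt(baseline_moe ** 2 + period_moe ** 2)

    # Hypothetical values: 0.10 C for the baseline, 0.15 C for the reported year.
    print(round(combined_margin(0.10, 0.15), 2))   # about 0.18 C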

And as mentioned earlier in the section on global temperature anomalies, greater clarity is needed as to precisely what years (or months) are included in the baseline time period.

The IPCC AR5 Technical Summary makes this statement:

Relative to the 1961–1990 mean, the GMST anomaly has been positive and larger than 0.25°C since 2001.

But they fail to give the margin of error for the referenced baseline period.

They also use mean rather than average for that baseline period. Or maybe they simply meant to say average mean. Difficult to say.

I also need to see some discussion of how the margin of error for the pre-1958 portion of the 20th century impacts the overall margin of error for baseline period. And how the post-1958 margin of error compares to the pre-1958 margin of error.

I need to see the full list of error bars across the full length of the baseline period.

More specificity will be needed in order for me to have a greater sense of confidence in global temperature data.

The historic temperature record changes when the baseline period is revised

Normally, one would presume that historic data is just that, an immutable (unchanging) record of the past, but with temperature data it gets complicated since temperatures are reported as anomalies or differences from a baseline time period. That would be fine if the baseline period was stable, but scientists like to shift to a new baseline period as the decades pass by.

Each time the baseline period is revised, that means the historic record of temperature anomalies must be updated using the revised baseline in the re-calculation of the anomalies.

In the 1990’s the baseline period by definition could end no later than 1990.

But once we were in the 21st century, it became appealing to use a revised baseline period that ended in 2000.

NOAA appears to be using “the 20th century average” for the baseline at present. Obviously, in the 1990’s they used an older baseline period.

Some climate reports use a 30-year baseline period, so that ten years ago they used the 1971 to 2000 period, but now they use the 1981 to 2010 period. Five to ten years from now they will shift to the 1991 to 2020 period, re-calculating the entire reported record of temperature anomalies against the new baseline in the process.

To be clear, the actual sensor data (instrument readings) won’t have changed, just the anomaly calculations.
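
Here is a minimal sketch of that effect, using a synthetic station record: the absolute readings never change, but every reported anomaly shifts when the baseline period is revised.

    # Synthetic absolute temperatures (degrees C), 1971 through 2020.
    absolute = {1971 + i: 14.0 + 0.012 * i for i in range(50)}

    def rebaseline(absolute_temps, start, end):
        """Re-compute all anomalies against the average over [start, end]."""
        years = [y for y in absolute_temps if start <= y <= end]
        base = sum(absolute_temps[y] for y in years) / len(years)
        return {y: round(t - base, 3) for y, t in absolute_temps.items()}

    old_anomalies = rebaseline(absolute, 1971, 2000)   # older baseline period
    new_anomalies = rebaseline(absolute, 1981, 2010)   # revised baseline period

    # Same year, same sensor readings, different reported anomaly.
    print(old_anomalies[1998], new_anomalies[1998])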

Dubious quality of measures of earth temperature

I’ve previously cited the NOAA global temperature data.

There may be other global temperature datasets, but the three cited by the IPCC are:

  1. Hadley Centre/Climatic Research Unit — HadCRUT4.
  2. NOAA Merged Land–Ocean Surface Temperature Analysis — MLOST.
  3. NASA Goddard Institute for Space Studies Surface Temperature Analysis — GISTEMP.

The Hadley Centre/Climatic Research Unit dataset (HadCRUT4) comes from the UK.

The NOAA Merged Land–Ocean Surface Temperature Analysis (MLOST) is the global temperature dataset used by the IPCC.

But as of May 2015, NOAA has switched to the Merged Land Ocean Global Surface Temperature Analysis (NOAAGlobalTemp) dataset.

As per NOAA:

Global Surface Temperature Data Transition

Effective with this (May 2015) monthly climate report, NCEI transitions to updated versions of its land and ocean surface temperature datasets. When combined, these merged datasets are now known as NOAAGlobalTemp (formerly MLOST).

We’ll have to wait until the IPCC issues its next assessment report, AR6, in 2022, before the IPCC uses this updated dataset.

The GISTEMP global temperature dataset comes from NASA’s Goddard Institute for Space Studies (GISS).

As I’ve noted earlier, scientists don’t report absolute temperature, but a difference from a baseline temperature and they call this the temperature anomaly.

That’s all well and good, but I have a lot of concerns.

The basic methodology is to take a relatively small number of actual temperature measurements (okay, thousands, but small relative to the millions of square miles of the surface of the planet), called station data, and then feed them into a mathematical model to calculate a single, discrete temperature for the entire planet. I’m deeply skeptical about the accuracy that is claimed for such a result.

For the most part I will concentrate my attention on the NOAA NOAAGlobalTemp dataset, although I expect that my questions and concerns are the same with any other global temperature dataset.

NOAA calculates temperature in several parts before combining them into a single value:

  1. Northern hemisphere land.
  2. Northern hemisphere ocean.
  3. Southern hemisphere land.
  4. Southern hemisphere ocean.
  5. Northern hemisphere land and ocean.
  6. Southern hemisphere land and ocean.
  7. Global land.
  8. Global ocean.
  9. Global land and ocean. The single temperature of the entire planet.

NASA GISTEMP does not drill down to that level of detail.

NOAA also has calculations (models) for:

  1. Africa
  2. Asia
  3. Europe
  4. North America
  5. Oceania
  6. South America

I personally check the Arctic temperature reported by the Danish Meteorological Institute, but they don’t combine the daily measurements into a single average for the year.

I personally haven’t poked around enough to find Antarctic temperature data (actually, I did find some in the Antarctic Temperature section), but I strongly suspect that it does exist. Kind of odd that NOAA doesn’t have it.

My concerns:

  1. Dubious precision of global temperature.
  2. Primarily the output of complex models rather than actual measurement.
  3. Too few actual measurements fed into models. Not enough weather stations and ocean data buoys.
  4. Too much of Earth not measured at all. See subsequent section.
  5. No empirical validation possible for the resulting estimate of temperature of the planet as a whole.
  6. Dubious statements such as “missing data filled in by statistical methods” when the claim is that global warming is “settled” and “beyond dispute.”
  7. Too much of a hodgepodge of measurement methods — land weather stations, ocean data buoys, satellite remote sensing, some ships — and many gaps. No reliable way to combine those disparate measures. No way to empirically validate the combined data. Lots of judgment required, which may be valid, but who knows for sure?
  8. Dubious quality of older temperature data. Long before modern data sensors. Long before NOAA data buoys (1970’s.)
  9. Dubious to compare older data with modern data. Differences in methodologies. Differences in sensors.
  10. Are seasonal temperatures affected equally? I know the Arctic is not significantly warmer in summer since 1958, but is significantly warmer in winter.
  11. Confusion over conflicting assessments of the so-called “hiatus” or “stalling” or “pause” of the warming trend between 1998 and 2012.
  12. Change in modeling methodology in 2015. Difficult to determine the true motive, which raises suspicion that the change was driven by a desire to disprove (or cover up?) the alleged hiatus of warming from 1998 to 2012. More details on the hiatus in coming sections.
  13. Yet another set of updates to the NOAA Extended Reconstructed Sea Surface Temperature (ERSSTv5) dataset in July 2017. This is part of the data that goes into the data that produces the NOAAGlobalTemp dataset. Previous updates to ERSST in May 2015. This pace of data and methodology updates is disconcerting to me for something that claims to be “settled science.”

I really need to see all of my questions and concerns addressed fully and to my satisfaction before I can have any significant confidence in the science of global warming and climate change.

NOAA Merged Land-Ocean Surface Temperature Analysis (MLOST)

MLOST is the global temperature dataset (what I personally call a model) that NOAA used prior to May 2015. They have now upgraded to a new global temperature dataset (model) called NOAAGlobalTemp.

MLOST is still very significant since it was the current NOAA temperature model when the most recent IPCC assessment report, AR5, was produced in 2013. I presume that the next, sixth assessment report, AR6, will switch to NOAAGlobalTemp, but that’s only a presumption on my part.

From NOAA:

The Merged Land–Ocean Surface Temperature Analysis (MLOST) is a spatially gridded (5º x 5º) global surface temperature dataset, with monthly resolution from January 1880 to present. NCEI combines a global sea surface (water) temperature (SST) dataset with a global land surface air temperature dataset into this merged dataset of both the Earth’s land and ocean surface temperatures. The SST dataset is the Extended Reconstructed Sea Surface Temperature (ERSST) version 3b. The land surface air temperature dataset is similar to ERSST, but uses data from the Global Historical Climatology Network Monthly (GHCNM) database.

NCEI provides the MLOST dataset as temperature anomalies, relative to a 1971–2000 monthly climatology, following the World Meteorological Organization convention. The MLOST anomalies and error fields are available from NCEI’s FTP area and Climate Monitoring group.

It uses 1971–2000 as the 30-year baseline period to calculate the temperature average on which temperature anomalies are calculated.

I provide this information here for convenient reference, completeness, and to indicate that I am aware of it.

NOAA Merged Land Ocean Global Surface Temperature Analysis Dataset (NOAAGlobalTemp)

NOAAGlobalTemp is the global temperature dataset (what I call a model) that NOAA has used since May 2015. Previously they used the MLOST dataset.

Even though NOAAGlobalTemp is the most current, MLOST is still very significant since it was the current NOAA temperature model when the most recent IPCC assessment report, AR5, was produced in 2013. I presume that the next, sixth assessment report, AR6, will switch to NOAAGlobalTemp, but that’s only a presumption on my part.

From NOAA:

The NOAA Merged Land Ocean Global Surface Temperature Analysis Dataset (NOAAGlobalTemp) is a merged land–ocean surface temperature analysis (formerly known as MLOST). It is a spatially gridded (5° × 5°) global surface temperature dataset, with monthly resolution from January 1880 to present. We combine a global sea surface (water) temperature (SST) dataset with a global land surface air temperature dataset into this merged dataset of both the Earth’s land and ocean surface temperatures, currently as version v4.0.1. The SST dataset is the Extended Reconstructed Sea Surface Temperature (ERSST) version 4.0. The land surface air temperature dataset is similar to ERSST but uses data from the Global Historical Climatology Network Monthly (GHCN-M) database, version 3.3.0.

We provide the NOAAGlobalTemp dataset as temperature anomalies, relative to a 1971–2000 monthly climatology, following the World Meteorological Organization convention. The anomalies and error fields are available from the FTP area and the Global Temperature and Precipitation Maps page.

A peer-reviewed paper, “Improvements to NOAA’s Historical Merged Land–Ocean Surface Temperature Analysis (1880–2006),” describes the NOAAGlobalTemp processing principles and procedures. The current version (NOAAGlobalTemp v4.0.1) uses ERSST v4.0, which includes only in situ SST data, for historical consistency. Data for the previous month are now available on the third day of the current month.

It continues to use 1971 to 2000 as the baseline period to calculate the temperature average on which temperature anomalies are calculated, so they should still be at least somewhat comparable to the previous anomalies calculated for the MLOST temperature dataset (model.)
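
Below is my reading of how such a gridded dataset could be reduced to one global number: an area-weighted average of the 5° × 5° box anomalies, weighting by the cosine of latitude and skipping boxes with no observations. This is only a sketch of the general idea, not NOAA’s code.

    import math

    def global_mean_anomaly(grid):
        """grid maps (lat_center, lon_center) -> anomaly in C, or None if the box has no data."""
        weighted_sum = 0.0
        weight_total = 0.0
        for (lat, _lon), anomaly in grid.items():
            if anomaly is None:                    # skip grid boxes without observations
                continue
            weight = math.cos(math.radians(lat))   # boxes near the equator cover more area
            weighted_sum += weight * anomaly
            weight_total += weight
        return weighted_sum / weight_total

    # Three hypothetical grid boxes: tropical, near-polar, and one with no data.
    example = {(2.5, 2.5): 0.6, (82.5, 2.5): 2.0, (42.5, 102.5): None}
    print(round(global_mean_anomaly(example), 3))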

I provide this information here for convenient reference, completeness, and to indicate that I am aware of it.

NASA GISS Surface Temperature Analysis (GISTEMP)

NASA’s Goddard Institute for Space Studies (GISS) produces the GISTEMP global temperature dataset (what I call a model):

The GISS Surface Temperature Analysis (GISTEMP) is an estimate of global surface temperature change. Graphs and tables are updated around the middle of every month using current data files from NOAA GHCN v3 (meteorological stations), ERSST v5 (ocean areas), and SCAR (Antarctic stations), combined as described in our December 2010 publication (Hansen et al. 2010).

Note that NASA is using NOAA temperature sensor data, just with a somewhat different analysis of that data.

I provide this information here for convenient reference, completeness, and to indicate that I am aware of it.

HadCRUT4 global temperature dataset

The IPCC uses the HadCRUT4 temperature dataset (what I call a model) extensively. Sometimes they use the NOAA and NASA datasets as well.

As per Wikipedia:

HadCRUT is the dataset of monthly instrumental temperature records formed by combining the sea surface temperature records compiled by the Hadley Centre of the UK Met Office and the land surface air temperature records compiled by the Climatic Research Unit (CRU) of the University of East Anglia.

The data is provided on a grid of boxes covering the globe, with values provided for only those boxes containing temperature observations in a particular month and year. Interpolation is not applied to infill missing values. The first version of HadCRUT initially spanned the period 1881–1993, and this was later extended to begin in 1850 and to be regularly updated to the current year/month in near real-time.

The official sites for this dataset:

I personally haven’t dived deep into this dataset at all, so I cannot speak with any authority about it. That said, my lack of knowledge prevents me from having any great confidence in its ability to accurately reflect the global temperature. At least a fair number of my concerns with the NOAA and NASA temperature datasets (models) are probably valid for HadCRUT as well.

Separate global land and ocean temperature datasets

Land and ocean temperatures are two different concepts, are measured differently, and have two different datasets (what I call models), even if the two datasets are ultimately combined into a single dataset (model) for global temperature across both land and ocean.

Both NOAA and NASA use the same underlying land and ocean temperature datasets:

  1. GHCN for land temperature. Global Historical Climatology Network Monthly (GHCN-M) database.
  2. ERSST for ocean (sea) temperature. Extended Reconstructed Sea Surface Temperature (ERSST) dataset.

As per NOAA:

We combine a global sea surface (water) temperature (SST) dataset with a global land surface air temperature dataset into this merged dataset of both the Earth’s land and ocean surface temperatures, currently as version v4.0.1. The SST dataset is the Extended Reconstructed Sea Surface Temperature (ERSST) version 4.0. The land surface air temperature dataset is similar to ERSST but uses data from the Global Historical Climatology Network Monthly (GHCN-M) database, version 3.3.0.

That’s the principle. The specific versions of the datasets are updated on occasion. Since that text was written by NOAA, both NOAA and NASA have moved to ERSST v5.

I provide this information here for convenient reference, completeness, and to indicate that I am aware of it.

I do have a serious concern about the wisdom and mechanics of combining two such disparate datasets. It’s not the same as combining datasets for two distant continents; weather and climate over land and water are very different phenomena.

Is it a dataset or a model?

Data is data, whether its source was a direct measurement, a simple conversion, a simple calculation, or a complex modeling or analysis process. So, technically, even the output of a complex model is still simply a dataset.

That said, I try to distinguish raw data or actual measurements from the output of a modeling or analysis process.

I accept that the output of any global temperature modeling (analysis) process is still going to be called simply a dataset, but I will continue to refer to the model or modeling process (even if it is referred to as analysis) which produces such a dataset.

There is never any guarantee that modeling or analysis will necessarily be consistent with the actual physical world. That’s why science has the concept of empirical validation, to confirm that the output of a model is consistent with the real physical world.

Temperature simulation model vs. temperature model

Sometimes people use the term model to refer to a simulation of future behavior of a system.

Technically, the term model can be used both for analysis of present or past data and for prediction of future data.

Technically, I would refer to the latter as a temperature simulation model, and to the former as simply a temperature model.

I recognize and accept that not all organizations and scientists may use these same terms in this same way, but this is how I use them in this paper.

Global temperature datasets and models

The climate science community refers to their calculations of global temperature data as a dataset or analysis, while I refer to it as a model.

I generally reserve the term dataset for actual sensor data, raw data. So, for the many temperature sensors around the globe, we have thousands of independent datasets or data series, which then must be combined using some sort of analysis, which I refer to as a model.

The process by which NOAA or NASA or any other scientific organization combines and merges those many actual data series, including and especially adjustments, interpolations, extrapolations, and other forms of analysis, is what I call a model. Not to be confused with prediction models or simulation models used to forecast or predict temperature years and decades in the future, but models nonetheless.

Not to be critical of the scientists per se, but combining land and ocean temperature data is a non-trivial process. They should in fact be applauded for their efforts, but nonetheless, that level of processing of the raw data, as well as some number of assumptions about how to combine the data, really does warrant referring to the result here as a model rather than simply a dataset.

And the simple fact that not all of the individual temperature sensors use the same measurement technology, nor have they all been actively reporting data for the full duration of the reporting history, also seems to warrant referring to the process as a model rather than asserting that it is actual measured data.

Technically, even the output of a modeling process can and should be called a dataset or data series as well, but I personally think it is important to call attention to the fact that the resulting modeled temperature data is not actual, raw, measured temperature data directly from sensors.

Put more simply, there is no physical location on the surface where you can go and literally measure the temperature of the planet that the models claim to be producing.

No empirical validation possible for global temperature

Even if you have great confidence in the sensor data for temperature and really believe in the modeling process used to derive that single number for global temperature of the entire planet, the essential problem is that it simply isn’t possible to empirically validate that number.

There is no place you can go and make a single, simple measurement to compare against that modeled number.

There is no satellite that can make such a measurement.

Not even the DSCOVR satellite parked a million miles out at the L1 Lagrange point (originally called Triana, based on Al Gore’s 1998 vision of a 24/7 video eye in the sky) can make such a measurement.

And without empirical validation, the theory cannot be validated.

And if the theory cannot be validated, it can never be settled science.

Dubious model of Earth temperature

I have some more general, abstract concerns with how one would go about modeling the temperature of the Earth:

  1. Half the planet is in sun and half in darkness at any given time. What’s the model of total planet surface temperature in that situation?
  2. How to model temperature across seasons?
  3. How to model temperature as planetary precession progresses?
  4. How to adjust or model satellite-based measurements based on orbits that are not always the same precise altitude or pass over the same precise points at the same precise times of day?
  5. How to model temperature when measurements from ships are not in the same place on successive measurements or over extended periods of time?
  6. How to model temperature when measurements from drifting buoys are not in the same place on successive measurements or over extended periods of time?
  7. How to properly measure sea surface temperature in littoral areas (shallow water) when tides result in significant changes in depth and lateral movement of masses of water?

Maybe scientists have answers to these concerns, but the lack of publicly accessible answers is a big concern in itself. I will be unable to have any significant confidence in the theory of global warming and climate change if my concerns cannot be addressed — to my satisfaction.

Dubious precision of global temperature

My main problem with the data concerning global temperature is the dubious claims about its precision. Given all that we know about the difficulty of measuring just about anything on a global scale, I simply don’t find claims about the precision of global temperature to be even close to being credible.

Do we really know global temperature within 0.1 C? Really?!!

The NOAA data, which I accept as being probably the best we can do, gives the global temperature anomaly to two decimal digits, down to 0.01 C, which to me seems absurdly unreasonable.

In their latest annual report, NOAA reports a global temperature anomaly of 0.94 C for 2016, with a margin of error of 0.15 C, meaning that the true anomaly might be as low as 0.79 C or as high as 1.09 C.

Curiously, the margin of error for 2015 was only 0.08 C, for an anomaly of 0.90 C.

Was 2016 warmer than 2015? Superficially yes, but technically the scientists are obligated to say that it is uncertain whether 2016 was warmer since the difference falls within the margin of error.

For reference, the anomaly in 2014 was 0.69 C with a margin of error of 0.09 C.

I would have expected that the margin of error would decline from year to year as technology, operations, and methodology improve, as it did from 2014 to 2015, but to see it rise from 2015 to 2016, to almost double, raises concern. To be clear, there was no discussion or explanation offered for this unexpected change in margin of error.
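
For reference, here is a simple overlap check on the numbers quoted above. The way I combine the two margins of error (in quadrature) is my own assumption, not NOAA’s stated method:

    import math

    anomaly_2015, moe_2015 = 0.90, 0.08   # NOAA figures quoted above, in C
    anomaly_2016, moe_2016 = 0.94, 0.15

    difference = anomaly_2016 - anomaly_2015
    combined_moe = math.sqrt(moe_2015 ** 2 + moe_2016 ** 2)

    print(round(difference, 2), round(combined_moe, 2))
    if difference > combined_moe:
        print("2016 was warmer than 2015 beyond the margin of error")
    else:
        print("the difference falls within the margin of error")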

It is worth noting that land and ocean temperature anomalies have distinct margins of error, which get combined to give a joint margin of error for global temperature. So the margins of error for these four latest years are:

  • 2013: Land: 0.19 C, Ocean: 0.03 C, Combined: 0.09 C
  • 2014: Land: 0.20 C, Ocean: 0.04 C, Combined: 0.09 C
  • 2015: Land: 0.18 C, Ocean: 0.01 C, Combined: 0.08 C
  • 2016: Land: 0.15 C, Ocean: 0.16 C, Combined: 0.15 C

Curious how the precision was greater for the ocean in 2014 and 2015, but better for land in 2016 and so much worse for ocean in 2016.

Even more shocking to me, NOAA is claiming that their margin of error for ocean in 2015 was an amazingly low 0.01 C — just one hundredth of a degree! Really?!! That does not seem at all credible to me.

And is it really true that their precision for ocean temperature in 2016 was sixteen times worse than in 2015? What exactly happened?

My concerns will need to be addressed for me to have any significant confidence in the global temperature data, without which I cannot have any confidence in the theory of global warming and climate change.

Why was ocean temperature accuracy 0.01 C in 2015 but 0.16 C in 2016?

I’m not sure which is the bigger and more significant question or problem:

  1. How was NOAA able to measure (model) global ocean temperature to within 0.01 C in 2015?
  2. Why was NOAA then only able to measure (model) global ocean temperature to sixteen times that margin of error, 0.16 C, in 2016?
  3. What happened to cause and account for the dramatic shift?

Somebody has some explaining to do.

And in case anybody thinks that 2015 was too much of a fluke, the margins of error of the ocean temperature anomalies for 2013 and 2014 were 0.03 C and 0.04 C respectively, which are multiples of the 2015 figure but still a small fraction of the 2016 figure.

Without a great and credible explanation for these questions, I remain unable to marshal any significant confidence in the global temperature data, which is crucial and essential in order to have true confidence in the theory of global warming and climate change.

What is the natural variability of global temperature?

Scientists, activists, and science communicators repeat a mantra that the temperature anomalies are not due to natural variability, but don’t bother quantifying natural variability: an actual number, along with the science and math used to derive that number.

Who knows, maybe there is hidden science that derives that number.

What I do know is that there is no publicly accessible science that provides this information, which is all that matters in my book.

How does solar rotation affect incoming solar radiation?

The sun rotates on its own axis while the Earth revolves around the sun, each with a different angular velocity, so the two are not in sync. Whether or how this affects the amount of incoming solar radiation is unclear.

The rate of rotation of the sun varies from around 24 to 38 Earth days, depending on solar latitude. Literally, different latitudes of the sun rotate at different rates. In essence, the surface of the sun is constantly changing.

I would like to hear how climate scientists take this dynamic condition into account.

And this is in addition to any normal variability at any particular latitude of the sun.

And then there are sunspots.

The IPCC talks a little about solar irradiance in section 1.2.2 of Chapter 1 of the Physical Science Basis AR5 assessment report:

Changes in the net incoming solar radiation derive from changes in the Sun’s output of energy or changes in the Earth’s albedo. Reliable measurements of total solar irradiance (TSI) can be made only from space, and the precise record extends back only to 1978. The generally accepted mean value of the TSI is about 1361 W m−2 (Kopp and Lean, 2011; see Chapter 8 for a detailed discussion on the TSI); this is lower than the previous value of 1365 W m−2 used in the earlier assessments. Short-term variations of a few tenths of a percent are common during the approximately 11-year sunspot solar cycle (see Sections 5.2 and 8.4 for further details). Changes in the outgoing LWR can result from changes in the temperature of the Earth’s surface or atmosphere or changes in the emissivity (measure of emission efficiency) of LWR from either the atmosphere or the Earth’s surface. For the atmosphere, these changes in emissivity are due predominantly to changes in cloud cover and cloud properties, in GHGs and in aerosol concentrations. The radiative energy budget of the Earth is almost in balance (Figure 1.1), but ocean heat content and satellite measurements indicate a small positive imbalance (Murphy et al., 2009; Trenberth et al., 2009; Hansen et al., 2011) that is consistent with the rapid changes in the atmospheric composition.

In figure 1.1 of that same chapter, IPCC does acknowledge that there are natural fluctuations in solar output:

Natural fluctuations in solar output (solar cycles) can cause changes in the energy balance (through fluctuations in the amount of incoming SWR [solar shortwave radiation])

Still, it doesn’t feel that the matter has been adequately covered. It is too much of a hand-wave for my taste.

To be clear, there are two separate issues that need to be addressed:

  1. Natural variability of solar output, as in solar cycles.
  2. Variability that results from solar rotation.

It is the latter that I have focused on here.

How accurately can scientists measure the temperature of the Earth?

Seriously, as a simple question, a practical matter, if you could ask scientists how accurately they could measure the temperature of the Earth, how accurately would they say they could do it? Without peeking at the global temperature datasets to see what NOAA, NASA, et al are actually claiming.

  • To 0.1 C (which is essentially the implied claim right now)?
  • To 0.01 C (as they claimed for the ocean in 2015)?
  • To 1 C?
  • To 2 C?
  • To 5 C?

Just off the cuff, I would presume that a precision of 5 C would be a no-brainer. Although, who knows, maybe even that is not technically possible.

2 C seems like a reasonable goal, but… how much do we know about how precisely they can actually measure?

And measurement raises the question of what is actually measured as opposed to modeled.

The main problems I have with global temperature modeling:

  1. How many measurements go into the model?
  2. How much coverage do those measurements provide?
  3. How precise (and accurate) is any interpolation between measurements?
  4. How precise (and accurate) are actual measurements?
  5. How are measurements actually combined in the model?
  6. How does precision evolve as measurements are combined?
  7. How are incomplete data series dealt with — older series without recent data and newer series without older data?
  8. How to cope with different precisions for different data series and different measurement technologies?
  9. How to cope with differences between land and ocean?
  10. How to cope with calibration of measurement instruments?
  11. How to cope with night vs. day — half the Earth is in sun, half not, so how is temperature for the whole planet even defined?

Personally, I am not convinced that even the best scientists with the best technology can calculate the global temperature to 1 C precision, let alone 0.1 C or 0.01 C.

The ultimate problem here is that if they can’t measure, calculate, or model temperature to within 2 C, then all of the reported anomalies would be within that margin of error.

Even for a 1 C margin of error, all of the reported temperature anomalies would be within the margin of error.

So, as far as I am concerned, the jury is still out as to whether reported temperature anomalies are significantly greater than the margin of error for the modeling regime that is being used by even the best climate scientists.

Is global temperature based on seawater at the surface or air above the water?

Scientists refer to SST or Sea Surface Temperature when discussing how global temperature is measured and modeled, but I haven’t been able to find any clarification whether they are measuring water at the surface or air right at the surface.

In the case of land, I presume they are measuring air temperature above the surface.

In both cases, I would like to know what height above the surface (or depth below the surface, for seawater) they use. And how standardized, precise, or variable that height is.

And for ship-based temperature measurements, exactly where the temperature is measured relative to the sea surface.

I haven’t found publicly-accessible discussion and details on this matter. My concern is that such details could affect the results, especially if there is variability or inconsistency over time. These issues affect confidence in the science.

Change in global temperature modeling methodology in 2015

I am especially sensitive to changes in methodology, either strategic or tactical.

In May 2015 NOAA made a major change in methodology, which seemed motivated by a desire to hide the claimed hiatus of global warming from 1998 through 2012. NOAA was never completely forthcoming about the motivation for their changes. They described the details of the changes, but not the motivation.

In their Global Climate Report for May 2015, NOAA notes:

Note: With this report and data release, the National Centers for Environmental Information is transitioning to improved versions of its global land (GHCN-M version 3.3.0) and ocean (ERSST version 4.0.0) datasets. Please note that anomalies and ranks reflect the historical record according to these updated versions. Historical months and years may differ from what was reported in previous reports. For more, please visit the associated FAQ and supplemental information.

Wow, read this line again:

Historical months and years may differ from what was reported in previous reports.

Their updates and improvements are actually changing the entire historic data record!

With these changes they completely rewrite the historic data. Can they do that? Is that legal? Is that sane? Is that good science?

Technically, they didn’t change the true data, the temperature sensor readings, at all, just the output data of their global temperature models.

Temperature data exists at two levels:

  1. The actual temperature sensor readings from individual weather stations. To me, this is the real or true data.
  2. The results or output data produced by the global temperature models, which take the real weather station data, massage and otherwise adjust it, and then combine all of that adjusted data using algorithms that piece, fit, interpolate, and extrapolate it into a seemingly uniform and consistent model of reality. These are the final results that they report.

To be clear, they didn’t change the sensor data, but they did change the model results.

For more technical detail:

Effective with this (May 2015) monthly climate report, NCEI transitions to updated versions of its land and ocean surface temperature datasets. When combined, these merged datasets are now known as NOAAGlobalTemp (formerly MLOST). This page provides answers to some questions regarding the updated data.

Extended Reconstructed Sea Surface Temperature Version 4 (ERSST v4) dataset

As previously noted, NOAA global temperature modeling switched to using data from the Extended Reconstructed Sea Surface Temperature Version 4 (ERSST v4) dataset in 2015. I consider ERSST to be a model rather than pure measurement of temperature.

Some interesting tidbits:

This new version contains several enhancements over the previous version (version 3b), which has been in operation since 2008. Greater coverage in high-latitude ice-free oceans, updated sea ice data, and improved ship bias corrections are among the many enhancements the new dataset provides.

One of the most significant improvements involves corrections to account for the rapid increase in the number of ocean buoys in the mid-1970s. Prior to that, ships took most sea surface temperature observations. Several studies have examined the differences between buoy- and ship-based data, noting that buoy measurements are systematically cooler than ship measurements of sea surface temperature. This is particularly important because both observing systems now sample much of the sea surface, and surface-drifting and moored buoys have increased the overall global coverage of observations by up to 15%. In ERSST v4, a new correction accounts for ship-buoy differences thereby compensating for the cool bias to make them compatible with historical ship observations.

The new version of ERSST shows the same trend in sea surface temperature as previous versions — an increase in the global average of 0.005°C per decade since 1880. However, the changes in the new version, including the adjustments to account for more buoys, did result in a higher trend in global ocean temperature since 2000. In ERSST v4, the rate of increase in global ocean temperature is 0.099°C per decade, while it was 0.036°C per decade with the previous version.

That’s quite significant:

the changes in the new version, including the adjustments to account for more buoys, did result in a higher trend in global ocean temperature since 2000.

In other words, by changing the model they changed the historic record.

I’m not exactly comfortable with that. It does not improve my confidence in the manipulated temperature record.

Extended Reconstructed Sea Surface Temperature (ERSSTv5) dataset

In July 2017 NOAA introduced a further batch of changes to their Extended Reconstructed Sea Surface Temperature (ERSSTv5) dataset. As before, I consider ERSST to be a model rather than pure measurement of temperature.

It “incorporates more comprehensive ocean surface temperature data collected since the last update in 2015.” That previous update was ERSST v4.

It adds “A decade of near-surface sea temperature data from Argo floats.” So it adds new temperature readings going forward and includes the past ten years of those readings.

It also incorporates “The latest international comprehensive ocean–atmosphere dataset (ICOADS R3.0).”

It’s great that they are updating their science, but an update barely two years after the previous one doesn’t instill great confidence in the quality of the data. So much for the science being settled. How long until the next update? And how many updates after that before the modeling methodology settles down? I really am anxious to see truly settled science, but we’re not there yet.

To be clear, this is the basic data that is needed to give the annual (and monthly) global temperature data.

How much confidence can we have in temperature data and methodology that is changing so frequently?

International Comprehensive Ocean-Atmosphere Data Set (ICOADS)

As per NOAA:

The ICOADS is the world’s most extensive surface marine meteorological data collection. Building on extensive national and international partnerships, ICOADS provides thousands of users with easy access to many different data sources in a consistent format. Data sources range from early non-instrumental ship observations to more recent measurements from automatic measurement systems including moored buoys and surface drifters. Each new release of ICOADS has enriched spatial and temporal coverage and data and metadata quality has been improved.

ICOADS meets the needs of different types of users by providing both individual observations and also monthly gridded summary data. Monthly updates make sure the dataset is kept up to date.

Their first observations date from 1662 (yeah, the 17th century).

They have 455 million observations in over ten terabytes of data.

Yeah, there’s a lot of data there, but I am not persuaded of its completeness and consistency over extended periods of time.

It may well be the best the scientists can do, but that doesn’t give me great confidence.

Global weather stations used for global temperature modeling

Global temperature is modeled by combining temperature data from thousands of weather stations around the globe.

For information on NASA GISS GISTEMP weather stations:

NASA GISS GISTEMP uses the NOAA Global Historical Climatology Network-Monthly (GHCN-M) temperature dataset:

As per NOAA:

The Global Historical Climatology Network-Monthly (GHCN-M) temperature dataset was first developed in the early 1990s (Vose et al. 1992). A second version was released in 1997 following extensive efforts to increase the number of stations and length of the data record (Peterson and Vose, 1997). Methods for removing inhomogeneities from the data record associated with non-climatic influences such as changes in instrumentation, station environment, and observing practices that occur over time were also included in the version 2 release (Peterson and Easterling, 1994; Easterling and Peterson 1995). Since that time efforts have focused on continued improvements in dataset development methods including new quality control processes and advanced techniques for removing data inhomogeneities (Menne and Williams, 2009). Effective May 2, 2011, the Global Historical Climatology Network-Monthly (GHCN-M) version 3 dataset of monthly mean temperature has replaced GHCN-M version 2 as the dataset for operational climate monitoring activities. The dataset is available at ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/.

I haven’t dived deeply into the GHCN-M temperature data yet. I can’t speak with any authority about this data yet, but by the same token my lack of knowledge means that I am unable to claim great confidence in it either.

I simply haven’t seen enough public discussion of this data to form any firm views about it. In truth, I haven’t seen any public discussion of it. I found it on my own while reading through the GISS web site.

I include this information here for convenient reference, completeness, and to indicate that I am aware of it.

Do we have enough weather stations to accurately model global temperature?

According to NASA GISS (or my reading of their charts, I should say), as of 2016, about 20% of the area of the northern hemisphere is not within 1200 km (746 miles) of a weather station, and about 27% of the area of the southern hemisphere is not within that same range of a weather station. Is that a lot? Who’s to say, but it does raise some concern on my part.

NASA GISS uses 5,983 weather stations out of 7,364 total stations. Only about 2,200 of those stations were still reporting as of 2016.

One of the GISS charts highlights an interesting issue — not all of the stations have been around for the same period of time. In fact, many of them are fairly new. And, many of the older stations are no longer in service. That accounts for the low number of only 2,200 of them reporting out of 7,364 total stations. To me, what this really means is that the older data is not necessarily directly comparable to the newer data. Again, the scientists may have a hand-waving argument to dismiss that objection, but I remain unpersuaded based on the information that is publicly available to me at this time.

23% of the planet (20% northern, 27% southern) is not within 746 miles of a weather station. Each such area represents roughly 1,500 x 1,500 = 2.25 million square miles. That seems like a lot of area with light or even nonexistent coverage.

The surface of Earth is 196.9 million square miles, so the uncovered 23% amounts to 45.29 million square miles, equivalent to a square about 6,730 miles on a side. Maybe the scientists have a good hand-waving argument for why this is not a problem, but I am not persuaded by what I have heard so far.

NOAA currently has about 1,350 data buoys deployed globally to cover the entire ocean surface of the Earth, with only limited measurements in polar regions. I am concerned that NOAA has insufficient sensors to measure temperature over the total ocean surface.

Given a total global ocean area of 361,900,000 square kilometers (about 139.7 million square miles), those 1,352 NOAA data buoys each cover on average an area of 103,351 square miles, or a square about 321 miles on a side, with some areas having very dense coverage and other areas having extremely sparse coverage. That’s an area that would cover Washington, Buffalo, Pittsburgh, and Cleveland — imagine that, using a single temperature reading for all four of those cities and all the area between them. Incredible. Or I should say not credible. Again, maybe the scientists have a good hand-waving argument that their sensors are sufficient, but I remain skeptical, based on the information that is publicly available to me.
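For anyone who wants to check my arithmetic, here is the back-of-the-envelope calculation behind the coverage figures above. The 23% figure is my rounded combination of the two hemisphere percentages, and the buoy count is only the approximate figure discussed below.

    import math

    earth_surface_sq_mi = 196.9e6        # total surface area of the Earth
    uncovered_fraction = 0.23            # my rounded combination of 20% (N) and 27% (S)
    uncovered_sq_mi = earth_surface_sq_mi * uncovered_fraction
    print(round(uncovered_sq_mi / 1e6, 2))    # ~45.29 million square miles
    print(round(math.sqrt(uncovered_sq_mi)))  # ~6,730 miles on a side

    ocean_area_sq_mi = 361.9e6 * 0.386102  # 361.9 million km^2 converted to mi^2
    buoys = 1352                           # approximate NOAA data buoy count
    per_buoy = ocean_area_sq_mi / buoys
    print(round(per_buoy))                 # ~103,000 square miles per buoy
    print(round(math.sqrt(per_buoy)))      # a square roughly 321 miles on a side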

NOAA ocean area and volume data:

NOAA Data Buoy Center:

Note: The number of data buoys is constantly changing, so the number given here is really only a close approximation. My apologies for using different numbers in various places in this paper, due mainly to the fact that different sections were written or edited on different days. I’ve seen numbers between 1,350 and 1,356.

NOAA has an interesting FAQ for Monitoring Global and U.S. Temperatures:

All told, I’m sure the scientists are doing as good a job as they can with the technology and resources available to them, but for me, I still have significant concerns, as outlined above.

All of my concerns would have to be fully and solidly addressed to my satisfaction before I could have any significant level of confidence in this critical data. In all honesty, I don’t hold out any hope that the scientists have any chance of fully addressing my concerns. They’ve simply bitten off far more than they can chew and are perhaps too proud or too embarrassed to admit it.

Risks of extrapolation from small samples

My big concern is that NOAA has insufficient instrument measurements to extrapolate or model the temperature for the entire planet.

NOAA has 1,354 data buoys deployed, so each buoy provides a single temperature measurement for over 100,000 square miles. That seems like a very risky extrapolation to me.

How many ships provide sea surface temperatures?

NOAA does supplement the data from data buoys with data from ships travelling the oceans, but I haven’t been able to find hard numbers on how many ships or over what periods. I haven’t seen any data reported publicly. Hundreds of ships? Thousands? I need to see some data.

How many ships are travelling the oceans at any given moment? Ah, here’s a ship traffic map from MarineTraffic.com:

At the moment I write this the map says there are 160,021 vessels on the map (155,615 when I originally wrote this paragraph). That includes fishing boats, which I don’t imagine are included by NOAA. What I don’t know is what fraction of these vessels are monitored by NOAA in terms of temperature data.

How accurate are ship sea surface temperatures?

Are ship-based sea surface temperature (SST) readings as accurate as NOAA data buoys, less accurate, or maybe more accurate? I haven’t found publicly available information that answers this question, but an answer is needed.

I need to know this kind of information to have a sense of confidence that the modeling of global temperature is reasonably accurate. Without confidence in temperature data, I can’t have confidence in the overall theory of global warming and climate change.

Lack of location consistency of ship temperature sensors

Even if a temperature sensor on a ship is very accurate, the mere fact that ships are generally moving continuously precludes the possibility of having a consistent data series of temperature measurements for any given location by a given sensor.

I’m more than a little skeptical of the value of such an inconsistent measure of temperature.

If such a moving measurement is incorporated into the global temperature model, what impact will it have on the consistency of the overall global temperature model over multiple or extended periods of time?

The scientists will need to provide greater clarity on this matter and address my concerns fully and to my satisfaction before I could have any significant confidence in the global temperature models or in the theory of global warming and climate change.

Technology of ship temperature sensors

What temperature sensor instrument technology or technologies are being used for ship-based temperature readings?

How similar or dissimilar are the instruments to NOAA data buoys?

What is their accuracy? Again, compared to NOAA data buoys.

NOAA does not provide any publicly accessible information on this matter that I could find.

I did find this reference:

The most important bias globally was the modification in measured sea surface temperatures associated with the change from ships throwing a bucket over the side, bringing some ocean water on deck, and putting a thermometer in it, to reading the thermometer in the engine coolant water intake.

Okay, they provide a rough statement of the methodology, but it is severely lacking in specificity, particularly a comparison to data buoys and a specification of accuracy.

I’d like confirmation whether that is indeed the standard and common method of capturing ship-based sea surface temperature (SST).

What technology is used by these engine coolant water intake thermometers?

Specifically, what is its accuracy?

And how often are temperature measurements taken?

And how are their locations tracked and used when integrating these temperature readings into the global temperature models?

And how, where, and when are measurements combined over each day to get the mean sea temperature for any given location?

Oops… I forgot that in the preceding section I pointed out that ships are generally moving, so you don’t have any consistency of temperature readings at a given location no matter what you do. A ship in port, moored, or anchored would at least have some consistency of location, though not for the years and decades that we get with data buoys.

The scientists will need to provide greater clarity on this matter and address my concerns before I could have any significant confidence in the global temperature models or for the theory of global warming and climate change which depends on those models.

Incidentally, I would have expected the National Institute of Standards and Technology (NIST) to have a lead role in standards for temperature sensors, but I’ve seen nothing in the NOAA discussions of climate and temperature that references NIST or standards for temperature measurement and temperature sensor technology. NIST is of course relevant only to the U.S.; I would expect equivalents for Europe and the international arena as well.

The IPCC assessment reports should reference national and international standards for temperature measurements and measurement technology as well. But they don’t.

Accuracy, precision, margin of error, and calibration

It is so easy to confuse or conflate accuracy, precision, and margin of error. In fact, I do it too frequently myself. I’ve probably done it a few places in this paper.

Part of the problem is that the strict, technically correct definitions are themselves rather confusing, as a quick check of the Wikipedia will reveal:

Even the dictionary doesn’t help, treating accuracy and precision as synonyms and defining each in terms of the other.

Rather than quote any formal definitions, I’ll instead offer my own simplified, but reasonably accurate definitions:

  • Accuracy is how close a measurement, calculation, or estimate is to the true value of the quantity being measured, calculated, estimated, or modeled.
  • Precision is how fine a granularity a measurement or estimate is, typically characterized as a number of significant digits or a number of decimal digits.

There is a third concept, margin of error, which is a claim about accuracy, how close a value is believed to be to the true value. Whether the claimed value is indeed that close to the true value is unknown and debatable.

Generally, a margin of error will always be a stated margin of error, in that we cannot know what the margin of error is without some explicit, quantified statement.

Technically, if no margin of error is stated explicitly, the reader is free to assume that the margin of error is the same as the precision. Or more technically correct, one half of the precision.
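To make the distinction between precision and accuracy concrete, here is a toy simulation. The two sensors and their error characteristics are invented purely for illustration.

    # Minimal illustration of precision vs. accuracy with two invented sensors.
    # The true temperature is 15.00 C. Sensor A reads to 0.01 C but has a
    # +0.5 C bias (precise, not accurate). Sensor B is unbiased but noisy to
    # about +/-0.3 C and reads to 0.1 C (accurate on average, not precise).
    import random
    random.seed(0)

    true_temp = 15.00
    sensor_a = [round(true_temp + 0.5, 2) for _ in range(10)]   # biased, repeatable
    sensor_b = [round(true_temp + random.uniform(-0.3, 0.3), 1)
                for _ in range(10)]                             # noisy, coarse

    print(sensor_a)  # every reading is 15.5 -- perfectly repeatable, but 0.5 C off
    print(sensor_b)  # scattered, coarser readings -- but centered near 15.00

Sensor A is the more precise instrument, yet every one of its readings is farther from the truth than Sensor B’s average. Precision and accuracy really are different things.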

As a concrete example of implied precision, NOAA reports global temperature anomalies to two decimal digits, which means to the hundredth of a degree. That’s the precision, but it says nothing about the accuracy. NOAA uses this precision consistently for all of their temperature data for each reporting period, across reporting periods, and over very long periods of time.

NOAA doesn’t state the precision explicitly, but by always reporting temperature values with two decimal digits, they are implicitly giving the precision.

NOAA doesn’t always make statements about accuracy. I find that disturbing and, well, downright unprofessional. They should know better. I personally don’t accept that their numbers are really accurate to the hundredth of a degree unless they explicitly state that.

In some specific contexts, NOAA does indeed explicitly state the margin of error. For example, the global temperature anomaly for 2016 was 0.94 C with a margin of error of 0.15 C, meaning they claim the actual anomaly could be as much as 0.15 C above or below the reported value.

But to be clear, margin of error is not true accuracy; rather, it is claimed accuracy.

To get the true accuracy NOAA would need to perform empirical validation, to go somewhere and actually measure the global temperature and compare it to their estimate. But, they have no ability to do such empirical validation, so all they can do is give a claimed accuracy or margin of error.

To be clear, nobody, including NOAA, NASA, and IPCC has any actual knowledge of the true accuracy of the global temperatures that are calculated as part of the global temperature modeling process. And they never can without actual empirical validation.

Obtaining accurate measurements requires calibration of measuring instruments, to adjust the measured value so that it is reasonably close to the actual value for the quantity being measured.

To be clear, calibration requires empirical validation, so if you cannot perform empirical validation, then you can’t calibrate instruments.

I’ve used two pseudo-terms of my own construction, in an effort to speak as clearly as possible about accuracy:

  1. Claimed accuracy is how close a measurement, calculation, or estimate is believed to be to the true value. This is the stated margin of error. This is an exact synonym for margin of error.
  2. True accuracy is how close the actual, real value of a quantity is to a measurement, calculation, or estimate that purports to represent the true value of that quantity.

I could also suggest another pair of pseudo-terms in the name of accuracy:

  1. Claimed margin of error is the stated margin of error. Generally a synonym of margin of error.
  2. True margin of error is the margin of error relative to the actual value of the quantity being measured, calculated, estimated, or modeled.

The point of the pseudo-term true margin of error is that calibration must first be performed; only then can actual measurements be expected to be accurate within the claimed margin of error.

If you haven’t performed calibration, your stated margin of error is merely a claimed margin of error rather than a true margin of error.

Given all of these pseudo terms, I would phrase my goal as to achieve a true margin of error.

I’m not looking for perfection and incredible precision, but I do want to ensure that global temperature data really is accurate to a reasonable margin of error.

In other words, I want the combination of accuracy and a reasonably small margin of error.

In this paper I generally and loosely use accuracy to refer to true accuracy. NOAA, NASA, and IPCC do sincerely believe that their numbers are accurate to their stated margin of error (in the few cases where they actually do state a margin of error), but I personally am not able to accept their claimed accuracy as necessarily being the true accuracy.

How many data points are needed to measure global temperature to a given precision?

Oops… I just spent a whole section explaining how accuracy and precision are different, but already I make the slip-up myself! I should have worded the title of this section as “… given accuracy” rather than “… given precision”, but I kept it as is to simply reinforce the point of the previous section.

To be clear, the question here is:

  • How many data points are needed to measure global temperature to a given accuracy?

And following the convention I stated in the preceding section, this implies:

  • How many data points are needed to measure global temperature to a given true accuracy?

And is not the same as:

  • How many data points are needed to measure global temperature to a given claimed accuracy?

I could also have phrased it as:

  • How many data points are needed to measure global temperature to a given true margin of error?

It would have been useless for me to have phrased it as:

  • How many data points are needed to measure global temperature to a given margin of error?

That phrasing would be useless because a margin of error can be claimed regardless of the actual accuracy, and the whole point of this section is to get at actual accuracy rather than a mere assertion of accuracy.

Now, back to the actual topic of this section…

Working backwards rather than from how many temperature measurements we actually may have, I would like to see some solid math showing how to calculate how many surface temperature measurements would be needed so that the temperature models could produce global temperatures that were accurate to true margins of error of 5 C, 2 C, 1 C, 0.5 C, 0.2 C, 0.1 C, 0.05 C, 0.01 C, and 0.005 C.

And by accurate I mean within the true margin of error, so that the actual temperature value really does fall within the stated (claimed) margin of error.

How many data buoys are sufficient to achieve accuracy to the stated true margins of error? Is 1,350 enough? I seriously doubt it, but I want to see the math that calculates the correct number of data buoys for each true margin of error. Is it 2,000, 5,000, 10,000, or 20,000 data buoys, or even more, for each of those true margins of error?

Ditto for other forms of measurement, whether they be land stations, ships, or satellite measurements.

Then, we could compare that computed requirement for number of measurements needed with the actual number of measurements fed into the temperature modeling algorithms to see what accuracy we really can achieve.
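As a starting point, here is the classic textbook sample-size calculation for independent measurements. Real surface temperatures are spatially correlated, so this is only the shape of the math I want to see, not the answer, and the variability figure is an assumption of mine.

    # Naive sample-size calculation, assuming independent measurements with a
    # common standard deviation sigma. Real temperature fields are spatially
    # correlated, so treat this as the shape of the math, not the answer.
    import math

    def samples_needed(sigma, margin, z=1.96):
        # Measurements needed so the sample mean lies within 'margin' of the
        # true mean at roughly 95% confidence.
        return math.ceil((z * sigma / margin) ** 2)

    sigma = 2.0  # hypothetical spread (C) of simultaneous anomaly readings
    for margin in [5, 2, 1, 0.5, 0.2, 0.1, 0.05, 0.01, 0.005]:
        print(margin, samples_needed(sigma, margin))

Even this naive version makes the key point: tightening the target margin of error by a factor of ten multiplies the required number of independent measurements by a factor of one hundred.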

In addition, I would like to see some solid math for calculating the precision (oops… I mean accuracy and margin of error) that any model can achieve based on total area and total number of input measurements.

The goal here is that the precision should be no finer than the margin of error, and accuracy demands that the actual value fall within the claimed margin of error.

I haven’t seen a discussion of these types of calculations so far. Without them I will be unable to marshal any significant level of confidence in the published estimates of global temperature, and without confidence in those numbers I will remain unable to marshal any significant confidence in the overall theory of global warming and climate change. After all, it really is all about the data.

What time of day is temperature measured?

How many temperature measurements are made each day in the process of obtaining the temperature of the day that is recorded in the temperature datasets?

How many measurements would be needed to assure that the minimum, maximum, and average temperatures are accurately being measured?

Do all temperature sensors take the same number of measurements each day, and at the same time?

And has the same methodology been used for all time periods? Especially:

  1. Pre-1958
  2. 1958–2000
  3. 2000 — today

I presume that older data was measured with a somewhat more primitive methodology than with current real-time technology, but I’d like to see clarification as to exactly what methodology was in fact used during all time periods and for all sensor technologies.

And what about ship sensors and satellite remote sensing? What compatibility and consistency is present there?

Satellites aren’t sensing a given point on the globe in real-time for all times of the day, so how many readings do they have for a given location per day, and how likely are they to hit or miss the high and low for the day, or the mean?

Ships are usually moving constantly, so they are unlikely to get more than a single reading in an entire day for a given location.

In other words, what kind of consistency or inconsistency is there?

And how do variations in consistency get reflected in the margin of error for the temperature models and resulting datasets?

The IPCC defines the diurnal temperature range:

Diurnal temperature range — The difference between the maximum and minimum temperature during a 24-hour period.

I’ve encountered a reference to mean or diurnal range of surface temperature in the IPCC assessment reports, but no detail on precisely what that means.

I could understand averaging the minimum and maximum, but even that might not be a good substitute for taking periodic measurements and averaging them. Some clarity is needed. What process or methodology is NOAA actually following?

The Wikipedia defines diurnal temperature variation:

In meteorology, diurnal temperature variation is the variation between a high temperature and a low temperature that occurs during the same day.
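To see why averaging the minimum and maximum is not the same as averaging periodic readings, here is a tiny example. The 24 hourly values are invented to mimic a day that warms quickly and cools slowly; any asymmetric day shows the same effect.

    # Why (min + max) / 2 is not the same as the mean of periodic readings.
    hourly = [10, 9, 9, 8, 8, 9, 11, 14, 17, 20, 22, 23,
              24, 24, 23, 22, 20, 18, 16, 15, 14, 13, 12, 11]

    diurnal_range = max(hourly) - min(hourly)   # 24 - 8 = 16
    midrange = (max(hourly) + min(hourly)) / 2  # 16.0
    true_mean = sum(hourly) / len(hourly)       # 15.5

    print(diurnal_range, midrange, round(true_mean, 1))

Here the midrange overstates the true mean by half a degree, which is enormous compared to the hundredth-of-a-degree precision at which global anomalies are reported.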

In any case, there are lots of open questions and concerns here that I would need to have answered to my satisfaction before I could have any strong sense of confidence in the accuracy of measurement and modeling of global surface temperature.

How accurate are satellite temperature measurements?

There are a number of areas of concern that I have with satellite measurements of temperature:

  1. Area resolution. How fine a grid or spot can be measured?
  2. Accuracy. Of the actual temperature measurement. Or, actually, the deduced or inferred temperature given that it is really a measure of irradiance.
  3. Margin of error. Given the size of the area.
  4. Coverage throughout the day for each location, to get minimum, maximum, and average or mean.
  5. Is there any possibility of empirical validation between a particular satellite measurement and a surface sensor at precisely the same location?
  6. How do the area resolution, accuracy, and coverage get reflected in the modeling process needed to blend this disparate data in with traditional land and sea temperature data, including the impact on the margin of error for global temperature?
  7. To what extent is satellite temperature data used in the NOAA, NASA, and HadCRUT global temperature analyses? Is it a significant factor or a minor factor? Is it critical or simply an extra benefit?
  8. Could they model the temperature of the entire Earth using only satellite temperature data? If not, why not?
  9. What criteria are used to determine when satellite temperature data is used when modeling global temperature?

The interesting thing about satellite data is that it covers both land and ocean.

Note also that some satellite temperature measurements will blend land and ocean when the satellite passes over coastal regions.

One interesting distinction between satellites and data buoys or ship-based measurements is that the former does measure an average across a significant area, while the latter measures a small number of discrete points in a larger area. The latter is more accurate for a discrete point, but whether the discrete points can be extrapolated and interpolated to any level of accuracy for a larger area is unknown. And the accuracy of the satellite measurement is unknown since there is no empirical validation possible for the full area of each satellite measurement. How you blend these disparate methodologies with their disparate precisions and margins of error is an interesting problem.

In all honesty, I haven’t yet spent the time to dive deeply into this area of satellite temperature measurement, so I can’t be too critical yet, but I can’t give the scientists a free pass either.

In any case, there are lots of open questions and concerns here that I would need to have answered to my satisfaction before I could have any strong sense of confidence in the accuracy of measurement and modeling of global surface temperature.

How much satellite data is included in global temperature?

I haven’t been able to find any hard data on the extent to which satellite temperature data is used by NOAA, NASA, et al when calculating global temperature.

Is it used heavily or only sparsely?

The preceding section details more of my questions in this area.

How long are datasets that use the exact same measurement instruments and methodologies?

Technologies for measuring environmental data have changed dramatically since the 19th century. I am concerned that every time we change the technology it means that we shorten the length of the dataset which was captured using a particular measurement technology. In particular, for all of the data series that go into the current global temperature models, how long are each of those data series, that were captured with the same instruments and methodologies?

Since the 19th century, we now have technologies such as:

  • Electronics
  • Digital
  • Software
  • Remote sensing
  • NOAA data buoys
  • Satellites
  • Deep ocean measurements
  • Continual software updates
  • Multiple generations, revisions, updates, and variations of each of the above

I am concerned about how compatible and consistent all of the different data series are, both spatially and over time.

Sure, adjustments can be made to patch the various data series together, but that has to add some degree of additional error to the overall modeling process, which should be included in the error bar or margin of error for both the intermediate data series and the final result, but is it?

I need to see much more discussion of this issue, but I’m not seeing it. This prevents me from having any significant confidence in the results or any theory that depends on these results.

Dubious stability of temperature data and models

With somewhat frequent updating of the global temperature models and datasets and ongoing updating of the actual temperature sensors and their siting, including ships in motion, I am very concerned about the stability of temperature data, both the datasets from year to year and decade to decade, as well as the output temperatures from the evolving models.

The ERSST and ICOADS temperature datasets (models) are evolving more than a little too frequently to my mind. This seriously restricts my ability to have any significant confidence in the global temperature models, and without stable global temperature modeling I cannot have any significant confidence in the theory of global warming and climate change.

I’m not suggesting that the scientists should artificially attempt to suppress dataset and model methodology updates, but simply that they should accept that their data, methodologies, and models are not as stable as we all should be demanding before we start declaring that the science is settled and beyond dispute.

By all means, let the science evolve. But cease and desist from acting as if it weren’t still evolving.

Accuracy of temperature sensors

I would like to see solid data on the accuracy of the temperature measuring technology used for the various temperature sensors used by NOAA for its global temperature network.

Just out of curiosity, here’s the spec for a state of the art digital temperature sensor from Texas Instruments:

TMP468

±0.75°C High-Accuracy Multi-channel Remote and Local Temperature Sensor

Features

— 8-Channel Remote Diode Temperature Sensor Accuracy: ±0.75°C (Maximum)

— Local and Remote Diode Accuracy: ±0.75°C (Maximum)

— Local Temperature Sensor Accuracy for the DSBGA Package: ±0.35°C (Maximum)

— Temperature Resolution: 0.0625°C

I find it interesting that 0.75 C is considered high accuracy.

Okay, the local temperature accuracy is 0.35 C, roughly twice as accurate, but it isn’t realistic to expose the circuit board directly to water or air. That’s the point of using remote sensors.

I wonder what technology NOAA data buoys use. Do they have better accuracy than this? Or worse?

Attention all climate scientists: I need to know what level of accuracy your sensors actually have. Without this information, I will be unable to have any significant level of confidence in your data, your science, or your theories that depend on global temperature.

The precision (they’re calling it resolution) is 0.0625 C, so they can only do a little better than 0.1 C, while NOAA global temperature data is all reported with a precision of 0.01 C.

In 2015, NOAA claimed that the margin of error for global ocean temperature was 0.01 C, which is far tighter than even this state-of-the-art temperature sensor can deliver on its own.
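To illustrate what a 0.0625 C resolution means in practice, here is a tiny sketch; the readings are invented.

    # A sensor with 0.0625 C resolution can only report values on a 0.0625 C grid.
    def quantize(temp_c, resolution=0.0625):
        # Snap a reading to the sensor's representable step size
        return round(temp_c / resolution) * resolution

    print(quantize(15.01), quantize(15.02))  # both 15.0 -- collapse to one step
    print(quantize(15.03), quantize(15.04))  # 15.0 and 15.0625 -- a full step apart

Two readings that differ by a hundredth of a degree either collapse to the same step or jump a full 0.0625 C step, so reporting such values to 0.01 C precision adds digits, not information.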

Seriously, I really would like to know what temperature sensor technology NOAA is using!

Needless to say, my questions and concerns will need to be fully addressed to my satisfaction before I can have any significant confidence in the modeling of global temperature, which prevents me from having any significant confidence in the theory of global warming and climate change.

Calibration of instruments

Calibration of instruments has always been a big issue for scientists.

It is problematic even with a single instrument.

And it is very tedious with a handful of instruments.

Hundreds of instruments? That has to be a calibration nightmare.

Thousands of instruments? Is it humanly possible?

Many thousands of instruments? How can anyone have any confidence in the ongoing calibration of so many instruments? My concern is that they simply aren’t doing it.

I would like to see some solid data on calibration of the sensors used to measure temperature.

By the way, even with the best, most modern digital electronic technology, the analog components that actually measure temperature and other quantities are subject to variations in manufacturing processes, aging, impact from environmental agents, and other natural processes that require some variable amount of calibration over time.

Even with reasonable calibration, I would like to see solid data on the error bars for all of the instrument technologies that are either currently in use or have been used to construct the official global temperature record back to 1980, 1970, 1958, or back to whatever year the scientists are claiming they have accurate temperature records.

Calibration matters!

Unless my concerns are adequately addressed — to my satisfaction — my level of confidence in global temperature modeling in particular and the theory of global warming and climate change in general will not be very high at all.

Sample NOAA data buoy data

The NOAA Data Buoy Center lets you check up on the latest data from individual data buoys.

As I write this, here’s the data from a data buoy out in the middle of the Pacific Ocean:

It reports both air and water temperature, which seem to differ by about half a degree.

I need some confirmation whether the water or air temperature is used in the ERSST dataset. I might presume that SST (sea surface temperature) implies water temperature, but who knows without solid confirmation.

The precision is implied as a tenth of a degree (0.1 F). I’m curious whether the raw data has a higher precision and whether it is measured in C or F. The data measurement web page indicates that temperature is measured in C (Celsius):

In theory, you can also figure out where the water and air sensors are relative to the water surface.

You can look at temperature plots over time for that particular sensor, on a monthly basis, giving minimum, maximum, mean, and standard deviation for each month.

You can look at a table of the readings for the past 24 hours as well. The temperatures are reported every six minutes. Whether the buoys measure data more frequently than that is unknown to me. I’m curious whether they might miss the minimum or maximum for the day if it occurs in the middle of a six-minute interval.

I didn’t see any reference to either the particular temperature sensor technology that is used or the margin of error for that technology. There were no error bars on the temperature graphs, nor any margin of error for the tabular data.

This is a good start, but I need to see more information to have any significant confidence in global temperature modeling.

I include this information here for convenient reference, completeness, and to indicate that I am aware of it.

Temperature sensor accuracy over broad temperature range

Temperature sensors have some variability of accuracy over the full range of temperatures they will be exposed to. This variability needs to be examined and reported for each of the various temperature sensor technologies that are deployed in the global temperature sensor network.

I would want to see ranges of accuracy for at least these interesting temperature ranges:

  1. Near freezing.
  2. Well below freezing.
  3. Polar cold.
  4. Minimum surface temperature.
  5. Tropical heat.
  6. Desert heat.
  7. Maximum surface temperature.
  8. Sea surface temperature (SST) before, during, and after seawater freezing.

And then on top of all of that, I need to see how this variability of accuracy (varying margins of error) is mathematically factored into the global temperature modeling process that combines all of the individual sensor temperature data, to produce both a combined global temperature and a margin of error on that combined temperature that reflects the variability of error going into the modeling process from the individual sensors.

I need this level of detail before I can feel comfortable having any significant level of confidence in the measurement and modeling of global temperature, which I need before I can have any significant confidence in the overall theory of global warming and climate change.

Accuracy of global temperature model relative to sensor accuracy

Note: I’m using the term accuracy loosely in this section. I should be using margin of error to be technically correct. But the general intent is still the accuracy of global temperature measurements and anomalies.

Global temperature may be reported with a particular accuracy or margin of error, but how does that compare to the accuracy or margin of error of the many thousands of individual temperature sensors which are fed into the global temperature models which calculate that single global temperature?

From what I can tell, the individual sensors are not terribly accurate, so it baffles me how the model can achieve the degree of accuracy that is claimed.

For example, the margin of error for global ocean temperature was reported as +/- 0.01 C in 2015. Really? Wow! If the margin of error for the final global temperature anomaly from the model was so tiny, what would the margin of error of individual temperature sensors have had to be? Something is wrong here, somewhere. A model accuracy of 0.01 C in 2015 is simply not credible.

This is another area where the scientists are going to have to do a much better job if they wish to gain my confidence.

What I am looking for here is the math for how the margin of error for the global temperature model is calculated compared to the margins of error for each of the many individual sensors.
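Here is the shape of that math as I understand it, with the sensor count and error figures being assumptions of mine rather than NOAA’s published values. The ±0.75 C figure is the class of accuracy quoted in the sensor spec earlier.

    # How averaging shrinks random error but not systematic bias -- a toy
    # version of the propagation math I would like to see spelled out.
    import math

    n_sensors = 1350      # roughly the number of NOAA data buoys
    random_error = 0.75   # assumed per-sensor random error (C)
    shared_bias = 0.10    # a hypothetical bias common to every sensor

    standard_error_of_mean = random_error / math.sqrt(n_sensors)
    print(round(standard_error_of_mean, 3))  # ~0.020 C: random error averages down

    # A bias shared by all sensors does not average away at all:
    print(shared_bias)                       # still 0.10 C in the final mean

Under those (unverified) independence assumptions the random part really does shrink dramatically; what never shrinks is any bias shared across sensors, which is precisely why the calibration questions raised earlier matter so much.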

The hiatus of 1998–2012

For those of us carefully monitoring the annual temperature data as it came out back in 2009, 2008 was a real surprise. It was well below the previous two years and below all of the previous seven years. Not enough data for a conclusive trend, but it really stood out.

The temperature bounced back for 2009 and 2010, but then fell back again to almost its 2008 level in 2011.

There we were a full decade into the 21st century and the global temperature was below the level of 1998. A full thirteen years without setting a new high. How could that be? What was happening? What happened to the warming trend that Al Gore had promised us?

People started chattering feverishly about a hiatus or slowing or pause of the warming trend.

In fact, in the 2013 release of the IPCC Physical Science Basis AR5 assessment report, the scientists of IPCC admitted it in Box 9.2 on page 769 of chapter 9:

Box 9.2 | Climate Models and the Hiatus in Global Mean Surface Warming of the Past 15 Years

The observed global mean surface temperature (GMST) has shown a much smaller increasing linear trend over the past 15 years than over the past 30 to 60 years (Section 2.4.3, Figure 2.20, Table 2.7; Figure 9.8; Box 9.2 Figure 1a, c). Depending on the observational data set, the GMST trend over 1998–2012 is estimated to be around one-third to one-half of the trend over 1951–2012 (Section 2.4.3, Table 2.7; Box 9.2 Figure 1a, c). For example, in HadCRUT4 the trend is 0.04ºC per decade over 1998–2012, compared to 0.11ºC per decade over 1951–2012. The reduction in observed GMST trend is most marked in Northern Hemisphere winter (Section 2.4.3; Cohen et al., 2012). Even with this “hiatus” in GMST trend, the decade of the 2000s has been the warmest in the instrumental record of GMST (Section 2.4.3, Figure 2.19). Nevertheless, the occurrence of the hiatus in GMST trend during the past 15 years raises the two related questions of what has caused it and whether climate models are able to reproduce it.

The IPCC report:

To be clear, those are the words of the vaunted IPCC, not some wild-eyed climate denier.

To be clear, there was still an upward trend in temperature, just a slower one of 0.04 C per decade rather than 0.11 C per decade, or just 36% of the previous warming rate.
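For reference, here is how such a degrees-per-decade trend is typically computed: an ordinary least-squares fit of annual anomalies against year. The anomaly values below are invented placeholders, not the HadCRUT4 series.

    # Ordinary least-squares trend of annual anomalies, scaled to per decade.
    def trend_per_decade(years, anomalies):
        n = len(years)
        mean_y = sum(years) / n
        mean_a = sum(anomalies) / n
        slope = (sum((y - mean_y) * (a - mean_a) for y, a in zip(years, anomalies))
                 / sum((y - mean_y) ** 2 for y in years))
        return slope * 10  # per-year slope scaled to per-decade

    years = list(range(1998, 2013))
    anomalies = [0.40 + 0.004 * (y - 1998) for y in years]  # a flat-ish toy series
    print(round(trend_per_decade(years, anomalies), 2))     # 0.04 C per decade

With the real HadCRUT4 annual values for 1998–2012 and 1951–2012 substituted in, this same calculation should reproduce, at least approximately, the 0.04 and 0.11 figures quoted above.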

Rebuttal of the hiatus of 1998–2012

In June 2015 a collection of climate scientists from NOAA published a paper that essentially rebutted the notion of any hiatus of global warming from 1998 through 2012.

It was not so much an argument against the hiatus conjecture as an “updated global surface temperature analysis” that makes the hiatus appear to go away, depending on how you look at the data.

In other words, they changed the data. Not the underlying raw weather station and data buoy temperature readings, but they updated the model that is used to perform the analysis that produces the results that we see as the temperature data.

Brief summary of the paper:

Walking back talk of the end of warming

Previous analyses of global temperature trends during the first decade of the 21st century seemed to indicate that warming had stalled. This allowed critics of the idea of global warming to claim that concern about climate change was misplaced. Karl et al. now show that temperatures did not plateau as thought and that the supposed warming “hiatus” is just an artifact of earlier analyses. Warming has continued at a pace similar to that of the last half of the 20th century, and the slowdown was just an illusion.

Full abstract of the paper:

Much study has been devoted to the possible causes of an apparent decrease in the upward trend of global surface temperatures since 1998, a phenomenon that has been dubbed the global warming “hiatus.” Here, we present an updated global surface temperature analysis that reveals that global trends are higher than those reported by the Intergovernmental Panel on Climate Change, especially in recent decades, and that the central estimate for the rate of warming during the first 15 years of the 21st century is at least as great as the last half of the 20th century. These results do not support the notion of a “slowdown” in the increase of global surface temperature.

They open the paper with this introduction:

The Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report concluded that the global surface temperature “has shown a much smaller increasing linear trend over the past 15 years [1998–2012] than over the past 30 to 60 years.” The more recent trend was “estimated to be around one-third to one-half of the trend over 1951–2012.” The apparent slowdown was termed a “hiatus” and inspired a suite of physical explanations for its cause, including changes in radiative forcing, deep ocean heat uptake, and atmospheric circulation changes.

But they quickly make their point:

Although these analyses and theories have considerable merit in helping to understand the global climate system, other important aspects of the “hiatus” related to observational biases in global surface temperature data have not received similar attention. In particular, residual data biases in the modern era could well have muted recent warming, and as stated by IPCC, the trend period itself was short and commenced with a strong El Niño in 1998. Given recent improvements in the observed record and additional years of global data (including a record-warm 2014), we reexamine the observational evidence related to a “hiatus” in recent global surface warming.

They note some data issues:

The data used in our long-term global temperature analysis primarily involve surface air temperature observations taken at thousands of weather-observing stations over land, and for coverage across oceans, the data are sea surface temperature (SST) observations taken primarily by thousands of commercial ships and drifting surface buoys. These networks of observations are always undergoing change. Changes of particular importance include (i) an increasing amount of ocean data from buoys, which are slightly different than data from ships; (ii) an increasing amount of ship data from engine intake thermometers, which are slightly different than data from bucket seawater temperatures; and (iii) a large increase in land-station data, which enables better analysis of key regions that may be warming faster or slower than the global average. We address all three of these, none of which were included in our previous analysis used in the IPCC report.

So, a key issue related to the hiatus is issues with data that were not previously available to IPCC or anybody else:

We address all three of these, none of which were included in our previous analysis used in the IPCC report.

I won’t dive any deeper into their rebuttal here. I’ll just link to their actual paper.

Abstract of the paper:

Full text of the paper:

Proper citation of the paper:

Possible artifacts of data biases in the recent global surface warming hiatus

BY THOMAS R. KARL, ANTHONY ARGUEZ, BOYIN HUANG, JAY H. LAWRIMORE, JAMES R. MCMAHON, MATTHEW J. MENNE, THOMAS C. PETERSON, RUSSELL S. VOSE, HUAI-MIN ZHANG

SCIENCE | 26 JUN 2015 : 1469–1472

Updated global surface temperature data do not support the notion of a global warming “hiatus.”

Needless to say, I remain unconvinced. I mean, their re-analysis may be valid, but the simple fact that they felt compelled to rejigger their model (analysis) to fit their perception of what the data should (or shouldn’t) look like strikes me as a somewhat dubious motive.

To be fair to the scientists, we don’t really know what their motivation really was.

But to be fair to me and the critics and hiatus proponents, that uncertainty about motivation cuts both ways.

In short, maybe their re-analysis is absolutely valid, but it’s not possible for me to tell if this is really the final word on the matter.

Maybe a future analysis will bring back the hiatus. Who knows. Stay tuned.

Dubious NOAA methodology changes in response to the Hiatus

Buried in that June 2015 paper that rebutted the 1998–2012 hiatus is the following:

…several studies have examined the differences between buoy- and ship-based data, noting that the ship data are systematically warmer than the buoy data. This is particularly important because much of the sea surface is now sampled by both observing systems, and surface-drifting and moored buoys have increased the overall global coverage by up to 15% (supplementary materials). These changes have resulted in a time-dependent bias in the global SST record, and various corrections have been developed to account for the bias. Recently, a new correction was developed and applied in the Extended Reconstructed Sea Surface Temperature (ERSST) data set version 4, which we used in our analysis. In essence, the bias correction involved calculating the average difference between collocated buoy and ship SSTs. The average difference globally was −0.12°C, a correction that is applied to the buoy SSTs at every grid cell in ERSST version 4. [IPCC used a global analysis from the UK Met Office that found the same average ship-buoy difference globally, although the corrections applied in that analysis were equal to differences observed within each ocean basin.] More generally, buoy data have been proven to be more accurate and reliable than ship data, with better-known instrument characteristics and automated sampling. Therefore, ERSST version 4 also considers this smaller buoy uncertainty in the reconstruction.

Three sentences stand out:

  1. These changes have resulted in a time-dependent bias in the global SST record, and various corrections have been developed to account for the bias.
  2. Recently, a new correction was developed and applied in the Extended Reconstructed Sea Surface Temperature (ERSST) data set version 4, which we used in our analysis.
  3. ERSST version 4 also considers this smaller buoy uncertainty in the reconstruction.

In other words, the scientists are changing their methodology in response to results that they didn’t agree with.
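Mechanically, the correction described in that passage is simple. Here is a minimal sketch of the general idea, with all of the readings invented; only the 0.12 C offset comes from the paper quoted above.

    # A minimal sketch of the adjustment described above: offsetting buoy
    # readings by the mean collocated ship-minus-buoy difference so the two
    # sources can be merged on a common (ship-referenced) basis.
    SHIP_MINUS_BUOY = 0.12  # global mean collocated difference (C), per the paper

    ship_ssts = [18.4, 18.6, 18.5]  # hypothetical ship engine-intake readings
    buoy_ssts = [18.3, 18.4, 18.3]  # hypothetical buoy readings, which run cooler

    adjusted_buoys = [t + SHIP_MINUS_BUOY for t in buoy_ssts]
    merged = ship_ssts + adjusted_buoys
    print(round(sum(merged) / len(merged), 2))

The controversy is not over this arithmetic, but over whether one global offset applied to every buoy reading is the right correction, and why it arrived when it did.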

Okay, sure, maybe they eventually would have gradually updated their methodology anyway, but it certainly looks awfully suspicious when the update is coincidentally timed to cope with the hiatus.

Again, maybe these are all innocent, sincere, and technically valid changes, but the timing and focus raises significant alarm on my part.

I can certainly accept the changes, but they don’t raise my confidence in the science of global warming and climate change.

This sure doesn’t seem like settled science.

More methodology changes to come?

I believe firmly in the test of time. The elapse of time is the only way to firmly and solidly validate any scientific theory or methodology.

As things stand right now, the most recent global temperature modeling methodology changes are little more than two years of age, and that is way too short for my taste to have any confidence in them. More time is needed. Much more time. How long? At least a decade? Hard to say. The more, the better.

Who knows how many additional methodology changes will be put in place in the coming years?

Oh, and lo and behold, there was another model methodology change put in place in July 2017.

Only once a few more years have passed since the last methodology change can we (okay, I) begin to perceive the global temperature modeling methodology as at least appearing to be stable.

In short, still no sign of settled science.

Was BRIC development the cause of the hiatus?

I have my own theory (I call it a conjecture) as to what caused the hiatus: development in the BRIC countries, notably India and China, where rapid acceleration of the use of coal power plants and dirty motor vehicles caused a dramatic rise in particulates, which exert a cooling effect, not as great as in the 1940 to 1970 period, but possibly enough to cause the appearance of the hiatus.

Spurred in part by the 2008 Olympics, China in particular began to curb its air pollution, to some degree, which by 2013 may have helped bring about the end of the hiatus.

Hey, it’s a good story at least. It’s certainly not a scientific explanation, but it works for me while we all patiently wait for real scientists to come up with credible explanations that have a firm scientific basis, a lot firmer than hand-waving. Stay tuned.

Why did the scientists bungle 1998 so badly?

Whether the hiatus of 1998–2012 can be vanquished for good depends very heavily on how the temperature data for 1998 is treated. If 1998 is considered statistically significant, at least some remnant of the hiatus remains. If 1998 is treated as an aberration, a true outlier that should be discarded, then the status of the hiatus is significantly weakened. But which is the proper approach?

The simple truth is that the scientists botched 1998 really badly, in several ways.

  1. In 1999, scientists made a big deal about how 1998 was the warmest year on record, without noting any concerns about any statistical significance difficulties with the year. This cemented the status of 1998 as being statistically significant. Was that judgment on their part wise or imprudent?
  2. It was only after critics started making a big deal about the hiatus that anybody started to question the status of 1998 as being statistically significant.
  3. It was only after the IPCC AR5 Physical Science Basis assessment report in 2013 that NOAA felt any pressure to do something about the perception of a hiatus from 1998 through 2012.
  4. It was only in May 2015 that the scientists at NOAA finally acknowledged that yes, Houston, we do have a problem.
  5. Even now, scientists have not acknowledged that they made a mistake in 1999 by not immediately raising concern about whether 1998 was truly statistically significant.

The simple truth is that if the scientists had not let 1998 stand as statistically significant during the 1999 to 2005 period, the whole notion of a hiatus might not have even gained credibility.

Who knows, maybe they finally have it all figured out now, but this dramatic and prolonged bungling certainly forces me to withhold confidence that they do have a firm, solid, credible handle on what’s really going on with global temperature.

IPCC on the 1995–2000 portion of the hiatus

To be fair to the scientists, the IPCC AR5 assessment report Technical Summary from 2013 did focus some attention on the 1995 to 2000 sub-period within the hiatus, as to what may have been contributing factors to the perception of a pause or hiatus in global warming:

During 1995–2000 the global mean temperature anomaly was quite variable — a significant fraction of this variability was due to the large El Niño in 1997–1998 and the strong back-to-back La Niñas in 1999–2001. The projections associated with these assessment reports do not attempt to capture the actual evolution of these El Niño and La Niña events, but include them as a source of uncertainty due to natural variability as encompassed by, for example, the range given by the individual CMIP3 and CMIP5 simulations and projection (TFE.3, Figure 1).

Fair enough, to some extent.

They characterize events as a “source of uncertainty due to natural variability”, but by focusing attention on uncertainty and natural variability they end up undermining their own ongoing positions that they possess certainty and that natural variability is not a contributing factor in global warming.

What I would continue to fault them for is failing to adequately call out these regional effects much earlier than the 2013 assessment report. If they felt overwhelmed by all the hiatus chatter it is their own fault.

Why did it take until 2013 for them to realize that there were regional weather effects that were excessively influencing the global temperature model results in a way that should not have been considered statistically significant from the perspective of a stable model for global temperature?

The bottom line here is that this episode illustrates and confirms that the science still has a long way to go before it is truly settled science.

Truth about the hiatus?

A few key points to close out this discussion of the hiatus:

  1. It was real, but its significance continues to be a matter of debate.
  2. The temperature data of 2014, 2015, and 2016 effectively showed a break out (my own words) from whatever hiatus may have been in place from 1998 through 2012. If the global temperature model results are to be believed, which is a huge question mark in my view.
  3. Is the hiatus gone for good? Maybe, maybe not. Who’s to say which was the true aberration, the 15 years from 1998 through 2012 or the last three years? Seriously, nobody knows the answer to that. We can all speculate, but even the speculation of scientists is not the same as whatever truth the future actually holds.

Me, I’ll patiently await the actual climate data for the next few years. I would say that by 2021 — five years from the 2016 peak — we will know with some confidence whether the record high global temperature of 2016 signaled the death of the hiatus, or if the 2014–2017 period constituted an outlier period in a more extended hiatus.

For what it’s worth, as of this writing, the September NOAA State of the Climate report shows that January through September of 2017 was the second warmest such period on record, only very slightly warmer than the same period in 2015 (0.87 C vs. 0.86 C, an insignificant 0.01 C difference given margins of error of 0.17 C in September 2017 and 0.10 C in September 2015), so we don’t have 2016 as a clear trend indicator for intensified warming.
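A quick way to see why I call that 0.01 C difference insignificant is to combine the two stated margins of error; treating them as independent is an assumption on my part.

    # Why a 0.01 C difference is meaningless against those margins of error.
    import math

    diff = 0.87 - 0.86                        # 2017 minus 2015, Jan-Sep anomalies
    combined_margin = math.sqrt(0.17**2 + 0.10**2)
    print(round(diff, 2), round(combined_margin, 2))  # 0.01 vs. roughly 0.20

The difference between the two years is roughly twenty times smaller than the combined uncertainty around it.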

If the next eleven years show a clear trend that is well above the trend from 1998 through 2012, then I’ll agree that the hiatus was probably an aberration.

The real bottom line is that the hiatus caught the scientists by surprise, and then their chaotic response was too lame and too late. Black eyes all around for them. They will need the next decade to reclaim any dignity and credibility that they once had.

I’m not going to try to anticipate what the next five to fifteen years will bring, but I will strongly resist anybody (or any scientist) who insists that they know what that period will bring.

The simple fact is that the data (albeit modeled data) is far too noisy to support the degree of certainty that is being claimed. At least that’s what I am seeing.

Continued in Part 4 of 4

Continue to Part 4 of 4 and Conclusion.

