The Temperature Record

-Are the temperature records used to construct climate data reliable?

A while back, I became aware that some climate change sceptics were claiming that in reality, there has been no global warming at all. On the face of it this seemed like a more radical claim than the more usual one that although the planet had warmed, human activities were not the cause of that warming. It also seemed rather improbable in the face of a multiplicity of datasets from land stations, ocean buoys and satellites, all of which indicated that temperatures had indeed increased during the 20th Century. Surely, these could not all be wrong?

https://realclimatescience.com/2018/03/noaa-data-tampering-approaching-2-5-degrees/

I was therefore inclined to be dismissive of this claim, and investigated it no further.

Recently the claim has been revisited in a number of science blogs, and this got me thinking that I'd like to find out for myself. After all, the proof of science lies in seeing an effect replicated, and replicating it firsthand to see the actual results is worth a hundred thousand anecdotal reports.

The core of the claim rests on the nature of the adjustments made to temperature data gathered from weather stations. The USA data is the set most often referred to, although some studies cover other regions.

The layman's perspective on this is that temperature data consists of a guy in an SUV (or on a bicycle if you prefer) going out to a white box sitting in some open space, noting the reading of a thermometer inside, and reporting that to a climate science department, who then create the graphs we all see in the news. Surely, there isn't much that could go wrong in that process, we ask ourselves. Reading a thermometer hardly calls for a PhD in climatology, so how come there is any question about the accuracy of the readings, anyway?

Well, not so fast. Finding out the temperature at any given place for any given moment in time is a no-brainer, but relating that to temperatures at other times or places in a way that gives a meaningful comparison is fraught with difficulties.

Firstly, temperature varies throughout the day and night, so the time at which the reading is taken has a profound effect. Then again, cloud cover and other weather effects can change the reading even more rapidly. Ideally, as climate researchers, we'd like to have a minute-by-minute account for all locations, but that would be a colossal amount of data, and too hard to manage.

In practice, the weather stations submit their data in the form of monthly averages. So, already we're seeing that we don't simply have thermometer readings; we have to rely on someone else doing some number-crunching correctly. For the moment, though, let's assume the averages are correctly calculated.


This still leaves us with a couple of problems. Although the monthly averages have eliminated day-to-night variations, in the climate league table all months are not equal. For the northern hemisphere, July and August are bound to be hotter than December or January. Many stations don't have a complete historical record, some months being missing. The issue here is that if the missing months are predominantly cold months, this would create an apparent warming of the yearly average. If hot months, a cooling. Since there are operational reasons why foul weather readings are more likely to be missing than fair weather, there is the likelihood of a data skew here.

A second and less obvious issue is that some stations were established long before others. Say there are two stations in the same region: the first, in a valley, set up in 1880, and a second, on a mountain, set up in 1920. Because the mountain is cooler than the valley, adding the new station to the average creates an apparent cooling of the post-1920 figures.
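To put some purely illustrative numbers on that: if the valley station averages, say, 15 degrees and the mountain station 10 degrees, then the pre-1920 regional figure is 15 degrees, while the post-1920 figure, being the mean of the two, is 12.5 degrees. That's an apparent drop of 2.5 degrees with no change in the actual climate at all.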

Consequently, climate researchers see the need to further adjust the regional monthly averages submitted to them, to allow for these effects. Part of this adjustment process is the insertion of estimated values for missing data. I'm not familiar with the exact process by which these adjustments are arrived at, but I'm told it relies on comparisons with nearby station data.

In principle these adjustments are indeed necessary to gain an accurate picture of what long-term trends exist. However, they also introduce a 'joker in the pack', in that no-one outside of the climate labs knows exactly what they entail, and therefore no-one knows if the changes made are done in the spirit of science, or in the name of promoting a political agenda. THAT is the problem. By this stage in the process, we are no longer talking about data which can be traced back to a verifiable source. What we have is more in the nature of an opinion.

Added to this, the monthly data still contains a high level of randomness. The random fluctuations exceed the size of the measured warming trend by several times over. In science terms, this is known as trying to take a measurement which is below the noise floor of the system in which it exists. Measurements taken from below the noise floor in any system are typically regarded as dubious at best, and only to be resorted to if no better measurements are available.

Unsmoothed raw data.

In order to extract the trend from this extremely noisy data, smoothing of some kind has to be applied. There are a number of mathematical smoothing formulae to choose from, and the results will depend on which is chosen. With slightly noisy data the various formulae would yield mostly similar results, but with extremely noisy data that might not be the case. So again, the final graph we see is as much a product of opinion as to which smoothing formula should be applied, as it is a product of actual thermometer readings.

In a highly politicised area of science such as climate change, any step where opinions matter as much as raw data, is bound to be at risk of confirmation bias, whereby the researcher (perhaps subconsciously) selects the processing method that gives the results which he or she would like to see.

OK. So, we've laid down the ground rules for what we're trying to achieve. Let's take a look at some actual data. The USHCN land temperature data for the United States is available to the public in both raw and adjusted form, so it's a good starting point. As supplied, the data is in a rather unusual tabular format which does not lend itself to direct import into a spreadsheet or the like, so it proved more effective to write a program for the purpose. I prefer the AutoIt language for this kind of manipulation, but this time I've used PHP as it's a more widely-accepted standard.
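To give a flavour of the parsing step, here is a minimal sketch rather than the actual program linked at the end of this article. It assumes a GHCN-style fixed-width record layout (an 11-character station ID, a 4-digit year, then twelve value-plus-flag groups, with values in hundredths of a degree and -9999 marking a missing month); the exact field positions and the filename are assumptions and would need checking against the USHCN documentation.

<?php
// Minimal parsing sketch (not the actual analysis program).
// Assumed record layout: 11-char station ID, 4-digit year, then twelve
// 9-character groups of monthly value (5 chars, hundredths of a degree,
// -9999 = missing) plus three single-character flags.
// The offsets below are assumptions; check them against the USHCN readme.
function parse_ushcn_line($line)
{
    $record = array(
        'station' => substr($line, 0, 11),
        'year'    => (int) substr($line, 12, 4),
        'months'  => array(),
    );
    for ($m = 0; $m < 12; $m++) {
        $offset = 16 + $m * 9;                    // start of this month's group
        $value  = (int) substr($line, $offset, 5);
        $flag   = substr($line, $offset + 5, 1);  // data-measurement flag
        $record['months'][$m] = array(
            'temp' => ($value == -9999) ? null : $value / 100.0,
            'flag' => $flag,
        );
    }
    return $record;
}

// Read the whole file into an array of station-year records.
$records = array();
foreach (file('ushcn_raw.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
    $records[] = parse_ushcn_line($line);
}
?>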

The way I've processed the data here is to take a given station's data and work out an average temperature for each year for which there is data. Working in complete years should mostly eliminate the seasonal cycle from the results. I've then summed the year temperatures from all stations, and divided by the number of records available.

In the case of raw data, a missing month simply reduces the divisor by one, so the average of (for example) 11 available months is the sum of those 11 months divided by 11. This doesn't of course compensate for any winter/summer bias caused by the missing data, but then the objective here is to avoid any adjustments.
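As a sketch of that step (again, not the actual program, and building on the hypothetical record structure from the parsing snippet above), the station-year average divides by however many months are present, and the nationwide figure for each year is then the simple mean of all the contributing station-years:

<?php
// Sketch: per-station yearly means, then a straight average across stations.
// $records is assumed to be the array built by the parsing sketch above.
$yearSums   = array();   // year => sum of station yearly means
$yearCounts = array();   // year => number of station-years contributing

foreach ($records as $rec) {
    $sum = 0.0;
    $n   = 0;
    foreach ($rec['months'] as $month) {
        if ($month['temp'] !== null) {    // skip missing months entirely
            $sum += $month['temp'];
            $n++;
        }
    }
    if ($n == 0) {
        continue;                         // no usable data for this station-year
    }
    $stationYearMean = $sum / $n;         // e.g. 11 months present => sum / 11

    $year = $rec['year'];
    if (!isset($yearSums[$year])) {
        $yearSums[$year]   = 0.0;
        $yearCounts[$year] = 0;
    }
    $yearSums[$year] += $stationYearMean;
    $yearCounts[$year]++;
}

ksort($yearSums);
$national = array();                      // year => USA-wide yearly average
foreach ($yearSums as $year => $sum) {
    $national[$year] = $sum / $yearCounts[$year];
}
?>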

For the adjusted data we take two series, one including all values, and one only including values which USHCN have NOT listed as either estimates or having questionable accuracy.
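For illustration, the filter can be as simple as the sketch below. It assumes the estimated values are marked by a letter (such as 'E') in the data-measurement flag column; the actual flag codes, and any quality-failure codes to exclude, would need to be confirmed from the USHCN documentation.

<?php
// Sketch: accept only months whose flag does not mark them as estimated
// or quality-suspect. The flag letters here are assumptions.
$excludedFlags = array('E');   // add any quality-failure codes as required

function is_accepted($month, $excludedFlags)
{
    return $month['temp'] !== null && !in_array($month['flag'], $excludedFlags);
}
?>

The yearly-averaging loop can then be run twice, once over all values and once over only the months that pass this test, giving the two adjusted series.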

In all cases the result is still very noisy, so we then apply a five-year rolling average. For the reasons given above I'd rather not do this, but the unsmoothed data would be much less clear. A close inspection of the unsmoothed data shows that it in any case contains the same features, so I think we can call this a satisfactory approach.
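For completeness, a centred five-year rolling average can be done along these lines (a sketch over the $national array from the earlier snippet; at the ends of the series, where fewer than five years are available, it simply averages whatever neighbouring years exist):

<?php
// Sketch: centred rolling average over the yearly series (default window 5).
// Assumes the series is keyed by year and already sorted.
function rolling_average($series, $window = 5)
{
    $years    = array_keys($series);
    $half     = (int) floor($window / 2);
    $smoothed = array();
    foreach ($years as $i => $year) {
        $sum = 0.0;
        $n   = 0;
        for ($j = $i - $half; $j <= $i + $half; $j++) {
            if (isset($years[$j])) {      // clip the window at either end
                $sum += $series[$years[$j]];
                $n++;
            }
        }
        $smoothed[$year] = $sum / $n;
    }
    return $smoothed;
}

$smoothedNational = rolling_average($national, 5);
?>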

I was kinda excited and a little apprehensive, anticipating what we might see when the raw and full adjusted datasets were fed into a graphing utility. Actually, I was fully expecting to find that the claims of data tampering were a hoax. However, I recognised the smoking gun nature of this particular piece of analysis, in that it has the potential to invalidate the entire climate change hypothesis, lock, stock and barrel.  (OK, that's enough of the gun metaphors. Mind you, this is about the USA, so maybe a few more are in order...)

A comparison of raw and adjusted datasets, USA-wide average.

So yes, my results DO match those of the earlier article alleging data tampering.

The point that immediately strikes you is that, with the exception of post-2000, all adjustments are in the direction of cooling. Which would imply that the older raw data always reads too hot, and the post-2000 data too cool. I suppose that could arise through the mechanisms I've listed, but it does seem incredibly convenient in that it creates a warming trend over the last Century which matches that predicted by climate alarmists. Too convenient.

Meanwhile the raw data, if taken as recorded, indicates that the hottest year occurred in the 1930s. It also indicates that the 2016 El Niño peak was less intense than an earlier peak around 2001. In short, if there was any 'global warming', its signal was completely absent from the actual recorded data, and only existed in the adjusted form of that data.

Comparing the adjusted data as published with the same data minus the estimated values, we see that the estimated values have a very significant effect on the published temperatures. Thus, there being no objective way to state what the estimated values should be, what we are dealing with here is more in the nature of an opinion voiced by climate scientists as to what temperatures ought to have been, than actual recorded facts.

The effect of the estimated values, compared to only adjusted real data.

Is this data acceptable proof of climate change by the standards of science, you ask? I would say, resoundingly, No. In order to turn it into proof, the climate scientists would have to explain why each and every adjustment was made.

Of course, this data only covers the USA, and at that, only land temperatures. Therefore it doesn't make for definitive proof that data 'adjustment' is behind all reports of climate change. In order to uncover the truth on that one we'd have to dig down a lot deeper, by analysing raw and adjusted data from multiple world regions.

In view of the importance of this to human society and the trillions of our money at stake, I really think we need to press for all data for the whole world to be made publicly available, and for an independent inquiry into the adjustment and estimation methods used. Only then can we get anywhere near to the truth of the matter. 

Public access to the data is important, because it makes it far more likely that any deliberately introduced bias will be detected. Therefore the temptation to introduce such a bias to satisfy a political agenda, is reduced.

Ah, but you may ask, do not the satellite observations indicate a warming trend, and surely they don't have missing months in the data to allow for 'estimation' exercises?

Well, yes, the satellite datasets do indicate a warming trend, with the post-2000 period being warmer than the latter part of the 20th Century, but a smaller trend than the surface data shows. Also, these aren't straight temperature readings either. One imagines the satellite pointing some hi-tech gadget at the surface, a temperature reading being taken, and this being radioed to the ground. Unfortunately, no. The complicating factors are different from those for land thermometers, but they nevertheless exist.

There are two main satellite temperature datasets: UAH from the University of Alabama in Huntsville, and RSS from Remote Sensing Systems in California. The methodology used by RSS is explained in a fair amount of detail here.

"Calculating TB from raw radiometer counts is a complex, multi-step process in which a number of effects must be accurately characterized and adjustments made to account for them.  These effects include radiometer non-linearity, imperfections in the calibration targets, emission from the primary antenna, and antenna pattern adjustments. "

I think it becomes clear that whilst satellite measurements are potentially more reliable, they are also not a definitive resource.

Bottom line is that measuring the Earth's overall temperature to a resolution of the fraction of a degree needed to detect climate change, whilst the temperature in any given location can vary by 20 to 40 times as much, is an exceedingly hard thing to do. Can it be done at all? Your call on that one.

Analysis program: ushcn-analysis-program.zip

Requires PHP 5.3 or later. Note that it will take about a minute to run the full analysis on a typical computer.