Temperature Tampering Temper Tantrums

By Christopher Monckton of Brenchley – Re-Blogged From WUWT

Commenters on my recent threads explaining the gaping error my team has found in official climatology’s definition of “temperature feedback” have asked whether I will update my series pointing out the discrepancy between the overblown predictions in IPCC’s First Assessment Report of 1990, on which the climate scam was based, and the far less exciting reality. That series also revealed some of the dodgy tricks used by the keepers of the principal global-temperature datasets to make global warming look worse than they had originally reported.

I used to use the RSS satellite dataset as my chief source, because it was the first to publish its monthly data. However, in November 2015, when that dataset had shown no global warming for 18 years 9 months, Senator Ted Cruz displayed our graph of RSS data demonstrating the length of the Pause during a U.S. Senate hearing and visibly discomfited the “Democrats”, who wheeled out an Admiral, no less, to try – unsuccessfully – to rebut it. I predicted in this column that Carl Mears, the keeper of that dataset, would in due course copy all three of the longest-standing terrestrial datasets – GISS, NOAA and HadCRUT4 – in revising his dataset in a fashion calculated to eradicate the long Pause by showing a great deal more global warming in recent decades than the original, published data had shown.


[Fig 1.] The least-squares linear-regression trend on the pre-revision RSS satellite monthly global mean lower-troposphere temperature anomaly dataset showed no global warming for 18 years 9 months from February 1997 to October 2015, though one-third of all anthropogenic forcings had occurred during the period of the Pause. Ted Cruz baited Senate “Democrats” with this graph in November 2015.
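The trend statistic quoted throughout this series is an ordinary least-squares linear regression on the monthly anomalies, with the slope expressed as a centennial-equivalent rate. A minimal sketch of that calculation follows; the anomaly series used is synthetic, purely for illustration, and is not RSS data.

```python
# A minimal sketch of the trend statistic used throughout this series:
# an ordinary least-squares (OLS) linear regression fitted to monthly
# anomalies, with the slope converted to a centennial-equivalent rate.
# The series below is synthetic, for illustration only - not RSS data.

def ols_trend_per_century(anomalies_k):
    """Fit y = a + b*x by least squares to monthly anomalies (in K)
    and return the slope b converted from K/month to K/century."""
    n = len(anomalies_k)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(anomalies_k) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, anomalies_k))
    var = sum((x - mean_x) ** 2 for x in xs)
    return (cov / var) * 12 * 100  # K/month -> K/century

# A flat, noisy series spanning 18 years 9 months (225 months):
flat = [0.2 + 0.05 * (-1) ** i for i in range(225)]
print(round(abs(ols_trend_per_century(flat)), 3))  # trend of about 0 K/century

# A series rising by 0.01 K/month shows 12 K/century, as expected:
print(round(ols_trend_per_century([0.01 * i for i in range(120)]), 3))
```

The centennial-equivalent form simply multiplies the monthly slope by 1,200, which is why short, noise-dominated periods can produce eye-catching rates in either direction.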

Sure enough, the very next month Dr Mears (who uses the RSS website as a bully-pulpit to describe global-warming skeptics as “denialists”) brought his dataset kicking and screaming into the Adjustocene by duly tampering with the RSS dataset to airbrush out the Pause. He had no doubt been pestered by his fellow climate extremists to do something to stop the skeptics pointing out the striking absence of any global warming whatsoever during a period when one-third of Man’s influence on climate had arisen. And lo, the Pause was gone –


[Fig 2.] Welcome to the Adjustocene: RSS adds 1 K/century to what had been the Pause

As things turned out, Dr sMear need not have bothered to wipe out the Pause. A large el Niño did that anyway. However, an interesting analysis by Professor Fritz Vahrenholt and Dr Sebastian Lüning (at diekaltesonne.de/schwerer-klimadopingverdacht-gegen-rss-satellitentemperaturen-nachtraglich-um-anderthalb-grad-angehoben) concludes that his dataset, having been thus tampered with, can no longer be considered reliable. The analysis sheds light on how the RSS dataset was massaged. The two scientists conclude that the ex-post-facto post-processing of the satellite data by RSS was insufficiently justified –


[Fig 3.] RSS monthly global mean lower-troposphere temperature anomalies, January 1979 to June 2018. The untampered version is in red; the tampered version is in blue. Thick spline-curves represent the simple 37-month moving averages. Graph by Professor Ole Humlum from his fine website at www.climate4you.com.

RSS racked up the previously-measured temperatures from 2000 onward, increasing the overall warming since 1979 by 0.15 K, or about a quarter, from 0.62 K to its present 0.77 K –


[Fig 4.]

You couldn’t make it up – but Lüning and Vahrenholt find that RSS did.

The year before the RSS data were Mannipulated, RSS had begun to take a serious interest in the length of the Pause. Dr Mears discussed it in his blog at remss.com/blog/recent-slowing-rise-global-temperatures. His then results are summarized below –


[Fig 5.]  (Orig Figure T1) Output of 33 IPCC models (turquoise) compared with measured RSS global temperature change (black), 1979-2014.

Dr Mears had a temperature tantrum and wrote:

“The denialists like to assume that the cause for the model/observation discrepancy is some kind of problem with the fundamental model physics, and they pooh-pooh any other sort of explanation.  This leads them to conclude, very likely erroneously, that the long-term sensitivity of the climate is much less than is currently thought.”

Dr Mears conceded the growing discrepancy between the RSS data and the models, but he alleged we had “cherry-picked” the start-date for the global-temperature graph:

“Recently, a number of articles in the mainstream press have pointed out that there appears to have been little or no change in globally averaged temperature over the last two decades.  Because of this, we are getting a lot of questions along the lines of ‘I saw this plot on a denialist web site.  Is this really your data?’  While some of these reports have ‘cherry-picked’ their end points to make their evidence seem even stronger, there is not much doubt that the rate of warming since the late 1990s is less than that predicted by most of the IPCC AR5 simulations of historical climate.  … The denialists really like to fit trends starting in 1997, so that the huge 1997-98 ENSO event is at the start of their time series, resulting in a linear fit with the smallest possible slope.”

In fact, the spike caused by the el Niño of 1998 was almost entirely offset by two factors: the not dissimilar spike of the 2010 el Niño, and the sheer length of the Pause itself.


[Fig 6.] Graphs by Werner Brozek and Professor Brown for RSS and GISS temperatures starting both in 1997 and in 2000. For each dataset the trend-lines are near-identical. Thus, the notion that the Pause was caused by the 1998 el Niño is false.

The above graph shows that the trends on the pre-tampering RSS dataset and on the GISS dataset were near-identical whether the start-date fell before the 1998 el Niño (in 1997) or after it (in 2000), demonstrating that the length of the Pause was enough to nullify the spike’s imagined influence.

It is worth comparing the warming since 1990, taken as the mean of the four Adjustocene datasets (RSS, GISS, NCEI and HadCRUT4: first graph below), with the UAH dataset that Lüning and Vahrenholt commend as reliable (second graph below) –


[Fig 7.] Mean of the RSS, GISS, NCEI and HadCRUT4 monthly global mean surface or lower-troposphere temperature anomalies, January 1990 to June 2018 (dark blue spline-curve), with the least-squares linear-regression trend on the mean (bright blue line), compared with the lesser of two IPCC medium-term prediction intervals (orange zone).


[Fig 8.] UAH lower-troposphere anomalies and trend for January 1990 to June 2018

It will be seen that the warming trend in the Adjustocene datasets is almost 50% greater over the period than that in the UAH dataset that Lüning and Vahrenholt find more reliable.

After the adjustments, the RSS dataset since 1990 now shows more warming than any other dataset, even the much-tampered-with GISS dataset –


[Fig 9.]  Centennial-equivalent global warming rates for January 1990 to June 2018. IPCC’s two mid-range medium-term business-as-usual predictions and our revised prediction based on correcting climatology’s error in defining temperature feedback (white lettering) are compared with observed centennial-equivalent rates (blue lettering) from the five longest-standing datasets.

Note that RSS’ warming rate since 1990 is close to double that from UAH, which had revised its global warming rate downward two or three years ago. Yet the two datasets rely upon precisely the same satellite data. The difference of almost 1 K/century in the centennial-equivalent warming rate shows just how heavily dependent the temperature datasets have become on subjective adjustment rather than objective measurement.

Should we cynically assume that these adjustments – up for RSS, GISS, NCEI and HadCRUT4, and down for UAH – reflect the political prejudices of the keepers of the datasets? Lüning and Vahrenholt can find no rational justification for the large and sudden alteration to the RSS dataset so soon after Ted Cruz had used our RSS graph of the Pause in a Senate hearing. However, they do not find the UAH data to have been incorrectly adjusted. They commend UAH as sound.

The “MofB” hindcast is based on two facts: first, that we calculate Charney sensitivity to be just 1.17 K per CO2 doubling; and second, that in many models the predicted equilibrium warming from doubled CO2 concentration, the “Charney sensitivity”, is approximately equal to the predicted transient warming from all anthropogenic sources over the 21st century. This is, therefore, a rather rough-and-ready prediction: but it is more consistent with the UAH dataset than with the questionable Adjustocene datasets.

The extent of the tampering in some datasets is enormous, as another splendidly revealing graph from the tireless Professor Humlum shows. He publishes a vast range of charts on global warming in his publicly-available monthly report at climate4you.com –


[Fig 10.] Mann-made global warming: how GISS boosted apparent warming by more than half.

GISS, whose dataset is now so politicized as to render it valueless, sMeared the data over a period of less than eight years, from March 2010 to December 2017, so greatly as to increase the apparent warming rate over the 20th century by just over half. The largest change came in March 2013, by which time my monthly columns here on the then long-running Pause had already become a standing embarrassment to official climatology. Only the previous month, the now-disgraced head of the IPCC, railroad engineer Pachauri, had been one of the first spokesmen for official climatology to admit that the Pause existed. He had done so during a speech in Melbourne that was reported by just one newspaper, The Australian, which has long been conspicuous for its willingness faithfully to reflect both sides of the climate debate.

What is fascinating is that, even after the gross data tamperings towards the end of the Pause by four of the five longest-standing datasets, and even though the trend on all datasets is also somewhat elevated by the large el Niño of a couple of years ago, IPCC’s original predictions from 1990, the predictions that got the scare going, remain egregiously excessive.

Even IPCC itself has realized how absurd its original predictions were. In its 2013 Fifth Assessment Report, it abandoned its reliance on models for the first time, substituted what it described as its “expert judgment” for their overheated outputs, and all but halved its medium-term prediction. Inconsistently, however, it carefully left its equilibrium prediction – 1.5 to 4.5 K warming per CO2 doubling – shamefully unaltered.

IPCC’s numerous unthinking apologists in the Marxstream media have developed a Party Line to explain away the abject predictive failure of IPCC’s 1990 First Assessment Report and even to try to maintain, entirely falsely, that “It’s worser than what we ever, ever thunk”.

One of their commonest excuses, trotted out with the glazed expression, the monotonous delivery and the zombie-like demeanor of the incurably brainwashed, is that thanks to the UN Framework Convention on Global Government Climate Change the reduction in global CO2 emissions has been so impressive that emissions are now well below the “business-as-usual” scenario A in IPCC (1990) and much closer to the less extremist scenario B.

Um, no. Even though official climatology’s CO2 emissions record is being hauled into the Adjustocene, in that it is now being pretended that – per impossibile – global CO2 emissions are unchanged over the past five years, the most recent annual report on CO2 emissions shows them as near-coincident with the “business-as-usual” scenario in IPCC (1990) –


[Fig 11.] Global CO2 emissions are tracking IPCC’s business-as-usual scenario A

When that mendacious pretext failed, the Party developed an interesting fall-back line to the effect that, even though emissions are not, after all, following IPCC’s Scenario B, the consequent radiative forcings are a lot less than IPCC (1990) had predicted. And so they are. However, what the Party Line is very careful not to reveal is why this is the case.

The Party realized that its estimates of the cumulative net anthropogenic radiative forcing from all sources were high enough in relation to observed warming to suggest a far lower equilibrium sensitivity to radiative forcing than originally decreed. Accordingly, by the Third Assessment Report IPCC had duly reflected the adjusted Party Line by waving its magic wand and artificially and very substantially reducing the net anthropogenic forcing by introducing what Professor Lindzen has bluntly called “the aerosol fudge-factor”. The baneful influence of this fudge-factor can be seen in IPCC’s Fifth Assessment Report –


[Fig 12.] Fudge, mudge, kludge: the aerosol fudge-factor greatly reduces the manmade radiative forcing and falsely boosts climate sensitivity (IPCC 2013, fig. SPM.5).

IPCC’s list of radiative forcings compared with the pre-industrial era shows 2.29 Watts per square meter of total anthropogenic radiative forcing relative to 1750. However, this total would have been considerably higher without the two aerosol fudge-factors, totaling 0.82 Watts per square meter. If two-thirds of this total is added back, as it should be, for anthropogenic aerosols are as nothing to such natural aerosols as the Saharan winds that can dump sand as far north as Scotland, the net anthropogenic forcing becomes 2.85 Watts per square meter. Here is how that makes a difference to apparent climate sensitivity –


[Fig 13.] How the aerosol fudge-factor artificially hikes the system-gain factor A.

In the left-hand panel, the reference sensitivity (the anthropogenic temperature change between 1850 and 2011 before accounting for feedback) is the product of the Planck parameter 0.3 Kelvin per Watt per square meter and IPCC’s 2.29 W m⁻² mid-range estimate of the net anthropogenic radiative forcing in the industrial era to 2011: i.e., 0.68 K.

Equilibrium sensitivity is a little more complex, because official climatology likes to imagine (probably without much justification) that not all anthropogenic warming has yet occurred. Therefore, we have allowed for the mid-range estimate in Smith (2015) of the 0.6 W m⁻² net radiative imbalance to 2009, scaling the measured warming of 0.75 K over 1850-2011 by the factor 2.29 / (2.29 – 0.6) to give an equilibrium warming of 1.02 K.

The system-gain factor, using the delta-value form of the system-gain equation that is at present universal in official climatology, is the ratio of equilibrium to reference sensitivity: i.e. 1.5. Since reference sensitivity to doubled CO2, derived from CMIP5 models’ data in Andrews (2012), is 1.04 K, Charney sensitivity is 1.5 x 1.04 or 1.55 K.

In the right-hand panel, just over two-thirds of the 0.82 W m⁻² aerosol fudge-factor has been added back into the net anthropogenic forcing, making it 2.85 W m⁻². Why add it back? Well, without giving away too many secrets, official climatology has begun to realize that the aerosol fudge factor is very much too large. It is so unrealistic that it casts doubt upon the credibility of the rest of the table of forcings in IPCC (2013, fig. SPM.5). Expect significant change by the time of the next IPCC Assessment Report in about 2020.

Using the corrected value of net anthropogenic forcing, the system-gain factor falls to 1.13, implying Charney sensitivity of 1.13 x 1.04, or 1.17 K.
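The arithmetic of the two panels can be checked in a few lines. The sketch below assumes only the figures quoted above (the 0.3 K per W m⁻² Planck parameter, the 0.6 W m⁻² imbalance, the 0.75 K measured warming, the 1.04 K reference sensitivity to doubled CO2, and the two forcing values); it reproduces the quoted system-gain factors and Charney sensitivities to within about 0.01-0.02, the small differences arising from rounding of intermediate values.

```python
# A check of the delta-value system-gain arithmetic in the two panels,
# using only the figures quoted in the text. Small differences from the
# article's rounded values (of order 0.01-0.02) arise from intermediate
# rounding in the text.

PLANCK = 0.3        # Planck parameter, K per W/m^2
IMBALANCE = 0.6     # net radiative imbalance to 2009, W/m^2 (Smith 2015)
OBSERVED_DT = 0.75  # measured warming 1850-2011, K
REF_2XCO2 = 1.04    # reference sensitivity to doubled CO2, K (Andrews 2012)

def charney_sensitivity(forcing):
    """Delta-value form: system-gain factor = equilibrium sensitivity /
    reference sensitivity; Charney sensitivity = gain * reference 2xCO2."""
    reference = PLANCK * forcing
    # Scale the measured warming up to equilibrium to allow for the
    # unrealized warming implied by the radiative imbalance:
    equilibrium = OBSERVED_DT * forcing / (forcing - IMBALANCE)
    gain = equilibrium / reference
    return gain, gain * REF_2XCO2

# Left-hand panel: IPCC's 2.29 W/m^2 net anthropogenic forcing.
print(charney_sensitivity(2.29))                 # gain ~1.48, Charney ~1.54 K
# Right-hand panel: ~2/3 of the 0.82 W/m^2 fudge-factor added back.
print(charney_sensitivity(2.29 + 2 / 3 * 0.82))  # gain ~1.12, Charney ~1.16 K
```

Note that the Planck parameter cancels out of neither term: the gain depends on the forcing only through the ratio of equilibrium to reference sensitivity, which is why the aerosol correction lowers it.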

Let us double-check the position using the absolute-value equation that is currently ruled out by official climatology’s erroneously restrictive definition of “temperature feedback” –


[Fig 14.] The system-gain factor for 2011: (left) without and (right) with fudge-factor correction

Here, an important advantage of using the absolute-value system-gain equation ruled out by official climatology’s defective definition becomes evident. Changes in the delta values cause large changes in the system-gain factor derived using climatology’s delta-value system-gain equation, but very little change when it is derived using the absolute-value equation. Indeed, using the absolute-value equation the system-gain factors for 1850 and for 2011 are just about identical at 1.13, indicating that under modern conditions non-linearities in feedbacks have very little impact on the system-gain factor.
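That insensitivity is easy to demonstrate numerically. In the sketch below the two 1850 absolute temperatures are assumptions chosen purely for illustration: an observed (equilibrium) surface temperature of 287.5 K and a pre-feedback reference temperature of 254.4 K, picked so as to be consistent with the 1.13 system-gain factor quoted above rather than taken from any dataset.

```python
# Numerical illustration of the point above: a system-gain factor taken
# as a ratio of whole (absolute) temperatures barely moves when the small
# industrial-era increments change, whereas the delta-value ratio swings.
# The 1850 absolute values below are assumptions for illustration only,
# chosen to be consistent with the article's 1.13 system-gain factor.

EQ_1850 = 287.5   # assumed equilibrium (observed) temperature in 1850, K
REF_1850 = 254.4  # assumed pre-feedback reference temperature in 1850, K

def absolute_gain(d_eq=0.0, d_ref=0.0):
    """Absolute-value form: whole equilibrium over whole reference
    temperature, with industrial-era increments added on top."""
    return (EQ_1850 + d_eq) / (REF_1850 + d_ref)

def delta_gain(d_eq, d_ref):
    """Delta-value form: ratio of the increments alone."""
    return d_eq / d_ref

# 1850 (no increments) vs 2011 (increments from the left-hand panel):
print(round(absolute_gain(), 3), round(absolute_gain(1.02, 0.68), 3))  # both ~1.13
# The delta form swings far more for modest changes in the increments:
print(round(delta_gain(1.02, 0.68), 2), round(delta_gain(1.02, 0.855), 2))  # 1.5 vs 1.19
```

The second reference increment, 0.855 K, is the corrected reference sensitivity from the right-hand panel (0.3 × 2.85): swapping it in moves the delta-value gain from 1.5 to about 1.19, while the absolute-value gain changes only in the third decimal place.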

Bottom line: No amount of temperature-tampering tantrums will alter the fact that, whether one uses the delta-value equation (Charney sensitivity 1.55 K) or the absolute-value equation (Charney sensitivity 1.17 K), the system-gain factor is small and, therefore, so are equilibrium temperatures.

Finally, let us enjoy another look at Josh’s excellent cartoon on the Adjustocene –


[Fig 15.]

