Weekly Climate and Energy News Roundup #405

The Week That Was: April 4, 2020, Brought to You by www.SEPP.org

By Ken Haapala, President, Science and Environmental Policy Project

Quote of the Week: “If I set forth a concrete proposal in all its particulars, I expose myself to a hundred criticisms on points not essential to the principle of the plan. If I go further in the use of figures for illustration, I am involved more and more in guesswork; and I run the risk of getting the reader bogged in details which may be inaccurate and could certainly be amended without injury to the main fabric.

“Yet if I restrict myself to generalities, I don’t give the reader enough to bite on; and am in fact shirking the issue, since the size, the order of magnitude, of the factors involved isn’t an irrelevant detail.” – John Maynard Keynes [H/t Kenneth Button in WSJ]

Number of the Week: 20% Loss

30 Years: March 30 marked the 30th anniversary of the publication of “Precise Monitoring of Global Temperature Trends from Satellites” by Roy Spencer and John Christy in Science Magazine. The abstract reads:

“Passive microwave radiometry from satellites provides more precise atmospheric temperature information than that obtained from the relatively sparse distribution of thermometers over the earth’s surface. Accurate global atmospheric temperature estimates are needed for detection of possible greenhouse warming, evaluation of computer models of climate change, and for understanding important factors in the climate system. Analysis of the first 10 years (1979 to 1988) of satellite measurements of lower atmospheric temperature changes reveals a monthly precision of 0.01°C, large temperature variability on time scales from weeks to several years, but no obvious trend for the 10-year period. The warmest years, in descending order, were 1987, 1988, 1983, and 1980. The years 1984, 1985, and 1986 were the coolest.” [Boldface added].

Initially, the satellite data showed a slight cooling trend in the atmosphere. Very slight air drag, even at the altitude of the satellites, changed the orbits of the satellites so that the field of view of the on-board microwave sounding units changed, thereby causing the readings to show a spurious cooling trend. When this problem was called to their attention, Spencer and Christy publicly acknowledged these errors and corrected them. The data date back to December 1978. Now we have over 40 years of atmospheric temperature trends, which are published monthly. These are independently verified by temperature trends measured by different instruments on weather balloons. These are the finest global temperature trends in existence. Nothing else comes close. As a further check on the measurements, there is now a satellite with jets to assure that it remains in the same orbit. The process is a classic example of using the scientific method to correct errors.

A goal in measurement is to measure as directly and comprehensively as possible. Use of proxies, such as surface-air measurements, is less desirable because other events may interfere with the measurements, such as changes in land use, particularly urbanization. Indeed, even accurate atmospheric temperature trends may contain spurious trends due to such things as El Niños (causing warming) and volcanoes (emitting gases and particles that cause cooling). Thus, the trend must be adjusted for these external events unrelated to the human-caused increase in atmospheric carbon dioxide.

Christy and his colleague Richard McNider did so in 1994 and again in 2017 when testing global climate models against observations. Their observations, corrected for ENSO and volcanism, show a warming of about 0.09ºC per decade. This is about one-fourth of the warming shown by US models. [They did not adjust for changing solar influence or possible changes in the Pacific Decadal Oscillation.]

These estimates are close to those of William van Wijngaarden and his colleagues, who, using a different approach and different data, estimated that doubling the greenhouse gases in the troposphere (CO2, methane, N2O) and increasing water vapor by about 6% will result in a temperature increase of about 1 to 1.5 ºC. The Christy/McNider analysis does not adjust for the predicted and experimentally observed effect that the influence of each incremental increase of greenhouse gases on temperatures declines dramatically as greenhouse gases in the atmosphere increase.

In his essay, Spencer discusses some of the background and support he and Christy received in developing their analysis using satellite data and assuring proper calibration. Also, he discusses how certain individuals and groups tried to suppress publicizing the findings, such as preventing a visit to the White House during the George H. W. Bush Administration. Spencer writes:

“As the years went by, we would learn that the lack of substantial warming in the satellite data was probably hurting NASA’s selling of ‘Mission to Planet Earth’ to Congress. The bigger the perceived problem, the more money a government agency can extract from Congress to research the problem. At one point a NASA HQ manager would end up yelling at us in frustration for hurting the effort.”

“In 2001, after tiring of being told by NASA management what I could and could not say in congressional testimony, I resigned NASA and continued my NASA projects as a UAH employee in the Earth System Science Center, for which John Christy serves as director (as well as the Alabama State Climatologist).”

It seems that regardless of which political party is in the White House, NASA management wishes to filter information to promote a threat or a crisis in climate for which they need money to understand and solve. Of course, the ultimate “climate crisis” promoter today is the UN Intergovernmental Panel on Climate Change (IPCC). See links under Challenging the Orthodoxy and TWTW May 25, 2019 & June 22, 2019.


Testing Models Against Appropriate Solid Evidence: The current health issues presented by COVID-19 illustrate why using appropriate evidence and appropriate models is necessary for effective government policy. There are numerous models for infectious diseases, such as the one used by the Imperial College, discussed in last week’s TWTW. Others include the Murray Model used at the University of Washington’s Institute for Health Metrics and Evaluation.

The results of these models as to the outcomes of infectious diseases can be no better than the data, the evidence, fed into them. Until solid data are obtained on how quickly the disease can spread and how lethal it is, government policies to address the disease are being established with great uncertainty. For example, the flu of 1918, the so-called Spanish Flu, killed 17 to 50 million people. Many of them were young and healthy. COVID-19 appears to be particularly deadly for the elderly and those with other conditions such as cardiovascular disease, diabetes, hypertension, and other respiratory disease. Thus, models built for the 1918 flu are inappropriate for COVID-19.

Based on data compiled by worldometers (4/5/20), the deaths per million of population vary widely. For the USA it is 28, for Spain 266, for Italy 263, and for France 116. For Sweden, which has taken the “herd immunity” approach with no closure of businesses and social gatherings, the reported deaths per million are 40. With great uncertainties, these numbers do not indicate future trends. For example, Sweden’s policies may change.
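The per-capita figures above follow from a simple normalization. A minimal sketch in Python; the death counts and populations below are illustrative approximations for early April 2020, not the exact worldometers figures:

```python
def deaths_per_million(deaths: int, population: int) -> float:
    """Normalize a raw death count to deaths per million of population."""
    return deaths / population * 1_000_000

# Illustrative, approximate figures (assumptions, not the exact source data):
usa = deaths_per_million(9_300, 331_000_000)    # roughly 28 per million
sweden = deaths_per_million(410, 10_200_000)    # roughly 40 per million

print(round(usa), round(sweden))
```

The normalization matters because raw death counts cannot be compared across countries of very different size.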

The uncertainty of the effects of the disease brings up a dilemma for health care professionals who understand the limits of their science. How much should they say when making comments to the public? This is similar to the dilemma expressed by Keynes in the Quote of the Week.

For many parts of the world, the reporting standards are extremely poor. China, where the disease may have originated, reported 2 deaths per million. Of course, some may blame the leadership of China for misleading the world about the disease. Writing in his blog Manhattan Contrarian, Francis Menton brings up an important point. China is an authoritarian country. Those bearing bad news may be punished.

As described above on atmospheric temperature trends, could it be that those bearing good news of no dangerous global warming are punished and their research ignored? See links under Models v. Observations; Health, Energy, and Climate; Other News that May Be of Interest; Articles #1 and #2; and https://www.worldometers.info/coronavirus/#countries


ENSO: There have been some efforts to claim that the El Niño Southern Oscillation is being changed by increasing carbon dioxide. A reconstruction of Sea Surface Temperatures using corals questions such an assertion. There is nothing new about the current frequency of El Niños. The abstract of a new paper on ENSO by Lawman, et al. reads:

“Climate model simulations of El Niño‐Southern Oscillation (ENSO) behavior for the last millennium demonstrate interdecadal to centennial changes in ENSO variability that can arise purely from stochastic processes internal to the climate system. That said, the instrumental record of ENSO does not have the temporal coverage needed to capture the full range of natural ENSO variability observed in long, unforced climate model simulations. Here we demonstrate a probabilistic framework to quantify changes in ENSO variability via histograms and probability density functions using monthly instrumental and coral‐based sea‐surface temperature (SST) anomalies from 1900‐2005 CE and 1051‐1150 CE. We find that reconstructed SST anomalies from modern corals from the southwest Pacific capture changes in ENSO variability that are consistent with instrumental SST data from the central equatorial Pacific. Fossil coral records indicate one hundred years of relatively lower ENSO variability during part of the Medieval Climate Anomaly. Our results demonstrate that periods of reduced ENSO variability can last a century, far longer in duration than modern observations in the instrumental record of ENSO, but consistent with results from unforced climate model simulations.” [Boldface added.]

There is no logical reason to assume that the current frequency of El Niños is a new occurrence caused by increasing carbon dioxide. See links under Changing Climate.


Antarctic Forests: Traces of ancient forests have been discovered about 900 km (600 miles) from the South Pole. Of course, much is being made of this, and of the fact that atmospheric CO2 levels were higher during the mid-Cretaceous period, 115 to 80 million years ago, when the area was warmer, similar to New Zealand. The researchers used a model to estimate that the temperatures required a CO2 concentration of 1,120 to 1,630 parts per million (ppm), compared with today’s concentration of about 410 ppm. This is consistent with estimates of CO2 by others for the same period.

What the researchers failed to check were the geologic changes to West Antarctica, where the research was conducted. According to the web site of the British Geographical Society and the British Antarctic Survey, East and West Antarctica are geologically quite different. East Antarctica is an area of continental shield, with some rocks exceeding 3 billion years in age. From the web site:

“East and West Antarctica are quite distinct from each other in terms of their geology. East Antarctica is much larger and is an area of continental shield (or ‘craton’) composed of ancient igneous and metamorphic rocks, some exceeding 3 billion years in age. Overlying the ancient continental shield rock in various places are younger sedimentary rocks (e.g. sandstones, limestones, shales, and coal) which formed at different times under different environmental conditions. For example, coal beds exposed in the Transantarctic Mountains formed through the accumulation of plant matter during the Permian Period (290 to 245 myr ago) when the continent had a warm temperate climate. Over millions of years this organic material was buried; and by compaction under the weight of overlying sediment, it was eventually turned into coal.

“The geology of West Antarctica has much in common with the geology of the Andes. In fact, this side of Antarctica owes its origin to the same mountain building processes that uplifted the western side of South America. During the early Jurassic (around 200 myr ago) oceanic crust began to subduct beneath the Pacific margin of Gondwana. The resulting subduction zone extended along the margin of what is now South America and West Antarctica.”

As usual, those fascinated with their models to find climate change assume the earth is stable. It is not. See links under Changing Climate and Changing Earth.




SEPP is conducting its annual vote for the recipient of the coveted trophy, The Jackson, a lump of coal. Readers are asked to nominate and vote for who they think is most deserving, following these criteria:

· The nominee has advanced, or proposes to advance, significant expansion of governmental power, regulation, or control over the public or significant sections of the general economy.

· The nominee does so by declaring such measures are necessary to protect public health, welfare, or the environment.

· The nominee declares that physical science supports such measures.

· The physical science supporting the measures is flimsy at best, and possibly non-existent.

The eight past recipients, Lisa Jackson, Barack Obama, John Kerry, Ernest Moniz, John Holdren, Gina McCarthy, Jerry Brown, and Christiana Figueres, are not eligible. Generally, the committee that makes the selection prefers a candidate with a national or international presence. The voting will close on June 30. Please send your nominee and a brief reason why the person is qualified for the honor to Ken@SEPP.org. Thank you.


Number of the Week: 20% Loss: As the madness to build wind power increases in many states, there are numerous proposals on how to store electricity generated in excess in order to have it when needed. The only method successfully tested on a commercial scale is pumped-hydro storage, which was first used in Switzerland in 1907. According to the EIA:

“Pumped-storage currently accounts for 95% of all utility-scale energy storage in the United States.”

In a 2013 report, based upon reports submitted by utilities, the EIA estimated that:

“In 2011, pumped storage plants produced 23 billion kilowatthours (kWh) of gross generation—roughly as much as petroleum-fired generation in that year. Pumped storage plants, however, consumed 29 billion kilowatthours (kWh) of electricity in 2011 to refill their storage reservoirs, resulting in a net generation loss of 6 billion kWh.”

The net generation loss of 6 billion kWh works out to about 20% of the total electricity pumped storage plants use to refill their reservoirs.
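The 20% figure follows directly from the EIA numbers quoted above; a minimal arithmetic sketch:

```python
# EIA figures for 2011, in billion kWh, from the quoted report:
consumed = 29    # electricity used to pump water back uphill
generated = 23   # electricity produced when the water is discharged

loss = consumed - generated        # 6 billion kWh net generation loss
loss_fraction = loss / consumed    # about 0.207, i.e. roughly 20%
efficiency = generated / consumed  # round-trip efficiency, roughly 79%

print(f"loss: {loss} billion kWh ({loss_fraction:.1%} of electricity consumed)")
```

In other words, for every 100 kWh of reliable electricity used to refill the reservoirs, only about 79 kWh come back out.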

Two things are missing in this simple analysis: 1) elevation difference between the reservoir and discharge area (reservoir or open area) and 2) the need for reliable electricity to refill the reservoir. Apparently, the largest facility in the world is in Bath County, Virginia. The two reservoirs have an elevation difference of about 1,260 feet (380 meters) with significant fluctuations during operation.

The key is reliable refill. The Bath County system is refilled by electricity generated by coal and nuclear power. As TWTW has discussed in the past, there are few places in the world with sparsely populated areas where reservoirs large enough can be built to accommodate major utilities using wind power alone. See links under Alternative, Green (“Clean”) Energy – Storage.



1. When a Virus Spreads Exponentially

The key to stopping the Covid-19 pandemic lies in lowering the rate at which infections multiply.

By Eugenia Cheng, WSJ, Apr 2, 2020


The journalist writes:

“Fighting a pandemic like Covid-19 requires experts in many fields: epidemiologists who study the spread of disease, doctors who treat the sick, scientists who work on finding a vaccine. There is math involved in all of these specialties, but math can also help us to make sense of the barrage of information that we’re receiving daily.

“The starting point is the math of exponential growth. The word “exponential” is sometimes used informally to mean “really fast,” but mathematically it means something very specific: that a quantity is repeatedly multiplied by the same number. When a virus spreads, each infected person goes on to infect a certain number of other people, on average; this is called the reproduction number or R0. Then each newly infected person goes on to infect R0 people, again on average.

“Exponential growth is dangerous, because if each person infects more than one other person, the spread of disease quickly becomes overwhelming. Multiplying by 3, for instance, it only takes 21 steps to reach 10 billion, more than the current population of the world. We start with very low numbers that seem insignificant, but it’s not the absolute numbers that matter, it’s the rate at which they’re increasing, which also increases exponentially. Waiting until an infectious disease feels like a problem is too late to start addressing it.

“One important feature of exponential growth is that it’s not helpful to look at the number of new cases each day. Exponentials increase by multiplication, so it’s more relevant to look at the percentage increase each day. This is what “flattening the curve” is about: reducing the rate of multiplication. Eventually we need the rate to be less than one, so that each infected person infects fewer than one new person, producing exponential decay instead of growth.

“Contrary to optimistic hopes, there is no guarantee that the coronavirus will just peter out on its own—at least, not until so much of the population is already infected that there’s simply a lack of new people to infect. That is the worst-case scenario. The aim of intervention is to reduce R0 before that.”
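The arithmetic in the quoted passage is easy to verify; a minimal sketch, assuming a constant multiplication factor per step:

```python
def steps_to_exceed(factor: float, target: float, start: float = 1.0) -> int:
    """Count how many multiplications by `factor` it takes to pass `target`."""
    steps, value = 0, start
    while value <= target:
        value *= factor
        steps += 1
    return steps

# Multiplying by 3, it takes 21 steps to pass 10 billion, as the article states
# (3**21 = 10,460,353,203, while 3**20 is still under 10 billion):
print(steps_to_exceed(3, 10_000_000_000))  # → 21
```

A factor below 1 produces the exponential decay the article describes: each step shrinks the count instead of multiplying it.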

After a discussion of prudent actions, the journalist concludes:

“Math can’t accurately predict the future of the Covid-19 pandemic, partly because we don’t have accurate data about the true number of infections and partly because so much beyond math is involved. We can’t predict how human beings will behave, nor can we quantify how much difference that makes. There is a range of possible outcomes, and the one that we end up with is almost certain to be better than the projected worst-case scenario—that’s the whole point of a worst-case scenario. How much better depends in part on our behavior, and our behavior should take the math of exponentials into account.”


2. Coronavirus Models of Uncertainty

The range of outcomes is large and the worst case far from likely.

Editorial, WSJ, Mar 30, 2020


The editorial states:

“President Trump on Sunday said he was persuaded by coronavirus pandemic models to extend national social distancing guidelines through the end of April. The White House plans to lay out the data and assumptions behind the decision on Tuesday, and we look forward to that. Meanwhile, we thought readers might appreciate a dive into one of the more prominent Covid-19 models to look at the wide range of outcomes—and how much uncertainty there continues to be.”

“White House coronavirus coordinator Deborah Birx said its assessment of how the pandemic would unfold closely mirrors the University of Washington’s Institute for Health Metrics and Evaluation, the so-called Murray model. That group last week estimated 81,114 deaths over the next four months, with 95% confidence that the number would be between 38,242 and 162,106.”

[SEPP Comment: How many readers of the WSJ understand what a 95% confidence means or how it refers to 2 sigmas on a Gaussian distribution curve, which may not apply?]
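For readers checking the "2 sigmas" shorthand: for a Gaussian (normal) distribution, about 95% of the probability mass lies within roughly 1.96 standard deviations of the mean, which can be verified with the error function. A minimal sketch; the observation about the Murray interval is this reviewer's arithmetic, not the WSJ's:

```python
import math

def coverage(z: float) -> float:
    """Probability that a standard normal variable falls within ±z sigma."""
    return math.erf(z / math.sqrt(2))

print(f"{coverage(1.96):.4f}")  # ≈ 0.9500, the conventional 95% interval

# Note the quoted Murray interval (38,242 to 162,106 around 81,114) is
# asymmetric about its central estimate, consistent with the SEPP comment
# that a symmetric Gaussian curve may not apply.
```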

“That’s a terrible human toll and would be about three times more fatal than the average seasonal flu. But the good news is this fatality forecast is much lower than the 2.2 million that the President suggested as a worst case. Estimates could still shift significantly, and the Murray group plans to update its model as more data flow in from the states and other countries.

“Importantly, the Murray model measures deaths in terms of population rather than confirmed cases since testing varies geographically. It also extrapolates U.S. fatalities based on evidence from other hot spots and Wuhan in China after government lockdowns. One important data point: It took 27 days after strict social distancing was implemented in Wuhan before daily deaths peaked. New York, California and other states that took early action to close non-essential businesses are merely starting week three.

“Data out of China may not be reliable, and the Murray study underlines that “modeling for US states based on one completed epidemic, at least for the first wave, and many incomplete epidemics is intrinsically challenging.” This is why the estimates are likely to change in the coming days and weeks.”

After presenting some projections from the Murray model, the editorial concludes:

“Statistical models are a crucial but not the only factor that political leaders should heed as they make decisions in the public interest. And none of these estimates assumes progress from new treatments that could reduce deaths and hospitalizations. Widespread testing—especially for antibodies for those who developed immunity, as Germany is planning to do—will also be valuable in planning an exit from the sledgehammer of our national lockdown.

“April is going to be a brutal month for America, and the next two weeks especially. But as the bad news arrives, it’s important to understand that the worst-case scenarios that many in the media trumpet are far from a certain fate.”

