Weekly Climate and Energy News Roundup #350

The Week That Was: March 2, 2019, Brought to You by www.SEPP.org

By Ken Haapala, President, Science and Environmental Policy Project

Quote of the Week: “No government has the right to decide on the truth of scientific principles, nor to prescribe in any way the character of the questions investigated. Neither may a government determine the aesthetic value of artistic creations, nor limit the forms of literacy or artistic expression. Nor should it pronounce on the validity of economic, historic, religious, or philosophical doctrines. Instead it has a duty to its citizens to maintain the freedom, to let those citizens contribute to the further adventure and the development of the human race.” – Richard Feynman, “The Meaning of It All: Thoughts of a Citizen Scientist”.

Number of the Week: 99.99997% Certainty

It’s Not Real, It’s Puccini: Last week’s TWTW discussed that in order to fully enjoy certain types of art, such as opera and some movies, members of the audience must suspend reality. Similarly, to believe certain claims by climate scientists, one must suspend reality – including knowledge of nature. As if on cue, the Nature publishing group came out with two papers that require suspending reality and knowledge of nature.

One paper, “Celebrating the anniversary of three key events in climate change science,” published in Nature Climate Change, claims the authors discovered a distinct human fingerprint with extremely high precision. The principal author was Benjamin Santer, with many co-authors, including two from Remote Sensing Systems (RSS), which has incorporated surface temperature measurements into its calculations of atmospheric temperature trends, creating a vague, sloppy product by adding noise. In 1995, Santer claimed that a pronounced warming trend over the tropics was a distinct human fingerprint – a warming trend that has yet to be found. The new Santer paper is discussed in this section of TWTW.

The second paper, “Possible climate transitions from breakup of stratocumulus decks under greenhouse warming,” was published in Nature Geoscience. It will be discussed in a section below.

The abstract of the Santer paper states:

“Climate science celebrates three 40th anniversaries in 2019: the release of the Charney report, the publication of a key paper on anthropogenic signal detection, and the start of satellite temperature measurements. This confluence of scientific understanding and data led to the identification of human fingerprints in atmospheric temperature.”

As readers of TWTW realize, 40 years of comprehensive atmospheric temperature trends from satellites are reason to celebrate. But contrary to claims in the Santer paper, they do not support the speculations in the 1979 Charney Report that increases in water vapor will greatly amplify the modest warming from carbon dioxide (CO2) demonstrated in laboratory experiments. Further, these atmospheric temperature trends do not support “the identification of human fingerprints in atmospheric temperature.” Perhaps the authors are using a debating technique described by Schopenhauer for baffling one’s opponent and the audience: when you have clearly lost the argument, suddenly declare that your opponent’s view is the one you have been advocating all along, and claim victory!

The Santer paper was immediately rebuked on three levels: 1) theoretical physics, by string theorist Luboš Motl on his blog; 2) statistics, by Ross McKitrick on Climate Etc.; and 3) physical measurement, by Roy Spencer on his blog. Motl’s critique is the most direct and goes to the absurdity of comparing climate science with the precision of particle physics.

In the comments section of his post, Motl responds to a very germane question from a reader identified as Andreas; his answer explains his views. Andreas asked: “Luboš, what kind of proof would you accept for man-made climate change?”

“Dear Andreas, I am a theorist and for theoretical reasons, I have no doubt that the mankind, and CO2 emissions in particular, affect the climate. Even the experimental proof was made in lab in the mid-19th century [by John Tyndall]. The question is how strong an effect it is in the real world – and the key point is that the contribution is negligible for all practical purposes according to all the available data.

“I would accept that there could be a problem if the warming rate sped up from 0.15 deg C a decade to more than 0.5 C a decade or something like that. This is not too much to ask. 0.5 C is still a tiny change and poses no threat. But if such a change doesn’t occur even within a cherry-picked decade, there just cannot be a problem and all the people who have caused the wasting of hundreds of billions of dollars must be held responsible for their acts.”

Motl made his original post after receiving a tweet from Gavin Schmidt, head of NASA-GISS, who claimed a five-sigma certainty in the findings of the new paper. This degree of precision requires extremely tight laboratory controls and was attained in the European Organization for Nuclear Research (CERN) experiment to find the Higgs boson. Motl’s rebuttal is directed toward Schmidt, but it applies to the paper as well. [Boldface added.]

“He picks about 3 scientific teams and praises them for reaching the “gold standard” of science (which is how the journalists hype it) – a five-sigma proof of man-made global warming. The signal-to-noise ratio has reached some critical threshold, it’s those five-sigma, so the man-made climate change is proven at the same level at which we needed e.g. the Higgs boson to be discovered by CERN’s particle physicists.

“It sounds great except it’s complete nonsense. When we discover something at five-sigma, it means something that clearly cannot be the case in climatology. When we discover new physics at five-sigma, it means that we experimentally rule out a well-defined null hypothesis at the p-level of 99.9999% or so. Note that a “well-defined null hypothesis” is always needed to talk about “five sigma”.

“In the case of the man-made climate change discussion, there is clearly no such “well-defined null hypothesis”. In particular, when Schmidt and others discuss the “signal-to-noise ratio”, they don’t really know what part of the observed data is “noise” and how strong it should be. The assumption must be that the “noise” is some natural variability of the climate. But we don’t really have any precise enough and canonical enough model of the natural variability. The natural variability is undoubtedly very complex and has contributions from lots of natural and statistical phenomena and their mixtures. Cloud variations, irregular seasons, solar variability, volcanoes, even earthquakes, annual ocean cycles, decadal ocean cycles, centennial ocean cycles, 1500-year ocean cycles, irregularities in tropical cyclones, plants’ albedo variations, residuals from a way to compute the average, butterfly wings in China, and tons of other things.

“So, we can’t really separate the measured data to the “signal” and “noise”. Even if we knew the relevant definition of the natural noise, we just don’t know how large it was before the industrialization began. The arguments about the “hockey stick graph” are the greatest tangible proof of this statement. Some papers show the variability in 1000-1900 AD as 5 times larger than others – so “5 sigma” could very well be “1 sigma” or something else.

“Just like before Schmidt’s tweet, it is perfectly possible that all the data we observe may be labeled “noise” and attributed to some natural causes. There may obviously be natural causes whose effect [on] the global mean temperature and other quantities is virtually indistinguishable from the effect expected from the man-made global warming.

“If the people observed some amazing high-frequency correlation between the changes of CO2 and the temperature, a great agreement between these two functions of time could become strong evidence of the anthropogenic greenhouse effect. But it’s clearly impossible because we surely can’t measure the effect of the tiny seasonal variations of the CO2 concentration – these variations are just a few ppm while the observed changes, seasons, are hugely pronounced and affected mostly by other things than CO2 (especially by the Sun directly).

“So, the growth of the CO2 was almost monotonic – and in recent decades, almost precisely linear. Nature may also add lots of contributions that change almost monotonically or linearly for a few decades. So, the summary is that Gavin Schmidt and his fellow fearmongers are trying to make the man-made climate science look like a hard science – perhaps even as particle physics – but it is not really possible for the climate science to be analogous to a hard science. The reason is that particle physics and hard sciences have nicely understood, unique, and unbelievably precise null hypotheses that may be supported by the data or refuted; while the climate science doesn’t have any very precise null hypotheses.

“At most, the attribution of the climate change is as messy a problem as the attribution of the discrepancies between Hubble’s constant obtained from various sources. It’s just not possible to make any reliable enough attribution because the number of parameters that we may adjust in our explanations is larger than the number of unequivalent values that are helpful for the attribution and that we may obtain from observations. In effect, the task to “attribute” is an underdetermined set of equations: the number of unknowns is larger than the number of known conditions or constraints that they obey (i.e. than the number of observed relevant data).

“Gavin Schmidt and everyone else who tries to paint hysterical climatology as a hard science analogous to particle physics is simply lying. Particle physics is a hard science and “five sigma proofs” are possible in it, climatology is a soft science and “five sigma proofs” in it are just marketing scams, and cosmology is somewhere in between. We all hope that cosmology will return closer to particle physics, but we can’t be sure.”
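Motl’s point that attribution is an “underdetermined set of equations” can be illustrated with a small numerical sketch (the numbers below are arbitrary and purely illustrative, not climate data): when the candidate causes outnumber the independent observational constraints, distinct attributions reproduce the observations equally well.

```python
import numpy as np

# Illustrative only: 3 observational constraints, 5 candidate "causes".
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 5))   # how each candidate cause maps onto the observations
y = rng.normal(size=3)        # the observed data

# One attribution that fits the data exactly (minimum-norm least squares).
x1, *_ = np.linalg.lstsq(A, y, rcond=None)

# Add any vector from the null space of A to get a different attribution
# that fits the same data equally well.
null_projector = np.eye(5) - np.linalg.pinv(A) @ A
x2 = x1 + null_projector @ np.ones(5)

print(np.allclose(A @ x1, y), np.allclose(A @ x2, y))  # True True: both fit the data
print(np.allclose(x1, x2))                             # False: the attributions differ
```

With more unknowns than constraints, no amount of statistical polish singles out one attribution; that is the sense in which Motl calls the problem underdetermined.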

If we cannot separate the CO2 warming signal from the natural variability (noise), we cannot establish the extent to which CO2 is a major cause. Yet, the UN Intergovernmental Panel on Climate Change (IPCC) and its dutiful followers, such as the US Global Change Research Program (USGCRP), continue to ignore natural variability – the noise. Thus, they cannot separate the signal from the noise.

Ross McKitrick makes similar comments regarding the statistics. He recognizes that statistical modeling cannot attain the precision needed by CERN and its Large Hadron Collider to find the Higgs boson. Asserting that climate science has reached that degree of precision is fantasy.

Roy Spencer comments on the fantasy of claiming:

“that the 40-year record of global tropospheric temperatures agrees with climate model simulations of anthropogenic global warming so well that there is less than a 1 in 3.5 million chance (5 sigma, one-tailed test) that the agreement between models and satellites is just by chance.”

This leads to the question of why the paper was published. Spencer may have the answer:

“In the end, I believe the study is an attempt to exaggerate the level of agreement between satellite (even UAH) and model warming trends, providing supposed “proof” that the warming is due to increasing CO2, even though natural sources of temperature change (temporary El Nino warming, volcanic cooling early in the record, and who knows what else) can be misinterpreted by their method as human-caused warming.”

The paper is unrealistic, but one item in it appears positive: at the end of the long acknowledgements section there is a much-needed disclaimer:

“The views, opinions and findings contained in this report are those of the authors and should not be construed as a position, policy, or decision of the US Government, the US Department of Energy, or the National Oceanic and Atmospheric Administration.”

See links under Challenging the Orthodoxy, https://home.cern/ and https://home.cern/science/accelerators/large-hadron-collider

****************

William Happer – Climate Realist: The appointment of William Happer to a committee being formed to evaluate the threat to national security from carbon dioxide-caused climate change continues to garner praise and criticism. One of the “big” criticisms is that Happer is not a climate scientist. Happer is an AMO – Atomic, Molecular, and Optical – physicist with decades of research in the field. The greenhouse effect is the interaction between infrared radiation (an optical phenomenon) and CO2 molecules; the only thing that relates CO2 to climate is precisely that: AMO physics. Happer is a world expert in the field, and to claim that he is not a climate scientist is to deny this relationship – one the IPCC and most “climate scientists” do not understand.

If the publication discussed above is an example of climate science, then climate science is not suitable for evaluating national threats, because it is a waste of resources to focus on threats identified through a false pretense of knowledge. SEPP board member Willie Soon sent TWTW an article by Happer, written in 2011, in which he lays out his views on greenhouse gases and carbon dioxide. A few paragraphs set the tone:

“The message is clear that several factors must influence the earth’s temperature, and that while CO2 is one of these factors, it is seldom the dominant one. The other factors are not well understood. Plausible candidates are spontaneous variations of the complicated fluid flow patterns in the oceans and atmosphere of the earth—perhaps influenced by continental drift, volcanoes, variations of the earth’s orbital parameters (ellipticity, spin-axis orientation, etc.), asteroid and comet impacts, variations in the sun’s output (not only the visible radiation but the amount of ultraviolet light, and the solar wind with its magnetic field), variations in cosmic rays leading to variations in cloud cover, and other causes.

“Let me summarize how the key issues appear to me, a working scientist with a better background than most in the physics of climate. CO2 really is a greenhouse gas and other things being equal, adding the gas to the atmosphere by burning coal, oil, and natural gas will modestly increase the surface temperature of the earth. Other things being equal, doubling the CO2 concentration, from our current 390 ppm to 780 ppm will directly cause about 1 degree Celsius in warming. At the current rate of CO2 increase in the atmosphere—about 2 ppm per year—it would take about 195 years to achieve this doubling. The combination of a slightly warmer earth and more CO2 will greatly increase the production of food, wood, fiber, and other products by green plants, so the increase will be good for the planet, and will easily outweigh any negative effects. Supposed calamities like the accelerated rise of sea level, ocean acidification, more extreme climate, tropical diseases near the poles, and so on are greatly exaggerated.

“’Mitigation’ and control efforts that have been proposed will enrich a favored few with good political ties—at the expense of the great majority of mankind, including especially the poor and the citizens of developing nations. These efforts will make almost no change in earth’s temperature. Spain’s recent experiment with green energy destroyed several pre-existing jobs for every green job it created, and it nearly brought the country to bankruptcy.

“The frightening warnings that alarmists offer about the effects of doubling CO2 are based on computer models that assume that the direct warming effect of CO2 is multiplied by a large “feedback factor” from CO2-induced changes in water vapor and clouds, which supposedly contribute much more to the greenhouse warming of the earth than CO2. But there is observational evidence that the feedback factor is small and may even be negative. The models are not in good agreement with observations—even if they appear to fit the temperature rise over the last 150 years very well.

“Indeed, the computer programs that produce climate change models have been “tuned” to get the desired answer. The values of various parameters like clouds and the concentrations of anthropogenic aerosols are adjusted to get the best fit to observations. And—perhaps partly because of that—they have been unsuccessful in predicting future climate, even over periods as short as fifteen years. In fact, the real values of most parameters, and the physics of how they affect the earth’s climate, are in most cases only roughly known, too roughly to supply accurate enough data for computer predictions. In my judgment, and in that of many other scientists familiar with the issues, the main problem with models has been their treatment of clouds, changes of which probably have a much bigger effect on the temperature of the earth than changing levels of CO2.”

[Boldface added.] To this, TWTW would add that the “desired answer” may still be the wrong answer, because the IPCC, etc., use surface temperatures, while the greenhouse effect occurs in the atmosphere, which is the appropriate place to measure it.

“What, besides the bias toward a particular result, is wrong with the science? Scientific progress proceeds by the interplay of theory and observation. Theory explains observations and makes predictions about what will be observed in the future. Observations anchor our understanding and weed out the theories that don’t work. This has been the scientific method for more than three hundred years. Recently, the advent of the computer has made possible another branch of inquiry: computer simulation models. Properly used, computer models can enhance and speed up scientific progress. But they are not meant to replace theory and observation and to serve as an authority of their own. We know they fail in economics. All of the proposed controls that would have such a significant impact on the world’s economic future are based on computer models that are so complex and chaotic that many runs are needed before we can get an “average” answer. Yet the models have failed the simple scientific test of prediction. We don’t even have a theory for how accurate the models should be.”

To this, TWTW would add that we cannot know how accurate the models should be until we understand natural variation. See links under Challenging the Orthodoxy and Change in US Administrations.
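Happer’s reference to a “feedback factor” is the standard textbook amplification relation. A minimal sketch, assuming the roughly 1 °C no-feedback response per CO2 doubling that Happer cites (the feedback values below are illustrative, not taken from any particular model):

```python
# Textbook feedback amplification: dT = dT0 / (1 - f), where dT0 is the
# no-feedback warming per CO2 doubling and f is the net feedback factor.
dT0 = 1.0  # deg C per doubling, the no-feedback estimate Happer cites

for f in (-0.5, 0.0, 0.5, 0.67):  # illustrative feedback factors
    print(f"f = {f:+.2f}  ->  warming per doubling = {dT0 / (1 - f):.2f} deg C")

# A negative f (net negative feedback) yields less than 1 deg C per doubling;
# the large model projections require a strongly positive f, which is the
# assumption Happer disputes.
```

On this arithmetic, the entire dispute reduces to the sign and size of f.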

****************

It’s Not Nature, It’s Puccini: After a dose of climate realism, one may consider the second paper, published in Nature Geoscience, claiming that increasing CO2 will lead to cloudless days and extreme warming: “Possible climate transitions from breakup of stratocumulus decks under greenhouse warming.” The paper led to a number of alarmist articles predicting a disastrous tipping point into a world without clouds within a few years.

As Roy Spencer discusses, such alarm rests on thoughtless generalization. The stratocumulus clouds are generated by the upwelling of cold water from the deep oceans, where undersea currents reach a land barrier, such as the coast of Peru on the west coast of South America. These currents were set in motion over a thousand years ago, and it is doubtful they will change in response to slight atmospheric warming from CO2. See links under Challenging the Orthodoxy and Defending the Orthodoxy.

****************

The Greenhouse Effect: The following is the second installment in a series on the greenhouse effect as it is being measured in the atmosphere. As discussed last week, the A-Train, and the similar, lower-orbiting C-Train, of multiple satellites from the US, France, and Japan collect a wide variety of data, including visible, infrared, and microwave energy; phases of water; vegetation; atmospheric pollutants; greenhouse gases; aerosols; clouds; water levels on land areas; snow depths; and more.

These data are very valuable for understanding the effects of greenhouse gases as they accumulate in the atmosphere, particularly CO2. Of primary concern is how greenhouse gases interfere with the flow of infrared energy from the surface to outer space, as measured from the top of the atmosphere. A marked decline of infrared energy escaping from the surface to space would be of concern, because it would indicate that an increase in greenhouse gases may be causing a warming of the globe.
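A rough, self-contained sketch of the energy-balance arithmetic behind this concern (the numbers are textbook approximations, not values drawn from the satellite record):

```python
# Outgoing longwave radiation (OLR) from an effective emission temperature,
# via the Stefan-Boltzmann law. If added greenhouse gases raise the altitude
# from which infrared escapes, the effective emission temperature falls and
# less energy reaches space until the system warms to restore balance.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0     # approximate effective emission temperature of Earth, K

olr = SIGMA * T_EFF**4              # about 240 W m^-2
d_olr_per_K = 4 * SIGMA * T_EFF**3  # about 3.8 W m^-2 per K of emission temperature

print(f"OLR ~ {olr:.0f} W/m^2; a 1 K drop in emission temperature "
      f"reduces OLR by ~{d_olr_per_K:.1f} W/m^2")
```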

This discussion will focus on the two primary entities that collect these spectroscopic data and put them in a form researchers can download onto personal computers. As described on their websites:

“The MODTRAN® (MODerate resolution atmospheric TRANsmission) computer code is used worldwide by research scientists in government agencies, commercial organizations, and educational institutions for the prediction and analysis of optical measurements through the atmosphere. MODTRAN was developed and continues to be maintained through a longstanding collaboration between Spectral Sciences, Inc. (SSI) and the Air Force Research Laboratory (AFRL). The code is embedded in many operational and research sensor and data processing systems, particularly those involving the removal of atmospheric effects, commonly referred to as atmospheric correction, in remotely sensed multi- and hyperspectral imaging (MSI and HSI).”

The other database is HITRAN:

“HITRAN is an acronym for high-resolution transmission molecular absorption database. HITRAN is a compilation of spectroscopic parameters that a variety of computer codes use to predict and simulate the transmission and emission of light in the atmosphere.

“The goal of HITRAN is to have a self-consistent set of parameters. However, at the same time the requirement is to archive the most accurate parameters possible. It must be emphasized that the parameters that exist in HITRAN are a mixture of calculated and experimental. Often the experimentally determined values are more accurate than the calculated ones, and vice versa. The calculated values have certain advantages, for example providing a more complete set. But the experimental ones still are usually more accurate. HITRAN provides the sources for the key parameters within each transition record whereby the user can determine from where the value came.

“The experimental data that enter HITRAN often come from the results of analysis of Fourier transform spectrometer laboratory experiments. Many other experimental data also are inputted, including lab results from tunable-diode lasers, cavity-ring down spectroscopy, heterodyne lasers, etc. The results usually go through elaborate fitting procedures. The theoretical inputs include standard solutions of Hamiltonians, ab initio calculations, and semi-empirical fits.”

These databases do not describe the atmosphere. But they can be used to determine whether estimates of warming of the atmosphere from climate models are reasonable, given a particular level of water vapor, CO2, methane, nitrous oxide and ozone at a specific latitude.
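As a self-contained illustration of the kind of calculation these databases support (it does not read the actual HITRAN files; the line parameters and column amount below are invented for illustration), HITRAN-style line parameters – line center, intensity, and pressure-broadened half-width – feed a line-shape calculation and the Beer-Lambert law to give the transmission of infrared through a gas path:

```python
import numpy as np

# Hypothetical line parameters of the kind HITRAN archives (values invented):
NU0 = 667.0      # line center, cm^-1 (in the CO2 bending-mode band region)
S = 1.0e-19      # line intensity, cm^-1 / (molecule cm^-2)
GAMMA = 0.07     # pressure-broadened half-width, cm^-1
N_COL = 1.0e18   # column amount along the path, molecules cm^-2 (illustrative)

nu = np.linspace(NU0 - 2.0, NU0 + 2.0, 2001)                # wavenumber grid, cm^-1
lorentz = (GAMMA / np.pi) / ((nu - NU0) ** 2 + GAMMA ** 2)  # Lorentz profile, cm
optical_depth = S * lorentz * N_COL                         # dimensionless
transmission = np.exp(-optical_depth)                       # Beer-Lambert law

print(f"Transmission at line center: {transmission.min():.2f}")
print(f"Transmission 1 cm^-1 away:   {transmission[np.argmin(np.abs(nu - (NU0 + 1.0)))]:.2f}")
```

Codes such as MODTRAN build on spectroscopic data of this kind, using band-model parameters derived from HITRAN, applied over realistic atmospheric profiles, which is why they can serve as a check on model estimates.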

The next several TWTWs will discuss what is occurring in the atmosphere, a dynamic fluid in chaotic motion, as best described using modern instruments. See links under Questioning the Orthodoxy and Measurement Issues – Atmosphere.

*****************

56 Million Years Ago: Another effort is being made to draw a parallel between what happened 56 million years ago and what may happen now with increasing CO2. However, we do not have a good explanation of what caused the sudden cooling and then abrupt warming of the Younger Dryas, which occurred about 13,000 to 12,000 years ago and produced a shift in temperatures of about 10 degrees C (18 F) in the Northern Hemisphere, if not the Southern. Yet the land masses were approximately the same as today, though sea levels were far lower, permitting land bridges. The ocean circulations were probably somewhat similar to today’s.

During the Paleocene-Eocene Thermal Maximum (PETM), about 56 million years ago – the period discussed in the article – there was no Drake Passage separating South America from Antarctica, making an Antarctic Circumpolar or Antarctic Subpolar circulation unlikely. Between North and South America, the Caribbean Seaway existed, permitting water to flow between the Atlantic and Pacific in the tropics. The ocean circulation must have been dramatically different from today’s. As discussed by Happer, the oceans play a far more important role in determining temperatures and climate than CO2. Trying to equate estimated temperatures of the Paleocene-Eocene period with today’s, based on atmospheric CO2 content, is absurd. As Happer states, there is little correlation between CO2 and temperatures during the current Holocene Period. See links under Defending the Orthodoxy.

*****************

Number of the Week: 99.99997% Certainty: Roy Spencer estimates that the five-sigma certainty expressed by Santer et al. works out to 99.99997% certainty. And we are expected to take this type of climate science seriously? See links under Challenging the Orthodoxy.
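For reference, the arithmetic behind the headline number is a one-tailed normal-distribution calculation and involves no climate data at all:

```python
from math import erfc, sqrt

# One-tailed probability of a fluctuation beyond 5 standard deviations
# of a normal distribution (the "five sigma" threshold).
p = 0.5 * erfc(5 / sqrt(2))   # about 2.87e-7, i.e. roughly 1 in 3.5 million

print(f"p = {p:.2e} (about 1 in {1 / p:,.0f})")
print(f"implied certainty = {(1 - p) * 100:.5f}%")  # ~99.99997%
```

Whether that level of certainty can meaningfully be claimed for a system with no well-defined null hypothesis is, of course, the point Motl, McKitrick, and Spencer dispute.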

*****************

ARTICLES:

1. Bad Science May Banish Paper Receipts

California lawmakers seek a ban, based on a scare over BPA that was debunked two decades ago.

By Steve Milloy, WSJ, Feb 25, 2019

https://www.wsj.com/articles/bad-science-may-banish-paper-receipts-11551137837

SUMMARY: The publisher of JunkScience.com writes:

“Having vanquished plastic straws, the California Legislature is now considering a bill to ban paper cash-register receipts. One reason offered for the ban is to reduce carbon-dioxide emissions. The other is to reduce public exposure to bisphenol A, or BPA, a chemical used to coat receipts.”

Citing the number of coal-fired power plants being planned, the author dismisses any possible benefit from reducing CO2. He continues:

“The more interesting reason for the ban is the BPA argument, which is part of a broader trend of misuse of science in public policy. The alarm behind the California bill arises from the notion that BPA is an ‘endocrine disrupter’: a chemical that, even at low doses, can disrupt human hormonal systems. Such disruptions theoretically could cause a variety of ailments, from cancer to reproductive problems to attention-deficit disorder.

“Like the panic over DDT that followed the 1962 publication of Rachel Carson’s ‘Silent Spring,’ the endocrine-disrupter scare made its public debut with a book, ‘Our Stolen Future’ (1996). Written by three activist authors and including a foreword by Al Gore, the book lays out a case for regulating various pollutants.

“‘Our Stolen Future’ was followed the same year by a highly publicized Tulane University study that reported certain combinations of pesticides and other chemicals in the environment were much more potent endocrine disrupters than the individual chemicals themselves. Within weeks, this study prompted Congress to pass a bill directing the Environmental Protection Agency to develop a program to test chemicals for their potential harm to hormonal systems.

“In the months that followed, the Tulane study began to fall apart. Independent laboratories around the world reported that they could not replicate its results. By July 1997, the original study was retracted. Federal investigators concluded in 2001 that the Tulane researchers had committed scientific misconduct by falsifying their results.

“Yet the law and regulatory programs spawned by the false study remained in place. The endocrine-disrupter scare gained steam through the 2000s, and BPA became its biggest villain. Generous federal funding led to the publication of hundreds of BPA studies. A movement to ban BPA was joined by several cities, states such as California, and foreign nations including Canada, resulting in the elimination of the substance from plastic bottles in those regions. Regulators at the Food and Drug Administration and the European Food Safety Authority pushed back against the scare, to little avail.

“Finally in 2012, the FDA decided to launch Clarity, a large $8 million study of BPA to be conducted according to regulatory guidelines known as the Good Laboratory Practices standard. Researchers, including those who had published studies claiming that low-dose exposures to BPA posed health risks, were provided with coded, pre-dosed animals to avoid bias and cheating. Researchers were required to upload their raw data to a government database before the identity of each dose group was disclosed to them.

“The results of Clarity were published in 2018. The FDA concluded that the study failed to demonstrate adverse health effects from exposure to BPA in low doses—like the amount one might be exposed to by handling a paper receipt.

“Yet despite its birth in scientific misconduct, its dismissals along the way by international regulators and science and public-health groups like the National Academy of Sciences and the World Health Organization, and finally its debunking by the FDA’s Clarity study, the BPA scare survives. Thanks to Congress, it lives on at the EPA, where a 22-year-old endocrine-disrupter screening program peddles merrily along despite producing no results of interest.

“It is a sad state of affairs when actual science cannot vanquish adjudicated science fraud in public policy.”

********************

2. Too Much Academic Science Is Bad Science

Steve Milloy recounts the bad science and sequelae of the Tulane University report in Science on hormone disrupters. Multiple labs couldn’t replicate the finding.

By S. Stanley Young, Letters, WSJ, Mar 2, 2019

https://www.wsj.com/articles/too-much-academic-science-is-bad-science-11551469383

A health researcher and statistician writes of his efforts to find replication of the key study in the above article:

“Steve Milloy recounts the bad science and sequela of the Tulane University report in Science magazine of hormone disrupting chemicals in many paper receipts. Multiple labs couldn’t replicate the finding (“Bad Science May Banish Paper Receipts,” op-ed, Feb. 26). At the time, I wrote to the Tulane researchers asking for the data. No go. I asked those who funded the research. Again, no go. The experiment consisted of looking at pairs of compounds. With many pairs and experimental variability, the most extreme result looked real and was reported.

“Each of the universities tested only the reported positive pair, and their efforts failed. Science asked the Tulane researchers to respond. The principal investigator (PI) asked his assistant to replicate the work; the replication failed. The PI put the blame on the (innocent) assistant. The PI had rushed to publication without internally replicating the work. Had the entire experiment been replicated, there would have been an extreme pair, but it would have been a different pair. The analysis of the data wasn’t adjusted for asking multiple questions. The PI wouldn’t make the data public. All these flaws—no internal replication, not adjusting analysis for multiple questions, failing to make data available, are still rife throughout science, so it should be no surprise that well over half of the claims made in science papers fail to replicate.”
