The Week That Was: August 24, 2019, Brought to You by www.SEPP.org
By Ken Haapala, President, Science and Environmental Policy Project
August Fad II – Amazon Burning: Some organizations considered scientific are feeding the popular press highly dubious claims. Last week, TWTW discussed NOAA’s claim that July 2019 “was the hottest month measured on Earth since records began in 1880.” That record is highly questionable. Other commentators made apt remarks, and Tony Heller demonstrated how the process of homogenization distorts the surface record. Further, the comprehensive atmospheric record shows that NOAA’s claims are unfounded. NOAA is losing scientific credibility by making such claims.
This week, the press fad is the burning of the Amazon. For example, the newspaper The Hill ran articles stating:
“As wildfires continue to engulf the Amazon rainforest at a record pace…”
“Ahead of the Group of Seven summit in France, French President Emmanuel Macron called the situation in Brazil an ‘international crisis’ and vowed to make it a priority point of discussion.”
NASA’s Earth Observatory has an animation of fires on Earth from March 2000 to June 2019. It is a cartoon depicting where fires are occurring. As NASA states:
“The fire maps show the locations of actively burning fires around the world on a monthly basis, based on observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite.”
Jesse Ferrell of AccuWeather stated the situation clearly:
“The Earth is burning, but it always has been.
“Thousands of fires are continually burning across the Earth every day of every year, and they always have. They have an animation of the last 20 years of this data. You won’t see much of a change, except that fires are worse in some areas and better in others from year to year.”
“You can’t actually ‘see’ the fires or the damage they do, looking at the globe from space.
“So why does it look like the whole Earth is on fire in the maps above? That’s a great question, and it leads to exaggeration of the true size of what’s burning. As I understand it, this data is composed of ‘warm pixels.’ This means the satellite has detected what it thinks could be a wildfire based on its temperature, and I assume, other algorithms. NASA warns on their website:
“’Don’t be fooled by sizes of some of the bright splotches on these maps. The colors represent a count of the number of fires observed within a 1,000-square-kilometer area.’
“This means you’re not looking at actual fires, you’re looking at pixels which represent the number of satellite-detected fires within 1,000 square-kilometers of a location, and that data can be represented with smaller pixels, but if you zoom out to show the whole Earth, as above, you wouldn’t be able to see them. It’s analogous to looking at a map of NWS Storm Spotters, where each car is the size of Rhode Island. When you zoom way out, it will look like cities are clogged with cars, but the reality is that a car is a lot smaller than Rhode Island, and when you get down to that level, they are very far apart.”
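The exaggeration Ferrell describes is easy to quantify. The sketch below uses the 1,000-square-kilometer cell size from NASA’s caption; the per-fire area and detection count are invented purely for illustration:

```python
# How per-cell fire counts exaggerate apparent burned area when drawn as
# map pixels. CELL_AREA_KM2 comes from NASA's caption; the typical fire
# size and the detection count below are illustrative assumptions only.

CELL_AREA_KM2 = 1_000.0   # area represented by one colored map pixel
TYPICAL_FIRE_KM2 = 1.0    # assumed size of one detected fire (~1 km satellite pixel)

def burning_fraction(fire_count: int, fire_area_km2: float = TYPICAL_FIRE_KM2) -> float:
    """Fraction of a grid cell actually burning, vs. the 100% the colored pixel implies."""
    return min(fire_count * fire_area_km2 / CELL_AREA_KM2, 1.0)

# A cell colored bright red for 5 detections is ~0.5% burning, not 100%.
print(f"{burning_fraction(5):.3%}")  # -> 0.500%
```

Under these assumptions, even a cell that is a fraction of one percent burning receives the same fully colored pixel as one entirely ablaze, which is why zoomed-out fire maps look apocalyptic.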
Apparently, many journalists have difficulty understanding the difference between animation and reality. See links under Fear of Fire and Communicating Better to the Public – Make things up.
The Greenhouse Effect – NOAA Models: The influential 1979 Charney Report published by the National Academy of Sciences stated that a doubling of carbon dioxide (CO2) would result in a net increase in temperatures of 3 ºC plus or minus 1.5 ºC (about 5.4 ºF plus or minus 2.7 ºF). This figure has been retained for 40 years through five major assessment reports and many minor reports by the UN Intergovernmental Panel on Climate Change (IPCC), and through four National Assessments by what is now called the US Global Change Research Program (USGCRP). Yet these government groups have not rigorously tested these findings against independent data.
The Charney Report did not test its conclusions against independent data; instead, it relied on models. It states:
The independent studies of the CO2 /climate problem that we have examined range from calculations with simple radiative-convective models to zonally and vertically averaged heat-balance models with horizontally diffusive heat exchange and snow-ice albedo feedbacks to full-fledged three-dimensional general circulation models (GCM’s) involving most of the relevant physical processes. Our confidence in our conclusion that a doubling of CO2 will eventually result in significant temperature increases and other climate changes is based on the fact that the results of the radiative-convective and heat-balance model studies can be understood in purely physical terms and are verified by the more complex GCM’s. The last give more information on geographical variations in heating, precipitation, and snow and ice cover, but they agree reasonably well with the simpler models on the magnitudes of the overall heating effects. [Boldface Added.]
It is understandable why the models were not tested: the data were not available. Although satellites began collecting comprehensive atmospheric data that could be used to calculate temperature trends in December 1978, the procedures for making such calculations were not published until 1990, and slight corrections were later required for orbital decay and drift.
The Charney Report relied primarily on two major models to reach its conclusions: 1) the model developed by the NOAA Geophysical Fluid Dynamics Laboratory (GFDL) at Princeton; and 2) the NASA-GISS model. The NOAA-GFDL model is briefly discussed today. Next week, TWTW will discuss another model supported by the National Academy of Sciences.
Based on a quick review of the NOAA-GFDL model and examination of some details, it appears that the primary effort over the past 40 years has been to increase the resolution of the models, not to test them against the expanding databases of real-world observations. According to NOAA-GFDL, grid resolution has been largely determined by computer power. Increased computing power allows simulation of smaller-scale physical processes, and the production of finer details and animations from the models, without the need for observations from satellites, etc.
The models are divided into four components: atmosphere, land surface, ocean, and sea ice. In the 1980s version of the ocean component, the east-west grid at the equator had 96 divisions and the north-south grid had 40, yielding a total of 3,840 grid boxes. In the 2009 version, the east-west grid at the equator had 1,440 divisions and the north-south grid had 1,070, yielding a total of 1,540,800 grid boxes.
The new atmospheric model, CM2.5, has grid cells approximately 50 km on a side, with 32 vertical levels. Its surface grid is approximately the same size as that of the ocean component. Thus, for the ocean and atmospheric components combined there are 49,305,600 grid boxes. The NOAA-GFDL website discusses the difficulty of making all the needed calculations:
“The model uses increasing levels of parallelism to run efficiently on modern supercomputers. For example, the model was able to simulate 12 model years per day when using approximately 6000 processors on the NOAA Research Supercomputer (GAEA) located at the Oak Ridge National Laboratory.”
But the computing power used is not the issue. The more important issue is the source of the data used to test the models. The only possible comprehensive source is satellite measurements, yet as John Christy and his colleagues have shown, the US global climate models greatly overestimate the warming of the atmosphere.
One of the statements under Climate Modeling addresses the issue of their use:
“Why Do We Believe Them? [The models]
“Although there is some level of disagreement among climate models, these models are based on well-founded physical principles either directly for simulated processes or indirectly for parameterized processes. The results of one experiment are extensively checked by a large community of modelers and researchers around the world (for example, as part of the IPCC), which reduces uncertainty. Generally, models produce simulations of current and past large-scale climates that agree with observations. Climate models have also produced an accurate hindcast of 20th century climate change, including increased warming partly due to CO2 emissions. This gives us confidence in using these models to project future climate change.” [Boldface added.]
This leads to the key issues: testing the models against independent data, and validating them. Apparently, as with the Charney Report, the models are tested only against other models, and validation has not been accomplished. The greenhouse effect occurs in the atmosphere. Until the models have been shown to predict atmospheric temperature trends, the science is far from settled.
Also, it is interesting to note that in the depiction of the “Basic structure of GFDL’s Earth System Model” the dominant greenhouse gas, water vapor, is not mentioned. See links under Defending the Orthodoxy and Model Issues.
Reevaluating the Questionable Consensus: Michael Oppenheimer and Naomi Oreskes are lead authors of a new book titled “Discerning Experts: The Practices of Scientific Assessment for Environmental Policy.” The book blurb states:
“Discerning Experts assesses the assessments that many governments rely on to help guide environmental policy and action. Through their close look at environmental assessments involving acid rain, ozone depletion, and sea level rise, the authors explore how experts deliberate and decide on the scientific facts about problems like climate change. They also seek to understand how the scientists involved make the judgments they do, how the organization and management of assessment activities affects those judgments, and how expertise is identified and constructed.
“Discerning Experts uncovers factors that can generate systematic bias and error and recommends how the process can be improved. As the first study of the internal workings of large environmental assessments, this book reveals their strengths and weaknesses, and explains what assessments can—and cannot—be expected to contribute to public policy and the common good.” [Boldface added]
Michael Oppenheimer is closely associated with the NOAA-GFDL model discussed above. But it is doubtful one can discover in the book why the US global climate models greatly overestimate the temperature trends of the atmosphere. Oreskes has made severe claims against four distinguished scientists, including Fred Singer and Frederick Seitz of SEPP, without providing evidence substantiating the claims. Probably, the book is short on hard evidence. Given these shortcomings, it is doubtful that Oppenheimer and Oreskes are experts on the greenhouse effect.
Judith Curry politely summed up the issue:
“In seeking to defend ‘it’s worse than we thought’ about climate change, Oppenheimer, Oreskes et al. have opened up a welcome can of worms. Consensus seeking and consensus enforcement have trivialized and politicized climate science for decades.
“It has been clear for some time that the conclusions of the IPCC Assessment Reports are too tame for the activist/alarmists. In fact, quoting the IPCC is a favored strategy of the so-called ‘contrarians’ (including myself). It remains to be seen if Oreskes can drop the 97% consensus rhetoric (I doubt it).” To Curry’s comments one can add: If “the science is settled” was true years ago, why are things always “worse than we thought”? See links under Challenging the Orthodoxy.
Oh’ Mann: Michael Mann lost his litigation against Tim Ball, who recognized the importance of the detailed weather records of the Hudson’s Bay Company, covering much of north-central North America. Mr. Mann will be required to pay court costs, but it is unclear what that entails. Mr. Mann appears to be well financed for the personal lawsuits he has undertaken. Mr. Ball is too much of a gentleman to speak harshly; Mark Steyn, another of Mr. Mann’s targets, is not so inclined. It will be interesting to see what develops. See links under Litigation Issues.
New Web Site: Craig Idso has launched a website for the Institute for the Human Environment. Idso is an authority on the benefits of carbon dioxide to the environment. His weekly reviews of papers are a valuable resource for dispelling the popular mischaracterization of increasing CO2. The first post for the Institute presents a clear discussion of why increasing CO2 is not the danger to human health that the EPA and others claim. See links under Challenging the Orthodoxy.
One New Rule: Australian Jo Nova describes a proposed new rule for electrical grid operators in establishing the priority of bids. As many examples show, wind and solar, due to their low variable cost (cost of operation), can be low-cost providers of electricity, when they work. But they are unreliable, and because of those low variable costs they can price out reliable providers.
According to Nova, Terry McCrann, writing in The Australian, Business Review, proposes that if a generator wishes to sell power into the grid, it must guarantee a minimum level of supply 24/7. “And critically, that minimum level can be no lower than 80 per cent of the maximum amount of energy you will be permitted to sell into the grid.” This is an idea to ponder. See link under Energy Issues – Non-US.
SEPP’S APRIL FOOLS AWARD
The voting is closed, and the winner who most closely meets the qualifications is being selected. No missing chads here, one hopes.
Number of the Week: 116 Stations: Assisted by Kirye of Japan, Pierre Gosselin writes:
“It is a fact that land surface temperature records going back before 1900 globally are very few and sparse. Worldwide there are only 116 stations [in the] Version 3, unadjusted datasets that go all the way back to January 1880 – most of them are in USA and Europe (northern hemisphere).”
TWTW has not done such an analysis but is not surprised. Further, Kirye and Gosselin state that there are only 12 such stations in the Southern Hemisphere, including seven in Australia, one below the equator in Africa (Cape Town), and one below the equator in South America (Bahia Blanca, Argentina). These records are from a NOAA database.
Endangered Species Overreach
A new rule won’t put more fish and wildlife at risk.
Editorial, WSJ, Aug 16, 2019
SUMMARY by TWTW: The editorial begins:
“Perhaps you’ve been reading that the Trump Administration wants to make it easier to eliminate polar bears, spotted owls and other species from the face of the earth. As ever in Donald Trump’s Washington, the reality is different, so allow us to explain.
“The uproar concerns a proposed new rule to revise some practices under the 1973 Endangered Species Act. For all the praise liberals shower on that law, it has achieved far less than advertised. A 2018 report from the Heritage Foundation’s Robert Gordon found that since 1973 the ESA has helped to recover only 40 species, and nearly half of those were mistakenly listed in the first place.
“Meanwhile, the law has become a legal weapon to strip property rights and block millions of acres from private development. Congress ought to rewrite the ESA but can’t break a partisan impasse. So this week Interior Secretary David Bernhardt tried to clarify regulation under the law to prevent abuses.
“The new rule restores Congress’s original two-tiered approach, killing the Fish and Wildlife Service’s ‘blanket rule’ that treated ‘endangered’ and ‘threatened’ species alike. This will devote scarce government dollars—and landowner attention—to the species most at risk. It will also provide states more flexibility to assist species that are struggling though not seriously endangered.
“The new rules clarify vague terms such as ‘the foreseeable future’ to mean only as far as the government can ‘reasonably determine’ a danger of extinction. This will make it harder for activists to use claims of vague future climate damage to declare many more species endangered.
“And the rules remind regulators they must use the same five criteria in deciding whether to delist a species as they did when listing one—destruction of habitat or range; overutilization; disease; inadequate regulation; or other natural or manmade factors. This will guard against special interests that move the goalposts every time a recovered population is proposed to be cleared.
“Another reform would limit the use of ‘critical habitat’ designations that tie up tens of millions of acres of U.S. land. The rules reinstate a requirement that agencies first evaluate acreage that contain[s] the at-risk species before considering new, unoccupied areas. Agencies also must prove that unoccupied critical habitat contains ‘one or more of the physical or biological features essential to the species’ conservation.’
“The goal of all this is to return to a rules-driven, scientific approach to species management.”
The editorial discusses lawsuits, then continues:
“Many struggling species live on private land, and the cooperation of owners is crucial for recovery. Environmental laws and regulations should encourage stewardship, rather than penalize private partners. To the extent the rules improve private-public cooperation, the key deer and sage grouse will benefit. Which is supposed to be the point of the law.”