By Ken Haapala, President, The Science and Environmental Policy Project
Brought to You by www.SEPP.org
California Litigation, Monckton: Last week’s TWTW discussed the public nuisance lawsuits by San Francisco and Oakland against oil companies, which claim that carbon dioxide (CO2)-caused global warming / climate change will cause harm in the future. It focused on the filing of an amicus curiae (friend of the court) brief by three distinguished Professors of Physics – William Happer, Steven Koonin and Richard Lindzen (Three Profs). The brief accepted the data and evidence used by the UN Intergovernmental Panel on Climate Change (IPCC) in its Fifth Assessment Report (AR5) and the Climate Science Special Report (CSSR) by the US Global Change Research Program (USGCRP). However, the Three Profs demonstrate that the conclusions in the reports are not established and are, at best, premature. They assert:
“Our overview of climate science is framed through four statements:
“1. The climate is always changing; changes like those of the past half-century are common in the geologic record, driven by powerful natural phenomena
“2. Human influences on the climate are a small (1%) perturbation to natural energy flows
“3. It is not possible to tell how much of the modest recent warming can be ascribed to human influences
“4. There have been no detrimental changes observed in the most salient climate variables and today’s projections of future changes are highly uncertain.”
Another important assertion is that the procedures (methodology) used by the IPCC and the USGCRP do not yield a unique solution, but an infinite set of solutions. Despite ever more imaginative claims by alarmists declaring that CO2 is causing harm, hard evidence is lacking. Claims without evidence by government entities are a common characteristic of what TWTW terms bureaucratic science.
This week’s TWTW will focus on the amicus curiae brief by Christopher Monckton, et al. Then it will briefly discuss the objection raised by Roy Spencer. The importance of Spencer’s objection is not that it refutes the work of Monckton, et al., but that it illustrates the morass created by the IPCC, which the IPCC calls science.
Similar to the brief by the Three Profs, Monckton’s amicus curiae brief has the advantage of using the language and concepts of the IPCC and USGCRP, the climate establishment. The brief identifies a severe error in calculations. If Monckton is correct, the fear of carbon dioxide causing dire global warming is without any theoretical support. Before submission, his work was reviewed by two members of the SEPP Board of Directors, Chairman Tom Sheahen and Willie Soon, as well as others versed in the mathematics and concepts traditionally used by the IPCC.
Monckton’s brief goes to the 1979 Charney Report, published by the National Academy of Sciences, which speculated, without evidence, that the slight warming caused by CO2 will be amplified by a more powerful warming caused by increased water vapor. The warming from CO2 is established by decades of experimentation in many laboratories. The amplification, “positive feedback,” from water vapor has not been found. The concept of positive feedback was introduced in the design of electronics, particularly amplifiers, to carefully and dramatically increase a weak electronic signal. For decades, the concept has been discussed in social sciences, with little precision as to meaning or amplification.
The Charney Report speculated that, with a positive feedback from water vapor, a doubling of CO2 would eventually cause an increase in temperatures of 3 ºC plus or minus 1.5 ºC. Except for an increase in the lower bound in its 2007 report, the IPCC has continued this speculative estimate for over 25 years, since its first report in 1990. In its influential Second Assessment Report (SAR or AR2, 1996), the IPCC declared the amplified warming was a “hot spot” centered over the tropics at about 33,000 feet (10 km) and incorrectly termed it the “distinct human fingerprint.” The IPCC carried the “hot spot” through the Fourth Assessment Report (AR4, 2007), which was very influential in the UK’s adoption of its very expensive energy policy under the Climate Change Act, 2008. Almost forty years of atmospheric temperature data from satellites, covering the surface to 50,000 feet (15 km), fail to reveal the “hot spot.” The concept should be relegated to the list of many, very expensive IPCC myths.
Monckton takes the concept embodied in the Charney Report, uses a classic feedback model from electrical circuitry, then calculates the expected warming from a doubling of CO2 if the feedback concept were correctly applied. Monckton, et al. calculate a value of 1.2 ºC – less than one half of that used by the IPCC and the Climate Establishment. Monckton finds a serious error in the calculation of positive feedback as stated in the Charney Report. It should be noted that the Charney Report did not contain the calculations from the modelers on which its speculative findings were based. Also, at the time, the method for calculating global temperatures from atmospheric data had not been established. See Links under Challenging the Orthodoxy and Litigation Issues – California.
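The feedback arithmetic at issue can be sketched with the classic closed-loop relation from circuit design, in which an open-loop (no-feedback) response is amplified by a factor of 1 / (1 − f), where f is the feedback fraction. The sketch below is illustrative only – it uses the round numbers quoted in the text (a direct warming of roughly 1.2 ºC per CO2 doubling and the Charney Report’s 3 ºC central estimate) and is not Monckton, et al.’s actual derivation; the function and variable names are this reviewer’s own.

```python
def amplified_warming(reference_warming_c, feedback_fraction):
    """Closed-loop warming from an open-loop (no-feedback) warming,
    using the classic electronic-feedback relation gain = 1 / (1 - f)."""
    if feedback_fraction >= 1.0:
        raise ValueError("feedback fraction >= 1 implies a runaway response")
    return reference_warming_c / (1.0 - feedback_fraction)

# Direct, no-feedback warming from doubled CO2: roughly 1.2 C,
# the laboratory-based figure cited in the text.
direct = 1.2

# The Charney Report's 3 C central estimate implies a large feedback
# fraction, about f = 1 - 1.2/3 = 0.6:
print(amplified_warming(direct, 0.6))  # roughly 3 C
print(amplified_warming(direct, 0.0))  # 1.2 C: no net feedback at all
```

Note how sensitive the result is to f: moving the assumed feedback fraction from 0.6 to 0.75 doubles the projected warming, which is why the dispute over how the feedback term is calculated matters so much.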
Two Languages, Two Sciences? In criticizing Monckton’s approach, Spencer states that it is no longer used by modelers. In introducing Monckton’s rebuttal, Spencer writes:
“In fairness to Lord Monckton, I have accepted his request to respond to my post where I criticized his claim that an “elementary error of physics” could be demonstrated on the part of climate modelers. While Christopher & I are in agreement that the models produce too much warming, we disagree on the reasons why. From what I can tell, Christopher claims that climatologists have assumed the theoretical 255K average global surface temperature in the absence of the greenhouse effect would actually induce a feedback response; I disagree… 255K is the theoretical, global average temperature of the Earth without greenhouse gases but assuming the same solar insolation and albedo. It has no feedback response because it is a pure radiative equilibrium calculation. Besides, the climate models do not depend upon that theoretical construct anyway; it has little practical value — and virtually no quantitative value — other than in conceptual discussions (how could one have clouds without water vapor? How could a much colder Earth have no more ice cover than today?). But I will let the reader decide whether his arguments have merit. I do think the common assumption that the climate system was in equilibrium in the mid-1800s is a dubious one, and I wish we could attack that, instead, because if some of the warming since the 1800s was natural (which I believe is likely) it would reduce estimates of climate sensitivity to increasing carbon dioxide even further.”
The issue raised illustrates that there may be a significant divergence in the IPCC Climate Establishment between the concepts used by the modelers and the concepts used by the management. The management issues the assessment reports that claim a great deal of certainty in their work, regardless of how ill-founded. The modelers may be far less certain. Explaining atmospheric energy flows creates difficulties for the modelers and is avoided by management.
Spencer’s dissertation advisor was a pioneer in numerical weather forecasting, Verner Suomi. [Suomi was Chairman of the Climate Research Board, which oversaw the “Ad Hoc Study Group on Carbon Dioxide and Climate”, the Charney Report.] Based on his background, Spencer is probably more attuned to the problems of numerical weather forecasting, and of using weather models to make long-range climate forecasts, than most establishment climate scientists. Perhaps this explains why Spencer left the IPCC.
One of the subordinate issues involved in the Spencer-Monckton discourse is the search for ZOD – a zero-order model of feedbacks in the climate, a time in which the earth’s climate was stable. The “Three Profs” called this the starting point. Given the internal turbulence in the climate system, there is no logical reason to assume ZOD exists – a time in which there were no feedbacks in the climate system. However, given that decades of IPCC reports assert that ZOD existed in the mid-19th century, it is appropriate to assess how well established it is, and how well it is described. This Monckton did, and found that, given ZOD, the warming power of CO2 was expanded using faulty logic; it is less than one-half of that used by the IPCC.
Please note that in closing his foreword to the Charney Report, Suomi writes:
“In cooperation with other units of the National Research Council, the Climate Research Board expects to continue review and assessment of this important issue in order to clarify further the scientific questions involved and the range of uncertainty in the principal conclusions. We hope that this preliminary report covering but one aspect of this many-faceted issue will prove to be a constructive contribution to the formulation of national and international policies.” [Boldface added.]
It is past time for the National Research Council and the National Academies to review the issue using comprehensive atmospheric temperature data that did not exist when the report was written. Unfortunately, they fail to do so and repeat mistakes of the past.
Next week, TWTW will discuss the testimony by Myles Allen, the expert witness for the cities of San Francisco and Oakland, and a major player in the IPCC. See links under Challenging the Orthodoxy, Defending the Orthodoxy, and http://www.ecd.bnl.gov/steve/charney_report1979.pdf
State of the Climate: The Global Warming Policy Foundation published a report by Ole Humlum, a former Professor of Physical Geography at the University Centre in Svalbard, Norway, and Emeritus Professor of Physical Geography, University of Oslo, Norway. His views are particularly interesting because Svalbard is an isolated archipelago between Norway and the North Pole, and one of the world’s northernmost inhabited areas, lying above the Arctic Circle at a latitude between 74º and 81º North. (Though not precisely fixed, the Arctic Circle runs at about 66º North.)
Over the last decade or so, we have been bombarded by claims of unprecedented Arctic warming. Among his findings, Humlum states:
“Arctic and Antarctic sea ice extent since 1979 have exhibited opposite trends, decreasing and increasing, respectively. Superimposed on these overall trends, however, variations of shorter duration are also important to understand year-to-year variations. In the Arctic, a 5.3-year periodic variation is important, while for the Antarctic a variation of about 4.5 years’ duration is important. Both of these variations reached their minima simultaneously in 2016, which explains the minimum in global sea ice extent. A shift towards larger ice extents in both hemispheres may have begun in 2017 (Figure 34), as predicted in ‘State of the Climate 2016’.
“The Northern Hemisphere snow cover extent has undergone important local and regional variations from year to year. The overall tendency since 1972, however, is towards overall stable snow extent conditions.”
Also, Humlum finds that the recent 2016 & 2017 warming was driven by an El Niño event and states:
“The recent 2015–16 oceanographic El Niño episode is among the strongest since the beginning of the record in 1950. Considering the entire record, however, recent variations between El Niño and La Niña episodes are not unusual.”
In addition, he states:
“There still appears to be a systematic difference between average global air temperatures estimated by surface stations and by satellites. Especially since 2003, the average global temperature estimate derived from surface stations has steadily drifted away from the satellite-based estimate in a warm direction.”
See link under Challenging the Orthodoxy.
Ultimate Bureaucratic Science: The IPCC and the USGCRP ignore atmospheric temperature data, taken where the greenhouse effect occurs, in favor of surface data, which have many more influences unrelated to greenhouse gases. This deliberate ignorance is all too typical of government bureaucracies that place political policy over rigorous science. The EPA has taken bureaucratic science to an even greater level by issuing regulations without any scientific justification, merely claiming that the justification exists but that the bureaucracy cannot reveal it. This is like a shell game in which the player must find the pea under a shell, but there is no pea. It is perfect for bureaucrats who do not wish to be criticized for using shoddy science to establish regulations.
Writing in the Wall Street Journal, Steve Milloy of JunkScience.com exposes this shell game, calling it “secret science.” The claim that fine particulate matter smaller than 2.5 microns, PM2.5 (soot), damages human health began with the Clinton administration in the 1990s, was largely ignored by the Bush administration, then became important to Mr. Obama’s Clean Power Plan. Current EPA Administrator Scott Pruitt has declared that “secret science” will not be used in EPA regulations.
Of course, former EPA Administrator Gina McCarthy, under Mr. Obama, objects claiming: “This approach would undermine the nation’s scientific credibility…” Scientific credibility is built on secrecy? Transparency and reproducibility are contrary to rigorous science?
Others object to Mr. Milloy’s analysis, claiming it is not conclusive or is too limited. However, all they must do is produce more rigorous science showing that PM2.5 damages human health, which the EPA claimed it had. Complicating matters, the International Agency for Research on Cancer (IARC) of the World Health Organization (WHO) has declared PM2.5 to be a deadly human carcinogen, categorized as: “Group 1: carcinogenic to humans: There is enough evidence to conclude that it can cause cancer in humans.”
Thus far, there appear to be no studies independent of EPA secret science that conclude PM2.5 causes cancer in humans. Is this what former EPA officials call “undermining the nation’s scientific credibility”? It will be interesting to see how this issue develops. See Article #2 and links under Change in US Administrations.
Does Trump Care? Roger Pielke Jr., who has been attacked by members of Congress for questioning false claims of intensifying extreme weather events, suggests that President Trump does not care about the integrity of science. Pielke argues that the failure to appoint a director of the White House Office of Science and Technology Policy (OSTP), which has about 50 staffers, is an example of Trump’s lack of concern.
This may be true. Until there is an appropriate director of the OSTP, the USGCRP will continue to be bureaucratic and churn out reports based on inadequately tested models, using inappropriate data, as it recently did with an interim report. The thirteen government entities with representatives in the USGCRP will continue this bureaucratic science. For example, the Department of Agriculture web site ignores the great benefits that enhanced carbon dioxide has on the greening of the earth and on crop productivity, as demonstrated by Landsat images and numerous studies discussed by CO2 Science. It is as if the Department of Agriculture does not understand the science of photosynthesis.
If Mr. Trump wishes to drain the scientific swamp, he needs to start with the OSTP. See links under Change in US Administrations and Review of Recent Scientific Articles by CO2 Science.
Number of the Week: Two weeks by 2025. The European Centre for Medium-Range Weather Forecasts (ECMWF), perhaps the finest of all groups forecasting with numerical weather models, states on its web site:
“Skill in medium-range weather predictions in 2016, on average, extends to about one week ahead. By 2025 the goal is to make skillful ensemble predictions of high-impact weather up to two weeks ahead.”
In a process called re-analysis, the ECMWF constantly tests its predictions and models against actual atmospheric observations, gathered daily. Yet, the IPCC and the USGCRP claim their models using similar techniques can predict / forecast / project future climate 100 years hence – without testing the models against actual atmospheric observations. Pure Bureaucratic Science. See link under Changing Weather.
1. The EPA Cleans Up Its Science
Now Congress should act to lock in place data transparency.
By Steve Milloy, WSJ, Mar 26, 2018
SUMMARY: In discussing the development of “secret science” by the EPA, the author of “Scare Pollution: Why and How to Fix the EPA” writes:
“The Environmental Protection Agency will no longer rely on ‘secret’ scientific data to justify regulations, Administrator Scott Pruitt announced last week. EPA regulators and agency-funded researchers have become accustomed to producing unaccountable, dodgy science to advance a political agenda.
“The saga began in the early 1990s, when the EPA sought to regulate fine particulate matter known as PM2.5—dust and soot smaller than 2.5 microns in diameter. PM2.5 was not known to cause death, but by 1994 EPA-supported scientists had developed two lines of research purporting to show that it did. When the studies were run past the EPA’s Clean Air Science Advisory Committee, it balked. It believed the studies relied on dubious statistical analysis and asked for the underlying data. The EPA ignored the request.
“As the EPA prepared to issue its proposal for PM2.5 regulation in 1996, Congress stepped in. Rep. Thomas Bliley, chairman of the House Commerce Committee, sent a sharply written letter to Administrator Carol Browner asking for the data underlying studies. Ms. Browner delegated the response to a subordinate, who told Mr. Bliley the EPA saw ‘no useful purpose’ in obtaining the data. Congress responded by inserting a provision in a 1998 bill requiring that data used to support federal regulation must be made available to the public via the Freedom of Information Act. But it was hastily written, and a federal appellate court held the law unenforceable in 2003.
“The controversy went dormant until 2011, when a newly Republican Congress took exception to the Obama EPA’s anti-coal rules, which relied on the same PM2.5 studies. Again, the EPA was defiant. Administrator Gina McCarthy refused requests for the data sets and defied a congressional subpoena.
“Bills to resolve the problem died in the Senate. Democrats argued that requiring data for study replication is a threat to intellectual property and an invasion of medical privacy. In fact, the legislation would protect property by requiring a confidentiality agreement, and no personal medical data or information would have been released.”
Mr. Milloy describes how he and his colleagues used California “Death Public Use Files” to determine PM2.5 was not associated with deaths in California.
“The best part is that if you don’t believe the result, you can get the same data for yourself from California and run your own analysis. Then we’ll compare, contrast and debate. That’s how science is supposed to work.
“It would be better if Congress would pass a law requiring data transparency. A future administrator may backslide on the steps Mr. Pruitt is taking. In the meantime, we have science in the sunshine.”
2. Attack of the Killer Cappuccino
Californians can look forward to cancer warnings on coffee.
Editorial, WSJ, Mar 30, 2018
SUMMARY: The editorial underlines the importance of requiring science used for public policy to be transparent; the burden of proof must be on those who claim harm. It states:
Californians will soon get something besides milk and cinnamon with their coffee—a mandated warning that their morning pick-me-up may kill them. A Los Angeles Superior Court ruled Thursday against Starbucks and other cafes and gas stations, penalizing them because they couldn’t definitively prove that coffee doesn’t cause cancer. In addition to slapping a cancer warning on each cup of joe, the companies may have to pay millions in civil penalties and lawsuit settlements.
At issue is a chemical called acrylamide, which is created when coffee beans are roasted. It’s also common in other foods, including bread, cookies, cereals, potato chips and French fries. But under California’s 1986 Safe Drinking Water and Toxic Enforcement Act, better known as Proposition 65, acrylamide is listed as a likely human carcinogen.
California’s cancer list relies heavily on junk science, and with acrylamide the evidence is questionable at best. Some government agencies want more research into its carcinogenic potential, and the American Cancer Society does, too.
But the group also notes that even those tentative concerns derive mainly from studies that examined its effects on lab animals, not people. The doses of acrylamide given to rats and mice “have been as much as 1,000 to 10,000 times higher than the levels people might be exposed to in foods,” the American Cancer Society says. The group adds that for humans ‘there are currently no cancer types for which there is a clearly increased risk related to acrylamide.’
The evidence is so scant that even the World Health Organization’s International Agency for Research on Cancer admitted in 2016 that there was ‘no conclusive evidence for a carcinogenic effect of drinking coffee.’ Its review of more than 1,000 studies turned up evidence that coffee may reduce the risk of some types of cancer. This is the same alarmist outfit that thinks everything from red meat to working the night shift causes cancer.
But however feeble the evidence, Prop. 65 encourages trial lawyers and their front groups to sue on behalf of the state by offering them a cut of the civil penalties. Last year Prop. 65 cases yielded $25.6 million in settlements, and more than three-fourths of that sum went to the lawyers. Trial lawyer Raphael Metzger brought the case against Starbucks and 90 other cafes and gas stations, working on behalf of something called the Council for Education and Research on Toxics. The same “nonprofit” and attorney also sued McDonald’s and Burger King over acrylamide in 2002.
After discussing possible penalties, the editorial concludes:
The case is further proof that Prop. 65 is a lot like a cup of coffee: Even if it doesn’t kill you, it can keep you up at night. Or maybe it’s further proof that California progressives are nuts.