The Week That Was: April 18, 2020, Brought to You by www.SEPP.org
By Ken Haapala, President, Science and Environmental Policy Project
Quote of the Week: “It is the mark of an educated man to look for precision in each class of things just so far as the nature of the subject admits.” – Aristotle, Nicomachean Ethics [H/t Demetris Koutsoyiannis]
Number of the Week: 2 cents
Limits of Models: In the midst of the lockdown of much of the U.S. public and the collapsing economy, some Americans are learning a few important lessons. One, the country is a republic with a written Constitution. As President Trump realized this week, that Constitution grants the Federal government limited powers, even during a health emergency. And two, numerical models are not infallible. Indeed, almost daily, Drs. Birx and Fauci repeat on television that “this model is only as good as the data we put into it.” Speculation, scenarios, and projections may be interesting, but they must be supported by evidence fitting the issue. Unfortunately, all too frequently government policy has been based on models using inappropriate data.
For example, for several decades beginning in the 1970s, Federal government energy policy was based on the fear, derived from Federal energy models, that the country was about to run out of oil and natural gas. These models considered only easily recoverable reservoirs (subsurface pools) onshore, and ignored the vast offshore resources such as the Gulf of Mexico and the North Slope of Alaska, as well as vast difficult-to-release onshore resources, especially tight shale.
Similarly, the UN and some US government entities promote the fear that carbon dioxide and other greenhouse gases are causing dangerous global warming, while ignoring the vast evidence that greenhouse gases are not dangerously warming the atmosphere, where the greenhouse effect occurs.
As Drs. Birx and Fauci realize, solid, factually based data of high quality are needed to establish a model that makes reasonable projections/forecasts. Inappropriate data should not be used. For example, at midnight GMT on April 18, worldometers reported that the death rate per million of population was 118 in the USA, yet only 3 in China.
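The per-million comparison is simple arithmetic. A minimal sketch follows; the death counts and populations below are illustrative figures chosen to be consistent with the reported rates, not quotations from worldometers:

```python
def deaths_per_million(deaths, population):
    """Per-capita death rate, as reported by sites like worldometers."""
    return deaths / population * 1_000_000

# Illustrative counts only, chosen to be consistent with the
# reported rates (not quoted from worldometers):
print(round(deaths_per_million(38_700, 328_000_000)))   # USA: 118
print(round(deaths_per_million(4_600, 1_400_000_000)))  # China: 3
```

The point of the comparison is that two countries of vastly different size can only be compared on a per-capita basis, and that the Chinese figure is implausibly low.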
One has to be skeptical of any model projecting future deaths that was designed using the numbers from China. Correspondingly, one has to be skeptical of climate models designed using doctored surface data of dubious quality, namely those created by laboratories in the US and largely used by the UN Intergovernmental Panel on Climate Change (IPCC).
Contrary to what is often expressed in the press, models are a tool for understanding, not an answer. Models are not universal, although sometimes certain mathematical functions may apply to many uses. For example, experiments show that the radiative “forcing” due to increasing CO2 follows a smooth logarithmic function, which describes the ever-decreasing influence of adding more carbon dioxide (CO2) into the atmosphere. Further, a model may describe a certain issue or problem well but may be unsuitable for forecasting or, especially, long-term prediction.
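The logarithmic relationship referred to here is commonly written as ΔF = 5.35 ln(C/C0) W/m² (the fit of Myhre et al., 1998). A short sketch of its diminishing-returns behavior:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate radiative forcing (W/m^2) of CO2 relative to a
    baseline concentration, using the widely cited logarithmic fit
    dF = 5.35 * ln(C / C0) (Myhre et al., 1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each doubling adds the same ~3.7 W/m^2, so each added ppm matters
# less than the one before it.
print(round(co2_forcing(560), 2))                      # 280 -> 560 ppm: 3.71
print(round(co2_forcing(1120) - co2_forcing(560), 2))  # 560 -> 1120 ppm: 3.71
```

Because each successive doubling yields the same increment, the marginal effect of each added molecule keeps shrinking.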
Here, numerical weather models are an example. They usually make good forecasts a few days out and, occasionally, up to two weeks out. Predicting extreme weather events a month out is the goal, but it remains to be achieved. The mathematical concept of chaos creates problems. Using similar models to forecast one hundred years out is completely unrealistic. Indeed, climate models differ from weather models for that very reason.
Climate models assume that increasing amounts of CO2 will cause increasing radiative forcing, which in turn will cause the temperature to rise (by how much?), and in turn cause more (how much more?) water evaporation and therefore higher (how much higher?) greenhouse forcing from H2O. More heating should occur at the poles because the ratio of CO2 to atmospheric H2O is higher. Tack on a number of assumptions about vegetation, storms, albedo of the land (as opposed to that of clouds that are assumed not to change), and so forth, and the result is hundreds of scenarios from a handful of different models. The results are all over the map, with precious little contact with reality.
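The amplification chain described here is often summarized with a simple linear-feedback formula, ΔT = ΔT0 / (1 − f), where f is the fraction of warming fed back. The numbers below are illustrative assumptions for the sketch, not output from any specific model:

```python
def amplified_warming(direct_warming, feedback_fraction):
    """Linear-feedback amplification: each degree of direct warming
    induces a further `feedback_fraction` of a degree (e.g. via added
    water vapor), summing as a geometric series to dT = dT0 / (1 - f)."""
    assert 0 <= feedback_fraction < 1, "series diverges at f >= 1"
    return direct_warming / (1 - feedback_fraction)

# Illustrative numbers only: roughly 1.1 C of direct warming per CO2
# doubling, with assumed water-vapor feedback fractions.
print(amplified_warming(1.1, 0.0))  # no feedback: 1.1
print(amplified_warming(1.1, 0.5))  # strong assumed feedback: 2.2
```

The “how much?” questions in the paragraph above amount to asking what f actually is; small changes in the assumed feedback fraction produce large changes in the projected warming, which is one reason model results scatter so widely.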
There is a mental process needed to create a realistic model to address an issue or problem. The steps below are not all-inclusive, but are suggestions:
- Identify the problem, precisely
- Identify the best data, controlled experimental data preferred to observational data
- Design the model using the best data available
- Systematically test the model against the best data available
- Systematically test the model against other data
- Test the model for predictive capability
- Continue to test the model with changing data
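The testing steps above can be sketched with a toy example: design (fit) a model on one data set, then test its predictive skill on data it never saw. All numbers below are invented for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x (the 'design' step)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # intercept a, slope b

def mean_abs_error(xs, ys, a, b):
    """Average absolute prediction error of y = a + b*x (the 'test' step)."""
    return sum(abs(y - (a + b * x)) for x, y in zip(xs, ys)) / len(xs)

train_x, train_y = [0, 1, 2, 3], [0.1, 1.0, 2.1, 2.9]  # data used in design
test_x, test_y = [4, 5], [4.2, 4.8]                    # held-out data
a, b = fit_line(train_x, train_y)
print(round(mean_abs_error(test_x, test_y, a, b), 3))  # out-of-sample error
```

A model judged only against the data used to design it will always look better than it is; the held-out test is what reveals predictive capability.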
Perhaps the biggest deficiency of climate modeling is the failure to use the best data available – atmospheric temperature trends.
These data clearly demonstrate that current atmospheric warming is not dangerous. Atmospheric temperature trends and additional greenhouse gases are issues that need constant attention, but not drastic regulation. See links under Challenging the Orthodoxy and https://www.worldometers.info/coronavirus/#countries
Examples of Modeling: There are several interesting examples of trying to develop a model for COVID-19. Russian-American engineer Dmitry Orlov starts by describing the growth of bacteria in a petri dish containing food (agar). At first the bacteria grow explosively, then slow down, and eventually stop when all the agar is consumed. The growth rate is proportional to the number of bacteria present, but also to the amount of agar not yet covered by bacteria. The growth pattern can be described by a logistic function, which can also be used to describe pandemics. However, Orlov cautions:
“Mathematical models can be arbitrarily complicated and, as an immediate consequence, arbitrarily wrong. It is possible to fit a polynomial to just about any data just by adding enough terms to it, but the predictive value of such an exercise is pretty much nil. The logistic model is simple. It uses just three parameters: midpoint, maximum and growth rate. And it models real, physical phenomena that are ubiquitous in nature: exponential growth and exponential [logarithmic] saturation.” Boldface added
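Orlov's three-parameter logistic function can be written out directly. The parameter values below are hypothetical, chosen only to show the curve's shape, not his estimates:

```python
import math

def logistic(t, midpoint, maximum, growth_rate):
    """Cumulative logistic curve: near-exponential growth at first,
    saturating toward `maximum`; `midpoint` is the time of fastest
    growth. These are the three parameters Orlov names."""
    return maximum / (1 + math.exp(-growth_rate * (t - midpoint)))

# Hypothetical parameters: fastest growth on day 40, 10,000 total cases.
for t in (0, 40, 80):
    print(t, round(logistic(t, midpoint=40, maximum=10_000, growth_rate=0.2)))
```

At the midpoint the curve is at exactly half the maximum; before it, growth looks exponential, and after it, growth saturates. This is why, as the next paragraph notes, neither the midpoint nor the upper bound can be pinned down until the worst is already over.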
After adjusting for the highly questionable data from China, Orlov estimates (guesses) when the midpoint of the pandemic will be reached. Probably, this cannot be established until after the worst is over, and it depends on the success of the efforts to control exposure. Another issue is the upper bound – the eventual total number of deaths. Again, this will not be known until after the worst is over.
For a different model, on his blog, ScienceBits, physics professor Nir Shaviv, an advocate of Svensmark’s cosmic-ray hypothesis, is posting a series of entries modeling COVID-19 with a time-varying infection rate. He recognizes the importance of time, which is frequently forgotten by those making forecasts using models.
For example, those predicting Florida will soon drown use graphics showing much of Florida was once flooded. During the last interglacial, about 120,000 years ago, much of Florida was covered by oceans. Limestone about 120,000 years old has been mined well above today’s sea level. So, one can “predict” that if the present warm period continues, much of Florida will be submerged. The issue is when – 1,000 years from now, 10,000, or 100,000? See links under Science, Policy, and Evidence, and Models v. Observations.
Examples of Erroneous Modeling: On his blog, The Pipeline, Tom Finnerty has an interview with Ross McKitrick, who with Steve McIntyre demolished the infamous “hockey stick” developed by Michael Mann et al. and featured in the Third Assessment Report of the IPCC (AR-3, 2001). McIntyre tried to replicate the hockey stick and found significant errors. Of course, the IPCC and its followers ignore the publications of McIntyre and McKitrick. McKitrick states:
“I think there are going to be some reckonings, especially for the climate modelling industry. They’ve got some big failures to deal with. And that wouldn’t be a problem if people understood that models of any kind, including climate models, are really study tools. They’re ways of trying to understand the system, because the system is too complicated to figure out, so you build a simplified climate model of it and you try to figure out how that works and you hope that you learn something. But when it’s set up as, this is a forecasting tool that we can make precise calculations with and base policy decisions on, then we’re entitled to ask ‘Well, how good of a forecasting tool is this?’ And they don’t work very well for that.” Boldface added
Finnerty concludes the report with:
“Predictive models are comforting, because they make us feel like we know what is going to happen, and we can act accordingly. But sometimes the real world, in all of its messy unpredictability, intrudes. Here’s hoping that our adventure with the WuFlu teaches us to be a little more cautious about throwing everything away on an incomplete data set.”
In a review of “Radical Uncertainty: Decision-Making Beyond the Numbers” by economist John Kay and former head of the Bank of England Mervyn King, journalist Joseph Sternberg states:
“We’re in the grip of a global pandemic that we don’t understand and must make immediate choices that balance the demands of our health against the needs of our economy.
“The main advice to emerge from this book is: Don’t ask an economist. Economics has claimed for itself the right to address health policy and many other issues outside its usual orbits. ‘Radical Uncertainty’ reminds us how inappropriate that is. Chemists, plumbers and doctors identify problems within their subject areas, then develop tools with which to solve them. Economists appear unbidden on any doorstep they please with a box of mostly useless tools in search of problems.
“Their field, they note, is dominated by probabilistic methods. Politicians and their advisers assess risks with the aid of statistical tools derived from games of chance, in the hope that scientifically quantifying risk will allow them to make intelligent trade-offs about the future. ‘For more than half a century a single approach to rational choice under uncertainty has dominated economics,’ the authors write. ‘Agents optimize, subject to defined constraints. They list possible courses of action, define the consequences of the various alternatives, and evaluate these consequences. Then they select the best available option.’
“There’s a place for those tools, but economics habitually overreaches. Modern economists assume that whatever outcome their models predict must be axiomatically rational. When human beings fail to act according to these predictions, it is taken as a failure of the people, not the model.
“This insulting assumption, Messrs. Kay and King point out, is at the heart of microeconomics’ behavioral turn and the proliferation of “nudge” quackery in policy-making circles. The same tic enters macroeconomics as an appeal to exogenous shifts or shocks to explain economic crises the models didn’t see coming or about which economists simply have chosen not to fret.”
After stating that life is not a game of chance, the reviewer observes that difficult decisions often call for those who can judge what is politically tolerable rather than mathematically determined, for there is no optimal solution. Then the reviewer concludes with:
“If you’re radically uncertain about what to do, doing nothing is often the best option.
“Corporate-strategy documents, they note, are designed to lend a false air of probabilistic precision to what is at best a guess about the market. Economists measure the economic impact of public-works projects by feeding invented numbers into faulty models, deriving outputs that enter the public realm with an undeserved aura of certainty.”
See links under Model Issues and Article #1.
Epistemic Trespassing: Some “orthodox” climate scientists attempt to discredit other scientists who work in climate studies. Comments by Will Happer and Freeman Dyson were dismissed because “they are not climate scientists” even though they understand quantum theory, which is needed to understand the greenhouse effect: how certain molecules in the atmosphere can slow the loss of energy from the earth’s surface. McIntyre and McKitrick were dismissed because “they are not climate scientists,” though they showed that the statistical techniques accepted by the IPCC gave misleading results.
On her blog, Climate Etc., Judith Curry brings up another trick – epistemic trespassing:
“Epistemic trespassers are thinkers who have competence or expertise to make good judgments in one field but move to another field where they lack competence—and pass judgment, nevertheless. We should doubt that trespassers are reliable judges in fields where they are outsiders.” In other words, stay in your lane.
After being informed of this, McIntyre responded:
“Any discipline, such as Mannian/PAGES2K temperature reconstructions, which bases its results on ex post screening on industrial scale, necessarily produces dross on an industrial scale and is not actual ‘expertise.’ Shouldn’t be contentious.”
See links under Seeking a Common Ground.
Humidity: The 1979 Charney Report speculated that a major increase in water vapor from CO2-caused warming would greatly amplify the modest warming CO2 alone would cause. This significant water vapor amplification has not been found, and most global climate models, because they still contain this assumption, overestimate atmospheric warming by 2.5 to 3 times. After reviewing a paper by Demetris Koutsoyiannis published in Hydrology and Earth System Sciences, Paul Robeson wrote:
“He [Koutsoyiannis] finds that while there are fluctuations on short- and long-term time scales, humidity is only increasing by about one-third the rate predicted in climate models, and overall hydrological intensity is going down, not up.”
See links under Challenging the Orthodoxy.
Greatly Improving Air: There is great distortion regarding the health benefits of decreasing economic activity. Photos taken over polluted areas of China are presented as showing global improvement, but they do not; the issue is regional conditions. Today, thanks to pollution-control devices, the US has extremely clean air by historical standards, perhaps the cleanest air since Europeans arrived in great numbers with their “miracle” tool, the axe. A NASA website (as distinct from NASA-GISS) states:
“When we talk about ‘air pollution,’ we’re referring to chemicals or particles in the atmosphere that are known to have negative health effects on humans. The Clean Air Act of 1970 established legislation that requires the tracking of six of those pollutants — nitrogen dioxide (NO2), ground-level ozone, carbon monoxide, particulate matter (microscopic specks of solid or liquid material in the air), sulfur dioxide, and lead. Satellite instruments are measuring all of these except lead.”
“‘We’ve been able to show that since 2004, NO2 levels have dropped as much as 50% depending on what metropolitan area we’re talking about. In fact, the air in the United States is now the cleanest it has been in the modern industrial era,’ Haynes said.”
It should be noted that the satellites measuring visible pollution cannot see through clouds. See links under Measurement Issues – Atmosphere.
How Much? For years, TWTW traced the US spending on climate science using reports by the GAO and the Congressional Research Service, then estimates from the White House. These reports stopped during the Obama Administration. Using realistic estimates of US subsidies for wind and solar, TWTW had estimated the spending exceeded the total spending on the entire Apollo Project ($200 billion). The spending has continued to increase, but the estimates are not well substantiated. On his blog, Tom Finnerty presents more recent estimates, in part based on a 2018 report by Stephen Moore of The Heritage Foundation. See links under Funding Issues.
SEPP’S APRIL FOOLS AWARD
SEPP is conducting its annual vote for the recipient of the coveted trophy, The Jackson, a lump of coal. Readers are asked to nominate and vote for who they think is most deserving, following these criteria:
- The nominee has advanced, or proposes to advance, significant expansion of governmental power, regulation, or control over the public or significant sections of the general economy.
- The nominee does so by declaring such measures are necessary to protect public health, welfare, or the environment.
- The nominee declares that physical science supports such measures.
- The physical science supporting the measures is flimsy at best, and possibly non-existent.
The eight past recipients, Lisa Jackson, Barack Obama, John Kerry, Ernest Moniz, John Holdren, Gina McCarthy, Jerry Brown, and Christiana Figueres, are not eligible. Generally, the committee that makes the selection prefers a candidate with a national or international presence. The voting will close on June 30. Please send your nominee and a brief reason why the person is qualified for the honor to Ken@SEPP.org. Thank you.
Number of the Week: 2 cents: According to the American Lung Association:
“In 2015-2016, only 2 cents of every dollar that states received from their settlement with tobacco companies went to smoking cessation classes and public health programs for those affected with tobacco related illnesses.”
What happened to the other 98% of moneys collected? And corporations that sell products that people are willing to buy are called greedy? See links under Litigation Issues.
- ‘Radical Uncertainty’ Review: The Dismal Overreachers
Economists have claimed the right to address many issues outside their discipline’s orbit. This book reminds us how inappropriate that is.
By Joseph Sternberg, WSJ, Apr 12, 2020
Discussed above in the This Week section
- How Environmental Movement Plans to Leverage the Coronavirus Pandemic
Activists are pressing governments to tie tougher rules on emissions to post-pandemic stimulus aid
By Sarah McFarlane, WSJ, Apr 6, 2020
TWTW Summary: After falling for the environmental trick that falling air pollution over China indicates what is occurring worldwide, the author brings up some of the other tricks environmental advocates are preparing. She states:
“One hopeful development from the coronavirus pandemic: Global air quality is improving dramatically as the outbreak sends many countries into lockdown, climate scientists say.”
“Many researchers, intergovernmental organizations and activists hope the world can learn lessons from the insights the pandemic offers regarding human impact on the environment, and groups including Greenpeace, the International Energy Agency and the World Resources Institute are seizing the crisis as an opportunity to press governments to make industrial stimulus packages contingent on modernizing energy systems.” [Boldface added]
After a few examples the author states:
“A slowdown in activity during the 2009 economic downturn reduced carbon emissions and air pollution, but emissions rose 6% the following year, data from the International Energy Agency showed, as governments unleashed stimulus programs to reinvigorate growth.
“A decade later the mood is different. Many activist groups and climate scientists feel encouraged by the Paris agreement on climate goals signed by governments in 2015 and a growing industrial impetus to reduce carbon emissions.
“This time around, governments are more engaged on climate issues and under pressure to meet national targets, which could influence the format of the trillions of dollars of stimulus packages expected to come.
“‘We are asked by many governments around the world to give them advice on how they can shape the energy component of these stimulus packages in order to boost the energy resilience and accelerate the energy transition,’ said Fatih Birol, executive director at the IEA.
“Greenpeace U.K. signaled that it plans to press the U.K. government to be tough on companies that receive stimulus funds. ‘Any loans must come with strings attached to reduce emissions so that in the months to come the government can steer high-carbon industries toward the cleaner, healthier and more resilient future we all need,’ said Fiona Nicholls, a climate campaigner for the group.”