Weekly Climate and Energy News Roundup #342

The Week That Was: 2019-01-05, by Ken Haapala

Brought to You by www.SEPP.org, The Science and Environmental Policy Project

Quote of the Week: “The formulation of a problem is often more essential than its solution, which may be merely a matter of mathematical or experimental skill. To raise new questions, new possibilities, to regard old problems from a new angle requires creative imagination and marks real advances in science.” – Albert Einstein

Number of the Week: 10% Less v. 50% Less

20, 40, and 60 Years: The greenhouse gas effect occurs in the bulk atmosphere. Greenhouse gases cannot cause surface warming if the atmosphere is not warming, and the rate of surface warming caused by greenhouse gases cannot be greater than the rate of atmospheric warming. Physicist Will Happer has estimated that if greenhouse gases are causing surface warming, the rate of atmospheric warming must be about 20% greater than the rate of surface warming.
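The consistency check Happer's estimate implies can be sketched in a few lines. The trend value below is a hypothetical placeholder chosen only to illustrate the arithmetic, not a measured figure:

```python
def implied_atmospheric_trend(surface_trend_per_decade, amplification=1.20):
    """If greenhouse gases drive the surface warming, the bulk-atmosphere
    trend should be roughly 20% greater than the surface trend."""
    return surface_trend_per_decade * amplification

# Hypothetical surface trend in deg C per decade (not a measured value)
surface_trend = 0.15
expected = implied_atmospheric_trend(surface_trend)
print(f"Implied atmospheric trend: {expected:.3f} C/decade")  # -> 0.180
```

An observed atmospheric trend falling well below the implied value would argue against a greenhouse-gas cause for the surface warming, which is the logic of the paragraph above.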

Much of what is called greenhouse gas theory, or carbon dioxide (CO2)-caused warming, was devised in the 1970s, when numerical modeling became popular with the advent of improved computers. The concepts advanced in reports such as the 1979 Charney Report projected that the small amount of warming caused by increasing CO2, as observed in laboratory experiments, would be greatly enhanced, or amplified, by an increase in water vapor, the dominant greenhouse gas. This was speculation, because there were no comprehensive measurements of atmospheric temperature trends at the time.

In 1990, Roy Spencer and John Christy published a method of using data collected by satellites to estimate temperature trends for the entire atmosphere except the extreme polar regions. The data begin in December 1978 and are verified by independent measurements from instruments on weather balloons and, subsequently, by weather reanalysis data. When revealed, small errors in the initial calculations were promptly corrected. We now have 40 years of comprehensive global atmospheric temperature trends, the most rigorous global temperature data in existence.

Roy Spencer, John Christy, and the Earth System Science Center (ESSC) at the University of Alabama in Huntsville have reported their findings on atmospheric temperature trends since 1979. For the last 20 years, there has been no statistically significant warming of the atmosphere. For the last 40 years, atmospheric warming has been modest and not dangerous (0.13ºC/decade, 0.23ºF/decade). Over the last 60 years, the tropical atmosphere, as measured by weather balloons (and which 1970s climate thinking expected to warm greatly), has warmed one-half as much as climate models project. The disparity between models and observations is increasing.
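A per-decade figure like the 0.13 ºC/decade above is, in essence, an ordinary least-squares slope fitted to a long anomaly series. A minimal sketch with synthetic monthly data (the noise level and random seed are arbitrary assumptions, not properties of the satellite record):

```python
import numpy as np

rng = np.random.default_rng(42)
months = np.arange(480)              # 40 years of monthly anomalies
true_trend = 0.13 / 120.0            # 0.13 C/decade expressed per month
series = true_trend * months + rng.normal(0.0, 0.2, months.size)

# Least-squares linear fit; slope comes back in deg C per month
slope_per_month, _ = np.polyfit(months, series, 1)
print(f"Fitted trend: {slope_per_month * 120:.2f} C/decade")
```

Multiplying a Celsius trend by 1.8 gives the Fahrenheit figure, which is how 0.13 ºC/decade corresponds to the quoted 0.23 ºF/decade.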

It is past time for the UN Intergovernmental Panel on Climate Change (IPCC), the US Global Change Research Program (USGCRP), and other government entities to abandon the 1970s concepts built into their models. Their thinking is no more scientifically current than the 1970s belief that the US was about to run out of oil and natural gas.

No doubt the UN perceives that by creating a fear of CO2-caused dangerous warming, it will receive $100 billion in new money it can control; thus it will not be responsive to changing scientific knowledge. However, the USGCRP and other US government entities are accountable to Congress and the American public. The USGCRP web site states:

“The U.S. Global Change Research Program (USGCRP) is a Federal program mandated by Congress to coordinate Federal research and investments in understanding the forces shaping the global environment, both human and natural, and their impacts on society.”

This program's refusal to discuss natural causes of climate change, its inflation of human causes, and its exaggeration of global warming are inexcusable. John Christy has testified before Congress that the models are failing, yet the USGCRP continues to use them. The failure to adjust models to ongoing advances in knowledge is a manifestation of bureaucratic science at its worst: incompetence through mental stagnation. Specific issues will be discussed more fully in future TWTWs. See links under Challenging the Orthodoxy and https://www.globalchange.gov/about


Formulating the Problem: As Einstein states in the “Quote of the Week,” formulating the problem can be critical to understanding it. The issue of greenhouse gas warming has been poorly formulated by government entities. They focus on surface temperatures.

We cannot hope to understand the extent to which CO2 may be warming the earth by examining surface temperatures. These temperatures vary naturally by the movement of two fluids (the oceans and the atmosphere) against each other and irregular land masses (above and below the seas). These fluids are placed into chaotic motion by uneven heating from a varying sun, as the globe rotates and orbits the sun. Claiming that CO2 is the control knob of complex interactions is silly.

Understanding the changes in atmospheric temperatures is a necessary first step, but it is not sufficient. We have warming from increasing CO2, which inhibits the cooling of the earth by slowing the flow of energy from the surface to space, and we have warming from changing water vapor, the dominant greenhouse gas. A significant issue is how much of the water vapor warming is natural as compared with feedback from increased CO2.

As can be seen from the atmospheric temperature trends, El Niños, which put a lot of water vapor into the atmosphere, can cause significant warming for months afterward, particularly in the Arctic. The summer highs in the Arctic are not increasing, but the temperatures during the winter and the shoulder months appear to be warming. This is probably from increasing water vapor, which can have a significant greenhouse gas effect, given that the Arctic is very cold and dry in the winter. To what extent is this increased water vapor caused by CO2 warming?

For example, one of the issues that needs to be sorted out is whether increasing CO2 increases the frequency and intensity of El Niños, which have been observed since the 1500s and probably have been occurring for thousands of years. See links under Measurement Issues – Atmosphere.


Failing of NCA4: On her web site, Judith Curry reviews the Fourth National Climate Assessment (NCA4) by the USGCRP. Curry begins by countering a quote from NCA4:

“’You can say I don’t believe in gravity. But if you step off the cliff you are going down. So we can say I don’t believe climate is changing, but it is based on science.’ – Katharine Hayhoe, co-author of the 4th National Climate Assessment Report.”

“So, should we have the same confidence in the findings of the recently published 4th (U.S.) National Climate Assessment (NCA4) as we do in gravity? How convincing is the NCA4?” – Judith Curry

TWTW would add that the report uses surface temperature data, which is about as valuable for finding the influence of CO2 as using calculations of earth’s gravity to estimate gravity on the moon.

Curry goes on to write:

“I’ve just completed rereading Vol I of the NCA4. There is so much here of concern that it is difficult to know where to start. I have been very critical of the IPCC in the past (but I will certainly admit that the AR5 was a substantial improvement over the AR4). While the NCA4 shares some common problems with the IPCC AR5, the NCA4 makes the IPCC AR5 look like a relative paragon of rationality.


“Since the NCA4 is guiding the U.S. federal government in its decision making, not to mention local/state governments and businesses, it is important to point out the problems in the NCA4 Reports and the assessment process, with two objectives:


· provide a more rational assessment of the confidence that should be placed in these findings

· provide motivation and a framework for doing a better job on the next assessment report.”

In this, the first of several posts, Curry discusses the massive overconfidence in NCA4 (to TWTW it is arrogance from ignorance). Overconfidence is a major problem in the global warming community. Curry concludes with:

“As a community, we need to do better — a LOT better. The IPCC actually reflects on these issues in terms of carefully considering uncertainty guidance and selection of a relatively diverse group of authors, although the core problems still remain. The NCA appears not to reflect on any of this, resulting in a document with poorly justified and overconfident conclusions.


“Climate change is a very serious issue — depending on your perspective, there will be much future loss and damage from either climate change itself or from the policies designed to prevent climate change. Not only do we need to think harder and more carefully about this, but we need to think better, with better ways justifying our arguments and assessing uncertainty, confidence and ignorance.


“Sub-personal biases are unavoidable, although as scientists we should work hard to be aware and try to overcome these biases. Multiple scientists with different perspectives can be a big help, but it doesn’t help if you assign a group of ‘pals’ to do the assessment. The issue of systemic bias introduced by institutional constraints and guidelines is of greatest concern.


“The task of synthesis and assessment is an important one, and it requires some different skills than a researcher pursuing a narrow research problem. First and foremost, the assessors need to do their homework and read tons of papers, consider multiple perspectives, understand sources of and reasons for disagreement, play ‘devil’s advocate’, and ask ‘how could we be wrong?’


“Instead, what we see in at least some of the sections of the NCA4 is bootstrapping on previous assessments and then inflating the confidence without justification.”


See links under Challenging the Orthodoxy


Changing Sahara: In his classic book, “Climate, History and the Modern World,” climate research pioneer H.H. Lamb wrote about the cooling and drying of the Sahara, and the need for more research on how climate change affected civilizations in the Indus Valley and China. It was important to understand how climate change can affect civilizations and cultures. Unfortunately, after he retired as the founder and first director of the Climatic Research Unit at the University of East Anglia, his successors became fixated on the influence of CO2, rather than realizing that it plays a minor role in climate change.

In 2018, the International Union of Geological Sciences divided the Holocene, the current epoch beginning about 11,700 years ago, into three stages: the Greenlandian (11,700 to 8,300 years ago), the Northgrippian (8,300 to 4,200 years ago), and the Meghalayan (4,200 years ago to 1950). The Union recognized that a cooling at the beginning of the Meghalayan severely affected cultures and civilizations at the time. This supports the findings of Lamb. Now, further research supports his work.

A new paper published in Science finds that, over the past 240,000 years, the Sahara varied between a wet place, as Lamb showed for early in the Holocene, and the current desert, with periods of about 20,000 years. This variation indicates that the Intertropical Convergence Zone, which brings the monsoon rains, varies with regularity. Lamb suggested as much in his book.

The 2008 NIPCC report, “Nature, Not Human Activity, Rules the Climate,” edited by Fred Singer, presents data indicating that this variation may be due to changing solar activity, as suggested by the Svensmark hypothesis. Cloudiness varies with changing galactic cosmic rays, which are moderated by the strength of the solar wind and solar magnetic field. The correlation between a solar-activity proxy and Oxygen-18 from a cave in Oman, in the Intertropical Convergence Zone, is exceptionally close over a period of more than 3,000 years. (pp. 11-13)
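The strength of such a proxy match is conventionally summarized by Pearson's correlation coefficient. A self-contained sketch of the computation (the two series below are made-up stand-ins, not the Oman data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical stand-ins for a solar-activity proxy and Oxygen-18 values
solar_proxy = [1.0, 1.4, 0.9, 1.8, 2.1, 1.2, 2.5]
oxygen18    = [0.9, 1.5, 1.0, 1.7, 2.2, 1.1, 2.4]
print(f"r = {pearson_r(solar_proxy, oxygen18):.2f}")
```

An r near 1 means the two series rise and fall together, which is what an "exceptionally close" correlation amounts to quantitatively.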

Further, the new study indicates that our species evolved during a period of significant climate change, and adaptation is one of the characteristics of the human race. See links under Challenging the Orthodoxy – NIPCC, and Changing Climate.


UK Electrical Costs: Writing in the Global Warming Policy Forum, John Constable expresses concern that electricity prices are increasing significantly in the UK without any significant increase in wholesale energy costs. The big increases in costs are from energy and climate policy. Constable writes:

“Britain’s electricity suppliers are reported to be considering further increases in prices to consumers. Climate policies are largely responsible for such price increases, yet government is more than content to let private energy companies and their shareholders take the blame. Intoxicated with subsidies, the electricity sector has hitherto colluded in this obfuscation of causes, but the introduction of the domestic electricity price cap may change this situation, encouraging energy suppliers and indeed all businesses, to name government as the guilty party.”

The issue is similar to one discussed in the December 15 TWTW, based on an article by Donn Dears and data from the US Energy Information Administration (EIA). From 2005 to 2017, US electricity net generation went down by 0.5%. Yet in many states, electricity rates are going up significantly. The cause is government policies promoting wind and solar generation that is not needed and that must be stabilized by other sources. In the US, the cost of stabilization falls on those providing transmission services and is paid by all.

It is becoming increasingly clear that wind and solar are unneeded, expensive forms of electricity generation – inferior goods. During times of war, producers of inferior goods are called war profiteers. Should promoters of wind and solar be called “green war profiteers”?

See December 15, 2018, TWTW and links under Questioning European Green


Sue the Government? Article #2 is a review of “Judicial Fortitude” by Peter Wallison. It gives an indication of how difficult it is to sue government agencies in federal court, because the courts are biased toward the agencies. Facts are often irrelevant. See Article #2.


Number of the Week: 10% Less v. 50% Less: Article #1, from the Wall Street Journal, discusses how promoters overestimate returns from proposed oil and gas drilling operations. As Richard Robuck, treasurer of one oil company, said about projections of returns: ‘It’s not a science. It’s more of an art.’ [Boldface added]

The article reported that a review of some 16,000 wells operated by 29 companies found they are producing 10% less oil and gas than forecast.

Using established back-testing (hindcasting) methods, McKitrick and Christy found that over the 60-year record of weather balloon data from the tropics, the atmosphere has warmed some 50% less than the climate models predict. Yet, the USGCRP expects the public to believe its reports are based on physics when the models it uses overestimate atmospheric warming by 100%? See links under Challenging the Orthodoxy and Article #1.
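The 50% and 100% figures are the same discrepancy viewed from opposite directions: if observations come in half of what the models project, the models overstate the observed warming by a factor of two. A quick arithmetic check (the model trend is a hypothetical placeholder):

```python
model_trend = 0.30                       # hypothetical modeled C/decade
observed = model_trend * (1 - 0.50)      # observations 50% below projection
overestimate = (model_trend - observed) / observed
print(f"Observed shortfall: 50%; model overestimate: {overestimate:.0%}")  # -> 100%
```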



1. Fracking’s Secret Problem—Oil Wells Aren’t Producing as Much as Forecast

Data analysis reveals thousands of locations are yielding less than their owners projected to investors; ‘illusory picture’ of prospects

By Bradley Olson, Rebecca Elliott and Christopher M. Matthews, WSJ, Jan 2, 2019


SUMMARY: In criticizing the promotion of oil and gas wells, the reporters state:

Thousands of shale wells drilled in the last five years are pumping less oil and gas than their owners forecast to investors, raising questions about the strength and profitability of the fracking boom that turned the U.S. into an oil superpower.


The Wall Street Journal compared the well-productivity estimates that top shale-oil companies gave investors to projections from third parties about how much oil and gas the wells are now on track to pump over their lives, based on public data of how they have performed to date.


Two-thirds of projections made by the fracking companies between 2014 and 2017 in America’s four hottest drilling regions appear to have been overly optimistic, according to the analysis of some 16,000 wells operated by 29 of the biggest producers in oil basins in Texas and North Dakota.


Collectively, the companies that made projections are on track to pump nearly 10% less oil and gas than they forecast for those areas, according to the analysis of data from Rystad Energy AS, an energy consulting firm. That is the equivalent of almost one billion barrels of oil and gas over 30 years, worth more than $30 billion at current prices. Some companies are off track by more than 50% in certain regions.


The shale boom has lifted U.S. output to an all-time high of 11.5 million barrels a day, shaking up the geopolitical balance by putting U.S. production on par with Saudi Arabia and Russia. The Journal’s findings suggest current production levels may be hard to sustain without greater spending because operators will have to drill more wells to meet growth targets. Yet shale drillers, most of whom have yet to consistently make money, are under pressure to cut spending in the face of a 40% crude-oil price decline since October.


Companies whose wells appear to lag behind forecasts, according to the analysis, include Pioneer Natural Resources Co. and Parsley Energy Inc., two of the biggest oil and gas producers in the Permian Basin of West Texas and New Mexico. The Journal’s review didn’t include some leading producers, such as Exxon Mobil Corp., because they didn’t make shale-well projections.


Pioneer, Parsley and several other companies disputed the findings, saying the third-party estimates used by the Journal differ from their forecasts on key points such as the likely lifespan of shale wells.

After discussing what would be a proper metric, or standard of measurement, the reporters state:

“The production forecasts made by many companies were ‘dangerous’ because they were based on a small population of wells, and the performance of individual wells varies significantly, said Norman MacDonald, a natural-resource specialist at asset manager Invesco Ltd.

“‘Companies were able to high-grade the numbers, show those to Wall Street, and the stock price went up accordingly,’ said Mr. MacDonald, a portfolio manager who has urged shale companies to prioritize profits over production growth. ‘Geology doesn’t line up with Excel spreadsheets too well, unfortunately.’




Some companies, including major North Dakota producer Whiting Petroleum Corp., acknowledged the forecasts can be unreliable and said they were moving away from providing such estimates.


“Another North Dakota driller, Oasis Petroleum Inc., said the projections it provided in investor presentations were estimates made as it tested drilling in vast tracts, including areas it has since abandoned. ‘It’s not a science,’ said Richard Robuck, the company’s treasurer. ‘It’s more of an art.’ [Boldface added]


2. ‘Judicial Fortitude’ Review: Time for Congress to Do Its Job

Imagine a world where the legislative branch actually legislates, courts interpret laws and executive agencies faithfully execute them. Yuval Levin reviews ‘Judicial Fortitude’ by Peter J. Wallison.

By Yuval Levin, WSJ, Jan 2, 2019


SUMMARY: In reviewing “Judicial Fortitude” the editor of National Affairs states:

“The American constitutional system is out of order. The basic shape of its dysfunction has been clear for decades, though its causes may seem obscure. At the core of the problem is the emergence, over the course of a century, of a fourth branch of government neither conceived by nor desired by the framers of the Constitution: a network of administrative agencies that combine legislative, executive and judicial powers and therefore threaten the integrity of the constitutional framework and the basic rights of the American people. This fourth branch was the brainchild of the early progressives and has generally advanced the agenda of the left, so conservatives have long been wary of it.


“But conservatives often misdiagnose the process by which the administrative state has arisen. We emphasize the hyperactivity of the executive and judicial branches, and these are certainly part of the problem. But hiding in plain sight is a deeper cause: the willful underactivity of the legislative branch. In an effort to avoid hard choices and shirk responsibility, Congress enacts vague statutes that express broad goals, empower executive agencies to fill in the practical details, and leave courts to clean up the ensuing mess. The result can look like executive overreach and judicial activism, but the root of the problem is legislative dereliction.


“To see that, however, is not yet to propose a solution. Congress is derelict because its members choose to be, so what can constitutionalist reformers do about it? Peter Wallison, a conservative legal scholar and senior fellow at the American Enterprise Institute, has stepped forward with a persuasive answer, and at just the right time. His book ‘Judicial Fortitude’ argues that the dereliction of Congress is enabled by the failure of the courts to enforce the separation of powers, and so to insist that only Congress can make laws.


“Deference to the elected branches (and therefore to the will of the people) can be a judicial virtue. But in setting out the purpose of the courts, in Federalist 78, Alexander Hamilton argued that judges would also sometimes have to resist popular pressures that might deform the constitutional system. In the passage from which Mr. Wallison draws his title, Hamilton wrote that ‘it would require an uncommon portion of fortitude in the judges to do their duty as faithful guardians of the Constitution, where legislative invasions of it had been instigated by the major voice of the community.’ The particular crisis we now face can be hard to perceive because it takes the form of a delegation of legislative authority to administrative agencies. But it is no less a deformation of the system, because the courts acquiesce to this delegation.


“Mr. Wallison’s timely argument comes at the outset of what looks likely to be a period of conservative dominance of the federal courts—especially the Supreme Court. Yet conservatives have been divided over what originalist judges should prioritize. Should restraint be their watchword, so that courts allow public policy around disputed issues to be set by politicians answerable to voters? Or should originalist judges actively defend individual rights against encroachment by politicians who use public power to trample them?


“Mr. Wallison sidesteps that debate by insisting that a certain kind of judicial activism is actually a necessary precondition to judicial restraint and to any form of originalism: Judges must make sure that each branch of government does no more but also no less than the job the Constitution assigns it. ‘If Congress were permitted to delegate its exclusive legislative authority to the administrative agencies in the executive branch,’ he writes, ‘the separation of powers would be a nullity and the dangers to liberty envisioned by the Framers could become a reality.’ To avoid that, judges must insist that Congress engage in actual legislating by preventing it from handing over its power to regulatory agencies.


“This would involve, in his telling, putting real teeth behind the doctrine of nondelegation, which the courts have sometimes articulated but never really enforced. And it would involve reversing a set of Supreme Court precedents—especially Chevron v. Natural Resources Defense Council (1984) and Auer v. Robbins (1997)—that require courts to defer to administrative-agency interpretations of statutes, even when those interpretations are dangerously vague and expansive. Rather than agencies deciding what laws mean in contested cases, Mr. Wallison insists, courts must decide, and in ways that compel Congress to pass statutes that are less vague and more prescriptive. In other words, Congress should actually legislate, courts should really interpret laws and executive agencies should faithfully execute them. Imagine that.


“Giving concrete form to this ‘nondelegation doctrine’ remains an implausible prospect—as Supreme Court majorities of every flavor over two centuries have failed to hand down such opinions. But curbing the deference to regulators granted by Chevron and Auer is not only imaginable but also downright likely given the Supreme Court’s new majority. In fact, the Court last month accepted for review a case that might open a path toward the reversal of Auer in 2019. Kisor v. Wilkie involves an obscure question about Veterans Affairs disability benefits, but the Court has clearly taken it up to reconsider the wisdom of Auer, and Mr. Wallison’s argument would strongly encourage the justices to take back the role of the courts as interpreters of law in a way that could move Congress to also reclaim its own proper role.


“‘Judicial Fortitude’ is a wise, important and accessible manual for the badly needed revival of our constitutional system. The challenge Mr. Wallison describes is immense, but the appeal of his project is that it offers protections against both an overactive and an underactive Court. It sets out not policy goals but constitutional ones. It would respond to congressional dereliction not by having the Court take up an agenda Congress won’t advance but by pursuing the restoration of our constitutional architecture: Mr. Wallison offers up judicial fortitude as a way to recovering meaningful legislative responsibility.”


