By Ken Haapala, President, The Science and Environmental Policy Project
Brought to You by www.SEPP.org
Bureaucratic Science: TWTW has used the term bureaucratic science, which may have confused some of its readers. Bureaucratic science occurs when a government entity, or a similar organization, charged with applying the best science possible, drifts from its purpose and institutes policies and procedures (methodology) that are inconsistent with its mission. The brightest, most competent, and most conscientious people may be involved; education level does not matter. Bureaucratic science can be considered a subset of Group Think, ably discussed by Christopher Booker and presented in last week’s TWTW.
The investigation into the Challenger space shuttle disaster revealed how bureaucratic science slowly displaced NASA’s principal mission of human safety in space exploration. The following discussion is based largely on the observations of Nobel Laureate Richard Feynman, who served on the Rogers Commission investigating the disaster, and on “The Challenger Accident,” a chapter in a NASA publication on the history of the Marshall Space Flight Center.
The discussion of the 25th Shuttle mission on the website of NASA’s Kennedy Space Center goes through the launch sequence on January 28, 1986, and is based on the film recording of the event. It states that after 37 seconds the Challenger encountered several high-altitude wind shear conditions. “The wind shear created forces on the vehicle with relatively large fluctuations.” At 59 seconds a small flame appeared on one of the Solid Rocket Boosters (SRB); the flame breached the external tank containing liquid fuel, subsequently resulting in the explosion. The discussion concludes with:
“The Explosion [sic] 73 seconds after liftoff claimed crew and vehicle. Cause of explosion was determined to be an O-ring failure in right SRB. Cold weather was a contributing factor.”
Unfortunately, this conclusion ignores the bureaucratic science involved in the launch and NASA’s disregard of significant problems in the design of the joints in the Solid Rocket Motor (SRM), which are discussed in the sources mentioned above.
The solid rocket motor (SRM) of the Shuttle was assembled from several segments, creating joints that required seals to prevent leaks of the high-temperature, high-pressure propulsion gases. The model used in the design was the well-tested and highly successful Titan III rocket. However, the SRM was much larger and was designed to be reused after being refurbished. The differences in the design of the joints between the Titan III and the SRM were significant.
Ideally, when the rocket sections are assembled the result is a cylinder with a straight-line axis through the center, and when the rocket is launched the cylinder and the straight-line axis should be maintained. If a straight line is not maintained, the joint is said to rotate. As a fail-safe precaution against leakage of high-pressure gases in the case of joint rotation, two O-rings (mentioned in the quote above) were used to seal the joints.
As Feynman demonstrated convincingly during a hearing of the Rogers Commission investigation, the O-ring material lost elasticity (resilience) when chilled by ice water. Cold weather was a contributing factor to the explosion, but not the primary cause. The primary cause was a design flaw that allowed the joint to rotate, moving the rocket section slightly off-center, which, in turn, made proper functioning of the O-rings critical. The O-rings failed when cold, and their failure masked the underlying design flaw.
As both Feynman and the authors of “The Challenger Accident” state, the design flaw was revealed early in testing. The O-rings suffered significant erosion during both cold-weather and warm-weather tests. Had the joints been designed properly, O-ring erosion should not have occurred. The contractor applied various patches, such as different types of putty in the joints, to reduce the erosion, but the safety issue remained. As Feynman wrote:
“…The O-rings of the Solid Rocket Boosters were not designed to erode. Erosion was a clue that something was wrong. Erosion was not something from which safety can be inferred.
“There was no way, without full understanding, that one could have confidence that conditions the next time might not produce erosion three times more severe than the time before. Nevertheless, officials fooled themselves into thinking they had such understanding and confidence, in spite of the peculiar variations from case to case. A mathematical model was made to calculate erosion. This was a model based not on physical understanding but on empirical curve fitting.”
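Feynman’s warning about curve fitting can be made concrete with a small sketch. The erosion figures below are hypothetical, chosen only for illustration: a straight line fitted to a handful of past measurements encodes no physics (joint rotation, temperature, pressure), so extrapolating it to the next flight conveys a confidence the data cannot support.

```python
# Hypothetical illustration of Feynman's point: an empirical fit to past
# O-ring erosion says nothing about what conditions the next flight brings.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x, in closed form."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Invented erosion depths (inches) from five past flights.
flights = [1, 2, 3, 4, 5]
erosion = [0.010, 0.012, 0.011, 0.014, 0.013]

a, b = fit_line(flights, erosion)
predicted_next = a + b * 6  # extrapolate to flight 6

# The fit "predicts" mild erosion next flight, but nothing in the model
# rules out erosion, as Feynman put it, "three times more severe."
print(f"trend: {b:+.4f} in/flight, flight-6 prediction: {predicted_next:.4f} in")
```

The curve fit looks reassuring precisely because it averages away the case-to-case variation that was the real warning sign.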
As missions continued with partial damage to critical components, but without drastic failure, the management of Marshall Space Flight Center became more self-assured about the safety of the Shuttle. A sense of conformity, Bureaucratic Science, set in, with O-ring erosion considered normal.
[Marshall project manager] “Mulloy, in testimony to a Senate committee, best summarized the circumstances. ‘We at NASA,’ he said, ‘got into a group-think about this problem. We saw it, we recognized it, we tested it, and we concluded it was an acceptable risk. … When we started down that road we were on the road to an accident.’ [footnote 126] Indeed the teleconference was a classic case of ‘groupthink,’ a form of decision making in which group cohesion overrides serious examination of alternatives. Top level Marshall and Thiokol officials, believing the joint was safe, rationalized bad news from experts, and refused to consider contingency plans. Recognizing consensus among superiors, some subordinate engineers exercised self-censorship. Consequently, participants in the teleconference failed to communicate and find useful ways to analyze the risks of cold temperature. [footnote 127] Two personnel experts, who conducted management seminars at NASA from 1978 to 1982, argued that groupthink was not unique to Marshall and was inherent in NASA culture. They believed that internal career ladders, homogeneous professional backgrounds, masculine management styles, political pressures to downplay problems, and over-confidence resulting from a history of success had produced a quest for harmony that was often dysfunctional.” [footnote 128] p. 377 [Boldface added]
In this instance, NASA’s Bureaucratic Science, group think, led to failure in NASA’s mission and death.
As Feynman wrote in the introduction of his paper:
“It appears that there are enormous differences of opinion as to the probability of a failure with loss of vehicle and of human life. The estimates range from roughly 1 in 100 to 1 in 100,000. The higher figures come from the working engineers, and the very low figures from management. What are the causes and consequences of this lack of agreement? Since 1 part in 100,000 would imply that one could put a Shuttle up each day for 300 years expecting to lose only one, we could properly ask ‘What is the cause of management’s fantastic faith in the machinery?’”
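Feynman’s closing arithmetic is easy to verify: a claimed failure rate of 1 in 100,000 per flight means roughly 100,000 launches per expected loss, and at one launch per day that is close to three centuries of daily flights.

```python
# Sanity check of Feynman's figure: 1-in-100,000 per flight implies
# ~100,000 launches per expected loss of vehicle and crew.
claimed_flights_per_failure = 100_000   # management's estimate
launches_per_year = 365                 # one launch per day

years_per_expected_loss = claimed_flights_per_failure / launches_per_year
print(round(years_per_expected_loss))   # about 274 years, i.e. roughly 300
```

The working engineers’ figure of 1 in 100 would instead imply an expected loss within the first few months of daily launches, which is why Feynman called management’s faith in the machinery “fantastic.”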
As discussed in last week’s TWTW, John Christy has repeatedly shown serious design flaws in virtually all the global climate models, demonstrating their failure to properly estimate atmospheric temperatures, where the greenhouse gas effect occurs. The failure of the climate establishment to recognize its failing efforts will be discussed more fully in the next TWTW.
Apollo veterans Hal Doiron and Jim Peacock of the Right Climate Stuff Team provided valuable guidance on sources for preparing the above. However, it is solely the product of Haapala. See links under Challenging the Orthodoxy and https://science.ksc.nasa.gov/shuttle/missions/51-l/mission-51-l.html
Quote of the Week. For a successful technology, reality must take precedence over
public relations, for nature cannot be fooled. – Richard Feynman
Good Climate News: Writing in the Wall Street Journal, Holman Jenkins discusses the Equilibrium Climate Sensitivity (ECS) to carbon dioxide (CO2) and an article appearing in the journal Nature stating that the standard estimate of the UN Intergovernmental Panel on Climate Change (IPCC) may be too high. Jenkins notes that the standard estimate was developed by the National Academy of Sciences, National Research Council, in 1979. [ECS is the amount of warming occurring with a doubling of CO2, once equilibrium is obtained. An important question is: was the globe ever in equilibrium?]
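For readers unfamiliar with how ECS is applied, the sketch below uses the standard logarithmic approximation for CO2 warming (an assumption commonly used in the literature, not taken from the Jenkins article): equilibrium warming ≈ ECS × log2(C/C0), so a doubling yields, by definition, exactly the ECS value.

```python
import math

def equilibrium_warming(ecs_per_doubling, c0_ppm, c_ppm):
    """Equilibrium warming (deg C) for a CO2 rise from c0_ppm to c_ppm,
    under the standard logarithmic-forcing approximation."""
    return ecs_per_doubling * math.log2(c_ppm / c0_ppm)

# With an assumed ECS of 3.0 deg C per doubling (illustrative only):
print(equilibrium_warming(3.0, 280, 560))             # a doubling gives 3.0
print(round(equilibrium_warming(3.0, 280, 420), 2))   # a 50% rise gives less
```

The logarithmic form is why the same CO2 increment matters less at higher concentrations, and why the whole debate turns on the value of the ECS multiplier itself.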
“This 40-year lack of progress is no less embarrassing for being thoroughly unreported in the mainstream press. The journal Nature, where the new study appears, frankly refers to an ‘intractable problem.’ In an accompanying commentary, a climate scientist says the issue remains ‘stubbornly uncertain.’” [Boldface added.]
Of course, the uncertainty is not expressed by the IPCC or the US Global Change Research Program (USGCRP). Jenkins asserts that the new study may be part of a trend. But the new study is based on measurements of surface temperatures, a design flaw for estimating the effect of CO2 on temperatures. The globe is over 70% water, and no one has demonstrated a credible theory of how the greenhouse effect occurs in the oceans.
In the electronic version of the article, Haapala added the comment: The greenhouse effect occurs in the atmosphere, where the greenhouse gases are. Yet, the IPCC and most climate scientists are trying to estimate the greenhouse effect using surface temperature data, which are influenced by many other human activities, such as urbanization, farming, irrigation, etc. Repeatedly, John Christy of the University of Alabama in Huntsville has testified to the House Science Committee that the average of the climate models used by the IPCC, and others, overestimates the warming in the atmosphere by 2.5 to 3 times. No wonder these scientists have an “intractable problem.” They are looking in the wrong place for an answer. See Article # 1.
Spencer’s New Model: As discussed in the last TWTW, Roy Spencer posted his new one-dimensional model using surface temperature data (ocean data to a depth of 2,000 meters). This seemed strange, since Spencer is the co-discoverer of the method of calculating temperatures from satellite data. His model showed an Equilibrium Climate Sensitivity (ECS) of 1.54 °C, at the very low end of what the IPCC publishes. This week he experiments further with the model, then reveals his purpose:
“Of course, this whole exercise assumes that, without humans, the climate system would have had no temperature trend between 1765-2100. That is basically the IPCC assumption — that the climate system is in long-term energy equilibrium, not only at the top-of atmosphere, but in terms of changes in ocean vertical circulation which can warm the surface and atmosphere without any TOA radiative forcing.
“I don’t really believe the ‘climate stasis’ assumption, because I believe the Medieval Warm Period and the Little Ice Age were real, and that some portion of recent warming has been natural. In that case, the model climate sensitivity would be lower, and the model warming by 2100 would be even less.”
Spencer is exposing a false assumption of the IPCC: without human intervention there would be little or no change to the enormously complex climate system. See link under Challenging the Orthodoxy.
Russian Influence: Students of propaganda may be amused and/or bored with the efforts to tie the Trump Administration to Russian efforts to influence the recent election. Russians have attempted to influence US elections since the 1930s, and the US has tried to influence elections in other countries before that. Now we are seeing reports covering attempts by Russians to influence decisions on pipelines, hydraulic fracturing, genetically modified organisms (foods), etc.
This is not particularly surprising, because the US has long been a major exporter of agricultural products and is becoming a major player in the international markets for oil and natural gas. Russia’s government is heavily dependent on oil and gas production. US production is causing OPEC and Russia to cut back on production, severely damaging the ability of these governments to meet their budgets. One can call the propaganda a symptom of a trade war. See links under The Political Games Continue.
Very, very scary – The Cube is Coming? No doubt to alert oil producers in Canada, the Financial Post had an amusing headline on an article discussing the possibility of a new form of oil and gas extraction in the Permian shale basin in West Texas. The basin has multiple layers of hydrocarbon-producing rock interspersed with impervious rock. Multiple horizontal wells, drilled into different formations at different levels, may be a cost-effective way to develop this basin. Compared with traditional methods, surface infrastructure costs can be kept down while production is increased. Will Russian media interests play up the threat? See link under Oil and Natural Gas – the Future or the Past?
Number of the Week: 750,000 pairs. Traces of guano stains seen in NASA satellite imagery led to the discovery of over 1.5 million Adélie penguins (over 750,000 pairs) in the Danger Islands of Antarctica, where they were not known to exist. Of course, some people claimed that a decline in penguins was being caused by global warming / climate change. Strange, how people will go to the ends of the earth to discover a colony of birds showing up in satellite imagery, yet ignore temperatures calculated from satellite data, which show a general cooling over Antarctica.
Reporting scientific progress would require admitting uncertainties.
By Holman Jenkins, Jr. WSJ, Feb 27, 2018
Link to letter: Emergent constraint on equilibrium climate sensitivity from global temperature variability
By Cox, Huntingford & Williamson, Nature Letter, Jan 18, 2018
SUMMARY: The article is discussed in the This Week Section above. The author concludes with:
“Leaving climate sensitivity uncertainties out of the narrative certainly distorts the reporting that follows. Take a widely cited IPCC estimate that “with 95% certainty,” humans are responsible for at least half the warming observed between 1951 and 2010.
“This sounds empirical and is reported as such. In fact, such estimates are merely derivative of how much warming should have taken place if the standard climate sensitivity estimate is correct. Imagine predicting an 8 before letting the dice fly, then assuming an 8 must have come up because that’s what your model predicted.
“To be clear, the U.S. and other governments have done increasingly minute and exacting work in cataloging actual climate and weather patterns. We argue here they have grossly underperformed in sorting out cause and effect. And since the press’s job is to hold institutions accountable, the output of government climate science is so poor partly because of the abysmally bad job done by reporters on the climate beat.
“No better example exists than their gullibility in the face of U.S. government press releases pronouncing the latest year the “warmest on record.” Scroll down and the margin of error cited in the government’s own press release would lead you rightly to suspect that a clear trend is actually hard to find in recent decades despite a prodigious increase in CO2 output.
“Well, guess what? Taking account of the actual temperature record and its tiny variations is exactly what the Exeter group and others have been doing in order to make progress on the 40-year problem of climate sensitivity. And they are finding less risk of a catastrophic outcome than previously thought.”
2. Germany Paves Way for Diesel Ban, Dealing Blow to Auto Makers
Court allows major German cities to ban older diesel vehicles
By William Boston, WSJ, Feb 27, 2018
SUMMARY: The author states:
“A German court on Tuesday rang the death knell for certain diesel cars in a blow to the country’s flagship auto industry, which could now be forced to spend billions to upgrade or replace millions of cars.
“In a landmark decision, a federal court ruled that German cities Stuttgart and Düsseldorf could ban diesel vehicles from their streets as a way to reduce pollution, rejecting an appeal of a lower-court ruling. Ultimately, the ruling clears the way for any German city to ban older diesel vehicles and could inspire similar measures in cities around Europe, analysts said.
“The closely watched ruling on Tuesday by the Federal Administrative Court in Leipzig is likely to accelerate the demise of a technology that German auto makers have long promoted as combining solid driving performance with fuel economy and low greenhouse-gas emissions, but which has been largely discredited in the wake of the emissions-cheating scandals at Volkswagen and other car makers.”
After discussing assumed health effects and presenting current differences in re-sale values of diesel vs. gasoline vehicles, the author states:
“Diesel vehicles in Germany certified between 2009 and 2014 under the so-called Euro 5 emissions standard, numbering about 5.9 million vehicles, wouldn’t be subject to a ban until September 2019, the Leipzig court ruled. Vehicles certified later under the Euro 6 regime would likely be exempt from any bans.
“But all other diesels in the country certified before 2009, nearly 7 million vehicles in operation today, could be subject to bans at any time. Those individuals hit hardest will be owners of the oldest diesels on the road that don’t adhere to Euro 6. That segment accounts for 2.7 million of the 15 million diesels in use, according to the German Automotive Industry Association.”
The author concludes with efforts of various municipal governments to ban diesels, and local and business issues.