Some Dilemmas of Climate Simulations

By Wallace Manheimer – Re-Blogged From WUWT

A great deal of the recommendation that the world should modify its energy infrastructure to combat climate change, at a cost of tens to hundreds of trillions of dollars, is based on computer simulations. While this author is not what is called a ‘climate scientist’, a great deal of science is interdisciplinary, and experience from one field can often fertilize another. That is the spirit in which this opinion is offered. The author has spent a good part of his more than 50-year scientific career developing and using computer simulations to model complex physical processes. Based on that experience, he offers his own brief opinion on what computer simulations can and cannot do, along with some examples. He sees 3 categories of difficulty in computer simulations, ranging from mostly accurate to mostly speculative, and makes the case that the climate simulations are the most speculative.

First, consider the case where the configuration and the equations describing the complex system are known, and where the system can be modified in known ways to test the accuracy of the simulation in a variety of circumstances. An example of this is the development of the gyrotron and gyroklystron, powerful high-frequency microwave tubes. They work by giving an electron beam energy transverse to the guiding magnetic field, and tapping this transverse energy to produce the radiation. In the last 2 decades of the 20th century, I was involved in the theoretical, simulation, and experimental parts of this effort. I participated in writing one of the first simple simulation schemes capable of examining the nonlinear behavior of the electron beam coupled to the radiation (1). The simulation schemes became more and more complex and complete as the project developed. The project and the simulations were successful. Figure (1) is a plot of the power and efficiency of a gyroklystron as calculated by simulation, together with the experimental results, taken from (2). Gyrotrons are now used for heating fusion plasmas, and gyroklystrons have been used to power W-band (94 GHz) radars. Figure (2) is a photo of the 10 kW average power, 94 GHz radar WARLOC at the Naval Research Laboratory. Prior to gyroklystrons, the highest average power 94 GHz radar was about 1 watt.

Figure 1: Numerical simulations of the power and efficiency of a 4 cavity gyroklystron and the actual measurements. Clearly the simulations are reasonably accurate.
Figure 2: The NRL WARLOC radar
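For readers curious what an early ‘simple simulation scheme’ for the beam-wave interaction might look like, here is a minimal sketch in the spirit of the normalized slow-time-scale models found in the gyrotron literature. It is emphatically not the code of reference (1); the equation form, the Gaussian field profile, and all parameter values here are illustrative assumptions.

```python
# Toy normalized gyrotron interaction model (illustrative only).
# Each beam electron carries a normalized transverse momentum p obeying
#   dp/dzeta = -i*(Delta + |p|^2 - 1)*p + i*F*f(zeta)
# where Delta is the detuning, F the field amplitude, and f(zeta) an
# assumed Gaussian axial profile of the RF field.
import numpy as np

def efficiency(F=0.10, Delta=0.5, L=15.0, n_e=64, n_z=2000):
    zeta = np.linspace(0.0, L, n_z)
    dz = zeta[1] - zeta[0]
    f = np.exp(-((zeta - L/2.0)/(L/4.0))**2)   # assumed Gaussian RF profile
    theta0 = 2.0*np.pi*np.arange(n_e)/n_e      # uniform initial gyrophases
    p = np.exp(1j*theta0)                      # |p| = 1: rotational energy in

    def rhs(p, fz):
        return -1j*(Delta + np.abs(p)**2 - 1.0)*p + 1j*F*fz

    for k in range(n_z - 1):                   # 4th-order Runge-Kutta in zeta
        fmid = 0.5*(f[k] + f[k+1])
        k1 = rhs(p, f[k])
        k2 = rhs(p + 0.5*dz*k1, fmid)
        k3 = rhs(p + 0.5*dz*k2, fmid)
        k4 = rhs(p + dz*k3, f[k+1])
        p = p + (dz/6.0)*(k1 + 2*k2 + 2*k3 + k4)

    # Fraction of the beam's transverse energy given up to the wave,
    # averaged over the ensemble of initial phases.
    return 1.0 - np.mean(np.abs(p)**2)

for F in (0.05, 0.10, 0.15):
    print(f"F = {F:.2f}  ->  efficiency = {efficiency(F=F):+.3f}")
```

Scanning the field amplitude exhibits the essential nonlinearity such schemes were built to capture: the efficiency can rise with field strength, peak, and then fall as the electrons overbunch.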

Second, let us say that the configuration is well known and can be varied in a controlled way, but the relevant physics is not. In my career I have spent a considerable amount of time working on laser fusion. The largest effort is the National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory (LLNL) in Livermore, California. The lab built a gigantic laser, costing billions, which produces about a megajoule of ultraviolet light energy in a pulse lasting several nanoseconds. Figure (3) is a photo of the laser bays, in a dedicated building roughly half a kilometer in each direction. Their target configuration places the millimeter-size target in the middle of a cylindrical can called a hohlraum. The laser is focused on the interior walls of the hohlraum, producing X-rays which impinge on the target. The target compresses and heats, so that fusion reactions can take place. LLNL did many computer calculations and simulations of the process and concluded that the fusion energy should be ten times the laser light energy, i.e. Q = 10 (3,4). When they did the experiment, they found, to their dismay, that Q ~ 10^-2 on a good day. Their estimate missed by more than a factor of 1000! What went wrong? The problem is that there is a great deal of physics going on in the target which is not well understood: instabilities driven by the interaction of the laser with the plasma, instabilities of the fluid implosion, generation of small numbers of extremely energetic electrons and ions, generation of intense magnetic fields, unpredicted mixing of various regions of the target, expansion of the hohlraum plasma, all in a very complex and rapidly changing geometry. Don't get me wrong; LLNL is a first-class lab, which hires only the very best scientists and computer engineers. The problem is that the physics is too complex, too unforgiving. In the 8 years since the end of their ignition campaign, when they had hoped to achieve Q = 10, they have only succeeded in getting Q up to ~2-3%; better, but still nowhere near what they had promised in 2010.
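Since Q is simply the ratio of fusion energy out to laser energy in, the size of the miss is easy to quantify. A quick check using the numbers quoted above:

```python
# Q = (fusion energy out) / (laser energy in); numbers as quoted in the text.
E_laser = 1.0e6           # ~1 MJ of ultraviolet light on target
Q_predicted = 10.0        # pre-shot prediction from simulations (3,4)
Q_measured = 1.0e-2       # "on a good day" during the ignition campaign

print(f"Predicted fusion yield: {Q_predicted*E_laser/1e6:.1f} MJ")
print(f"Measured fusion yield:  {Q_measured*E_laser/1e3:.1f} kJ")
print(f"Miss factor: {Q_predicted/Q_measured:.0f}x")   # -> 1000x
```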

It is worth noting that there was considerable opposition to the construction of NIF (5), based on 2 suppositions: first, that LLNL could not get the laser to work; and second, that the target and its interaction with the laser were much too complex, with too many aspects of the physics uncertain and impossible to model with their simulations. The skeptics were half wrong and half right. LLNL got the laser to work well (years late and billions over the original budget), and this is certainly a significant achievement. However, the skeptics were correct in their assessment that the target would not fuse.

Figure 3: The laser bays of the NIF laser at LLNL

Now let us go to the third level of difficulty: cases where neither the configuration nor the basic physics needed for a simulation is well known. Add to that the fact that, unlike with NIF, it is not possible to repeat experiments in any controlled way. When this author first got to NRL, the problem we were all working on was to figure out the plasma processes going on in a nuclear-disturbed upper atmosphere, i.e. High Altitude Nuclear Explosions (HANE). When one or more nuclear bombs explode in the upper atmosphere, the atmosphere becomes an ionized plasma. With the strong flows generated there, the dynamics are governed not by conventional fluid mechanics but by the nonlinear behavior of plasma instabilities. The key was to work out a theory of these extremely complicated processes by particle simulations as well as analytic theory. These results would then be fed into the other computer codes used for radar, tracking, communication, electronic warfare, etc. An unclassified version of our conclusions is in (6). Was our theory correct? Who knows. Will anyone ever do the experiment? Hopefully not. And if the experiment is done and the theory does not work, will there be an opportunity to continue to work on it and improve it? Nobody will be alive to do it.

This author makes the case that the climate computer simulations, on which governments have spent billions, are of this third level of complexity, if not an even higher one. The motivation for them is that we are presumed to be in a ‘climate crisis’ which is an existential threat to humanity. For these climate simulations, the basic physical system is almost certainly much more complicated than the LLNL laser target configuration, for which the simulations failed; the scientists at Livermore at least know what they are starting out with and can vary it. First, there is the fact that these climate simulations involve the entire earth. To do the simulations, the earth is broken up into a discrete grid, both around the surface and vertically. Since the computer can only handle a finite number of grid points, the points are dozens of miles apart horizontally. But many important atmospheric effects occur on a much smaller scale. For instance, cities are usually warmer than the surrounding countryside, so the computer calculation has to somehow approximate this effect, since it occurs on a space scale smaller than the grid spacing. Then there is a great deal of uncertain physics. The effect of clouds is not well understood. What effects arise from the deep ocean, the effect of CO2 on water vapor, aerosols and their composition and size, cosmic rays, the turbulent ocean, the turbulent atmosphere, uncertain initial conditions, variations in solar radiation, and solar flares? What impurities are in the atmosphere, and where and when were they here or there? All these effects are handled by a method called ‘tuning’ the code. When I was an undergraduate, we used to call these ‘fudge factors’. For years this was kept under wraps by the various code developers; more recently some of it has been discussed in the scientific literature (7). The different modelers use very different tunings. It does not inspire confidence, at least with this experienced scientist.
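To make ‘tuning’ concrete, here is a toy zero-dimensional energy-balance model, vastly simpler than any real climate code. The emissivity value and the feedback coefficient below are illustrative assumptions, chosen only to show that two tunings which both roughly reproduce today's ~288 K surface temperature can disagree sharply about future warming:

```python
# Toy zero-dimensional energy-balance "climate model" (illustrative only).
# The planet balances absorbed sunlight against infrared emission:
#     S*(1 - albedo)/4 = eps(T) * sigma * T**4
# eps is an effective emissivity lumping together greenhouse gases, clouds,
# water vapor, etc.  It is not known from first principles, so it is *tuned*
# so that the model reproduces roughly 288 K today.  The feedback term
# (eps falling as T rises, a stand-in for a water-vapor feedback) is an
# assumption made up for this sketch.
SIGMA = 5.67e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
S, ALBEDO = 1361.0, 0.30   # solar constant and planetary albedo
ABSORBED = S*(1.0 - ALBEDO)/4.0

def equilibrium_T(forcing=0.0, feedback=0.0):
    """Solve ABSORBED + forcing = eps(T)*SIGMA*T^4 by fixed-point iteration."""
    T = 288.0
    for _ in range(200):
        eps = 0.612*(1.0 - feedback*(T - 288.0))   # 0.612 tuned to ~288 K
        T = ((ABSORBED + forcing)/(eps*SIGMA))**0.25
    return T

# Two tunings that both roughly reproduce today's climate...
for fb in (0.0, 0.01):
    T0 = equilibrium_T(0.0, fb)
    T2 = equilibrium_T(3.7, fb)   # ~3.7 W/m^2: canonical CO2-doubling forcing
    print(f"feedback={fb:.2f}: today {T0:.1f} K, "
          f"doubled CO2 {T2:.1f} K, rise {T2-T0:.1f} K")
```

Both tunings match the present climate to within a fraction of a degree, yet in this toy case the projected warming differs by roughly a factor of three to four. That is the sense in which a tuned model can fit the past without constraining the future.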

With that introduction to climate simulations, let's see how good these simulations are at predicting the earth's rising temperature. Figure (4) is a slide presented in congressional testimony by John Christy of the University of Alabama in Huntsville, along with his caption (8). Also on the graph are the actual temperature measurements. Christy and Roy Spencer are the two scientists there mainly responsible for obtaining and archiving the space-based earth temperature measurements. The fact that he prepared this data for congressional testimony indicates to me that he took extraordinary care in setting it up. I certainly believe it. Notice that nearly all of the curves vastly overestimate the temperature increase. As Yogi Berra said, “It’s tough to make predictions, especially about the future”. The curves cannot be making random errors; if they were, about as many would underestimate the temperature rise as overestimate it. Hence it seems that a bias toward a temperature increase is built into the models. Possibly there is other, more recent literature showing perfect agreement from 1975 to 2020, and then predicting disaster in 30 years. But how credible would that be in light of Christy’s viewgraph? It brings to mind John von Neumann’s famous quip: “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk”. Figure 5, from (9), is a plot of the elephant wiggling his trunk, done with 5 parameters.
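That reasoning can be made quantitative with a simple sign test. Assuming, generously, that the model runs on Christy's chart are independent and equally likely to run hot or cold against the satellite record, the chance of them all running hot is vanishingly small. The N = 32 below is an assumption based on the commonly cited CMIP5 comparison:

```python
# Sign test: under the null hypothesis that each model's error is an
# independent coin flip (hot vs. cold), the probability that all N
# overshoot is (1/2)**N.  N = 32 is assumed here, not taken from (8).
N = 32
p_all_hot = 0.5**N
print(f"P(all {N} models run hot by chance) = {p_all_hot:.1e}")  # ~2e-10
```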

Figure 4: John Christy's viewgraph on the accuracy of numerical models in predicting temperature rise, which he presented in congressional testimony.
Figure 5: The elephant wiggling its trunk
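For readers who want to reproduce Figure 5, here is a short sketch following the four-complex-parameter construction of reference (9): a truncated Fourier series for the body, with the fifth parameter supplying the eye and the trunk wiggle. The coefficient bookkeeping follows a widely circulated Python implementation of that paper and should be treated as illustrative rather than canonical:

```python
# The von Neumann elephant, after Mayer, Khairy & Howard (reference (9)).
import numpy as np
import matplotlib.pyplot as plt

p1, p2, p3, p4 = 50 - 30j, 18 + 8j, 12 - 10j, -14 - 60j
p5 = 40 + 20j   # the trunk-wiggling (and eye) parameter

def fourier(t, C):
    # Real parts are cosine coefficients, imaginary parts sine coefficients.
    f = np.zeros_like(t)
    for k, c in enumerate(C):
        f += c.real*np.cos(k*t) + c.imag*np.sin(k*t)
    return f

def elephant(t):
    Cx = np.zeros(6, dtype=complex)
    Cy = np.zeros(6, dtype=complex)
    Cx[1], Cx[2] = p1.real*1j, p2.real*1j   # sine terms for x
    Cx[3], Cx[5] = p3.real, p4.real         # cosine terms for x
    Cy[1] = p4.imag + p1.imag*1j
    Cy[2], Cy[3] = p2.imag*1j, p3.imag*1j
    x = np.append(fourier(t, Cx), -60.0)    # final point: the eye
    y = np.append(fourier(t, Cy), p5.imag)
    return x, y

t = np.linspace(0, 2*np.pi, 1000)
x, y = elephant(t)
plt.plot(y, -x, '.')        # rotate so the elephant stands upright
plt.axis('equal')
plt.show()
```

Changing p5 moves the trunk: five parameters, and he wiggles.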

Yet, at least in part due to these faulty simulations, there is a large move afoot to switch our energy supply from coal, gas, oil, and nuclear to solar and wind. The cost of this would be astronomical. Figure (6) is a graph of the worldwide cost of the switch to solar and wind over the last few years (10). The costs are in the neighborhood of half a trillion dollars per year. Yet according to (10), this is not nearly enough. Here is a quote:

While climate finance has reached record levels, action still falls far short of what is needed under a 1.5 °C scenario. Estimates of the investment required to achieve the low-carbon transition range from USD 1.6 trillion to USD 3.8 trillion annually between 2016 and 2050

Figure 6: Global climate finance, 2013 to 2018
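The cumulative implication of that quoted range is worth spelling out, since it is where totals like the ~$50-100T mentioned below come from:

```python
# Cumulative cost implied by the quote: annual range carried over 2016-2050
# (~34 years assumed; the quote does not say whether both endpoints count).
low, high = 1.6e12, 3.8e12      # USD per year, from the quote above
years = 2050 - 2016             # 34 years
print(f"Low:  ${low*years/1e12:.0f} trillion")   # ~$54T
print(f"High: ${high*years/1e12:.0f} trillion")  # ~$129T
```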

Of course, this author realizes that there is a place for computer simulations in earth science, just as in every other science. But is it really worth this kind of societal effort, ~$50-100T, to attempt an energy transformation which probably is not even possible (11,12) and likely is not even needed (13), especially when the case for it rests, at least in part, on faulty computer simulations? Furthermore, since windmills and solar panels take up a great deal of land and use a tremendous amount of materials (concrete, steel, rare earths...), how sure can we be that the enormous effects of this transformation would be environmentally beneficial? Where are the simulations that examine these aspects? Perhaps the climate modelers should have a little more humility and a little less hubris and certainty, in the face of the transformation they are telling society to make. This costs real money!

It is interesting that, as Christy points out, there is one curve that got it about right: the Russian model! In 1995, in the Yeltsin era, I took an 8-month sabbatical as a visiting professor in the physics department of Moscow State University. I learned there that Russia has had a very strong, independent scientific tradition dating at least from the time of Peter the Great, who set up the Russian Academy of Sciences. Even during the Communist era, the Academy was as independent of party control as any organization there could be. So how could the Russian modelers have gotten it right when all the western models got it wrong?

My answer perhaps descends into speculation and might be judged frivolous, but it seems to the author well worth recording. In the United States and the west, we do not arrest or execute dissident scientists, as the Russians did under the worst abuses of Stalin. However, we do punish dissident scientists in other ways: we simply cut off or deny their funding. In fact, most vocal skeptics are retired or emeritus; they do not have to worry about their next grant. In my April 2020 essay in Forum on Physics and Society, https://www.aps.org/units/fps/newsletters/202004/media.cfm, I listed 9 expert skeptical scientists in the area of climate science (several in the NAS). Except for Spencer and Christy, who perform an indispensable service for NASA and NOAA, I believe none are able to get any funding for their research. In fact, none even seem to be able to publish in the standard scientific literature; they use blogs. I know of one expert at an Ivy League university, an NAS member, who expressed skepticism of the standard dogma (14). He was in charge of a large project in biophysics, which was abruptly canceled (15). He then stopped being a public skeptic, and told me in 2015 that he had done so. Was his project terminated because of his climate heresies? Who knows, but it happened at about the time his climate stance gained publicity.

It is unlikely Russia has the same worry about climate change that we do.  Perhaps Russian scientists do not have to ‘tune’ their codes to obtain politically correct results.  BTW, if anyone is interested in my experience in Russia, I wrote a diary as a pdf file.  Email me and I will send it to you.

To conclude, computer simulations are a vital and powerful scientific (and societal) tool. But in utilizing them we should be cognizant of the fact that the ‘tunings’ we do, and the physics uncertainties we approximate, are weak links in the chain, and a chain is only as strong as its weakest link. If these tunings allow the simulation to properly reproduce the known data, that does not mean it will continue to do so as new data comes in. Remember the elephant. We should be especially cognizant of the fact that these ‘tunings’ might well be chosen to please sponsors. And of course, we should never forget GIGO.
