By Ken Haapala, President, Science and Environmental Policy Project
Re-Blogged From WUWT
Number of the Week: 25 to 100 times greater
Disruptive Wind: The electrical grid operators provide reliable electricity with narrow tolerances. Generally, grid operators plan that power sources can be shut down for maintenance, usually in the spring and the fall. To keep costs down, grid operators desire to have maximum operating capacity in the summer (cooling) and in the winter (heating). According to the EIA’s description of electricity generating capacity:
To ensure a steady supply of electricity to consumers, operators of the electric power system, or grid, call on electric power plants to produce and place the right amount of electricity on the grid at every moment to instantaneously meet and balance electricity demand.
- In general, power plants do not generate electricity at their full capacities at every hour of the day. Three major types of generating units vary by intended usage:
- Base load generating units normally supply all or part of the minimum, or base, demand (load) on the electric power grid. A base load generating unit runs continuously, producing electricity at a nearly constant rate throughout most of the day. Nuclear power plants generally operate as base load service, because of their low fuel costs and the technical limitations on load responsive operation. Geothermal and biomass units are also often operated in base load because of their low fuel costs. Many of the large hydro facilities, several coal plants, and an increasing number of natural gas-fired generators, particularly those in combined power applications, also supply base load power.
- Peak load generating units help to meet electricity demand when demand is at its highest, or peak, such as in late afternoon when electricity use for air conditioning and heating increases during hot weather and cold weather respectively. These so-called peaking units are generally natural gas [turbine only] or petroleum fueled generators. In general, these generators are relatively inefficient and are costly to operate, but provide high-value service during peak demand periods. In some cases, pumped storage hydropower and conventional hydropower units also support grid operations by providing power during periods of peak demand.
- Intermediate load generating units comprise the largest generating sector and provide load responsive operation between base load and peaking service. The demand profile varies over time and intermediate sources are in general technically and economically suited for following changes in load. Many energy sources and technologies are used in intermediate operation. Natural gas-fired combined cycle units, which currently provide more generation than any other technology, generally operate as intermediate sources.
Additional categories of electricity generators include
- Intermittent renewable resource generators powered by wind and solar energy that generate electricity only when these resources are available (i.e., when it’s windy or sunny). When these generators are operating, they tend to reduce the amount of electricity required from other generators to supply the electric power grid.
- Electricity storage systems/facilities, including hydroelectric pumped storage, solar-thermal storage, batteries, flywheels, and compressed air systems. These systems typically use (or purchase) and store electricity that is generated during off-peak electricity demand periods (when electricity prices are relatively low), and they provide (or sell) the stored electricity during periods of high or peak electricity demand (when electricity prices are relatively high). Some facilities use electricity produced with intermittent renewable energy sources (wind and solar) when the renewable resource availability is high and provide the stored electricity when the renewable energy resource is low or unavailable. Non-hydro storage systems can also provide ancillary services to the electric power grid. Energy storage applications inherently use more electricity than they provide. Pumped-storage hydro systems use more electricity to pump water to water storage reservoirs than they produce with the stored water, and non-hydro storage systems have energy conversion and storage losses. Therefore, electricity storage facilities have net negative electricity generation balances. Gross generation provides a better indicator about the activity level of storage technologies and is provided in the data releases of the EIA-923 Power Plant Operations Report. [TWTW knows of no hydro-storage facility that operates successfully with solar or wind, only.]
- Distributed generators are connected to the electricity grid, but they primarily supply some or all of the electricity demand of individual buildings or facilities. Sometimes, these systems may generate more electricity than the facility consumes, in which case the surplus electricity is sent to the grid. Most small-scale solar photovoltaic systems are distributed generators.
- At the end of 2019, the United States had about 1,100,546 MW—or 1.1 billion kilowatts (kW)—of total utility-scale electricity generating capacity and about 23 million kW of small-scale solar photovoltaic electricity generating capacity.
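The net-negative generation balance of storage facilities described above follows directly from round-trip losses. A minimal sketch of the arithmetic in Python; the 80% round-trip efficiency is an assumed illustrative figure, not a number from the EIA text:

```python
# Pumped-storage example: electricity consumed pumping water uphill
# versus electricity generated later from the stored water.
# All figures are illustrative; actual round-trip efficiencies vary by facility.

pumping_mwh = 1000.0          # electricity used to pump water to the reservoir
round_trip_efficiency = 0.80  # assumed for illustration

generated_mwh = pumping_mwh * round_trip_efficiency   # gross generation
net_generation = generated_mwh - pumping_mwh          # always negative for storage

print(f"Gross generation: {generated_mwh:.0f} MWh")
print(f"Net generation balance: {net_generation:.0f} MWh")
```

Because the net balance is always negative, gross generation (as reported in the EIA-923 data) is the better indicator of a storage facility's activity level.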
Generating units fueled primarily with natural gas account for the largest share of utility-scale electricity generating capacity in the United States.
According to the report, the shares of utility-scale electricity generation capacity by primary energy source in 2018 were:
Natural gas: 43%; Coal: 21%; Renewables: 24% [14% nonhydroelectric and 9% hydroelectric; perhaps incorrectly, the EIA considers all hydroelectric seasonal]; Nuclear: 9%; Petroleum: 3%; Other: 0.5%.
It is the nonhydroelectric renewables that have become a political fad and a growth industry.
“Wind energy’s share of total utility-scale electricity generating capacity in the United States grew from 0.2% in 1990 to about 9% in 2019, and its share of total annual utility-scale electricity generation grew from less than 1% in 1990 to about 7% in 2019.”
“Solar energy’s share of total U.S. utility-scale electricity generation in 2019 was about 1.8%, up from less than 0.1% in 1990.” The above is from: https://www.eia.gov/energyexplained/electricity/electricity-in-the-us-generation-capacity-and-sales.php
It is important to note that the capacity factors as stated are calculated as the percentage of the nameplate capacity of the wind turbines and solar units involved that is actually delivered over a period of several years, not only when winds are at optimal velocities or the sun is shining brightly overhead. The capacity factor is also distinct from the efficiency of a particular turbine in capturing the wind, or from the maximum efficiency of a solar unit.
Both wind and solar are nondispatchable, unreliable, and must be backed up by other forms of generation. The question remains: how unreliable are they? The EIA estimates the capacity factor for offshore wind is 44% and for onshore wind 40%, about half that of combined cycle natural gas (87%), advanced coal (85%), and advanced nuclear (90%). But the simple number of 44% for offshore wind disguises what is occurring. What is the required backup to wind, assuming 99.9% reliability is demanded?
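The capacity-factor calculation described above is straightforward arithmetic: energy actually delivered divided by the energy that would have been delivered at full nameplate output over the same period. A minimal sketch with made-up figures (not real EIA or metered data):

```python
# Capacity factor = delivered energy / (nameplate capacity x hours in period).
# All numbers below are hypothetical, chosen to illustrate the calculation.

nameplate_mw = 100.0        # hypothetical offshore wind farm nameplate capacity
hours = 8760                # one year
delivered_mwh = 385_440.0   # hypothetical metered output for the year

capacity_factor = delivered_mwh / (nameplate_mw * hours)
print(f"Capacity factor: {capacity_factor:.1%}")  # prints "Capacity factor: 44.0%"
```

Note that this is an average over the whole period; as discussed below, it says nothing about how the output was distributed in time.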
Analyst Paul Homewood discusses a web site, EnergyNumbers.info, which provides much-needed information on how reliably individual offshore wind farms provide electricity to the UK, Denmark, Belgium, and Germany. For each country, these data are also shown graphically. For the UK and Denmark, the site gives the combined value of all the wind farms serving that country. The data come from official sources, such as Ofgem and Elexon for the UK. Ofgem is the government regulator for the electricity and gas markets in Great Britain. Elexon administers the balancing and settlement process for the entire wholesale electricity sector in Great Britain.
The web site appears to be run by an advocate of wind power. According to the site, the UK installed nameplate capacity at the end of 2019 was 8,542 MW and the rolling 12-month capacity factor 40.6%. Homewood found the load duration curves particularly interesting. He states:
“It shows the time distribution of capacity loads, both for individual wind farms and overall.
So, for instance, the load factor was 36.3% or more for 50% of the time, ie the median. (This arguably is a more important measure than the average load).
The curve for all windfarms is for the last five years.
If we look at extremes, we find that load is below 20% for 31% of the time, in other words below half of the average.
At the other end, output is above 80% for 12% of the time.
In other words, loading is either extremely high or extremely low for 43% of the year. This gives the lie to claims that wind power is reliable most of the time, and that output is smoothed because of the widespread geographic distribution – in other words, that the wind always blows somewhere!
In particular, it is commonly claimed that winds at sea are much less volatile than over land.”
TWTW looked further and found that over the data period, for 3% of the time the capacity factor was 89.8% or higher (about that of nuclear energy); for 36% of the time the capacity factor was 49.9% or higher; for 50% of the time, 36% or higher; and for 90% of the time, 6.4% or higher. At the extreme low end, for 99% of the time the capacity factor was 0.5% or higher.
Source: Andrew ZP Smith, ORCID: 0000-0003-3289-2237; “UK offshore wind capacity factors”. Retrieved from https://energynumbers.info/uk-offshore-wind-capacity-factors on 2020-05-22 16:36 GMT
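The load-duration figures discussed above can be reproduced from any series of capacity-factor readings by sorting them in descending order and reading off values at given fractions of the time. A minimal sketch using randomly generated data (not the actual energynumbers.info series):

```python
import random

random.seed(0)
# Hypothetical half-hourly capacity factors for one year. Real data would come
# from settlement records (e.g., Elexon for the UK); these values are synthetic.
readings = [min(max(random.gauss(0.40, 0.28), 0.0), 1.0) for _ in range(17_520)]

# Load-duration curve: sorted descending, so curve[i] is the capacity factor
# met or exceeded for roughly i/len(curve) of the time.
curve = sorted(readings, reverse=True)

def exceeded_for(fraction):
    """Capacity factor met or exceeded for the given fraction of the time."""
    return curve[int(fraction * (len(curve) - 1))]

print(f"Median (50% of the time): {exceeded_for(0.50):.1%}")
print(f"Exceeded 90% of the time: {exceeded_for(0.90):.1%}")
```

The median from such a curve (the 50% point) can differ substantially from the average capacity factor, which is Homewood's point about the 36.3% median versus the 40.6% rolling average.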
The numbers for Denmark are similar.
Although these numbers come largely from operations in the North Sea or the Irish Sea, unless contrary evidence is given, one can expect similarly unreliable performance from other offshore wind power facilities. Thus, a grid operator who desires to deliver reliable power 99.9% of the time needs backup power available equal to the total capacity of the offshore wind facilities. The cost of this backup should be part of the estimate for costs of offshore wind. It is not.
Further, those promoting offshore wind power fail to discuss unreliability. In lending, those who provide evidence that they will repay the loan on time are considered reliable, prime borrowers. Those who provide no such evidence are considered subprime. Clearly, offshore wind power is not reliable – subprime – and the promoters should be considered similar to promoters of subprime loans. See links under Energy Issues – Non-US, Energy Issues – US, and Alternative, Green (“Clean”) Energy – Storage.
Needed Questions: On her blog, Jo Nova states three questions that reporters should have asked when interviewing proponents of solar power concerning Australia:
“1. Is there any country around the world which has a high penetration of intermittent renewables and cheap electricity? Name them…
2. If renewables are so cheap, why is China secretly building more coal power plants?
3. Australian electricity wholesale costs were around $30 per MWh for years on the national grid, then we added 2 million solar panels. Shouldn’t the prices have gone down?”
The same questions apply to wind power. See links under Energy Issues – Australia and Alternative, Green (“Clean”) Solar and Wind.
Modeling for Public Policy Issues: The Right Climate Stuff team insisted that modeling used in public policy should be well tested against all physical evidence available, corrections made when needed, and that the best physical evidence available be used. The assumptions must be transparent. John Robson presents what he calls the iron law of modeling regarding climate models and the failure of modelers to present the findings that the climate is not warming alarmingly:
“The upper-end models of temperature increase, and even of the increase in CO2 meant to cause runaway temperature increases, are the ones that frighten policymakers into demanding ever-more stringent policies in the hope of getting us down to the low end of CO2 accumulation forecasts, and in consequence bring more grant money to the modelers to give the politicians more of what they want. It’s a closed circle impenetrable even to data that is readily available to the participants.
In consequence, the modelers never get around to telling policy makers that we are already at the lower bound, and all indications are that we are going to remain there, so the extra stringency isn’t needed. The problem is not in our computers but in our modelers.”
Similarly, Kevin Dayaratna discusses the failure of the Imperial College COVID-19 model used to justify lockdowns, when the evidence now clearly shows that the age groups most at risk are those 65 or older and that lockdowns are unnecessary for younger age groups. He states:
“This isn’t the first-time bad models have made their way into policy. As we discussed in our work, statistical models can be useful tools for guiding policy, but they are only as credible as the assumptions on which they are based.
“It is fundamentally important for models used in policy to be made publicly available, have assumptions clearly stated, and have their robustness to changes to these assumptions tested. Models also need to be updated as time goes on in line with the best available evidence.
“Bottom line: The Imperial College model didn’t meet any of these criteria. And sadly, its model was one of the inputs relied on as the basis for locking down two countries.” See links under Model Issues.
Transparency: Given the COVID-19 models, it is ironic that the EPA just closed the comment period on a new rule requiring transparency in the science used for public policy. Many organizations objected to the new rule, including ones using the word science in their names as well as a group of 100 law professors who called the proposal unlawful. Apparently, secrecy is vital for science to advance and transparency is unlawful. The comments by SEPP included:
Transparency is critical in applying the scientific method to incorporate new data and concepts in scientific understanding and to remove errors of the past. The scientific method requires constant testing against physical evidence as that evidence is being compiled.
As those who follow the COVID-19 controversy may realize, to create models that give realistic results requires both a solid, well tested model and solid, realistic data fitting the issue. No matter how good the model, if the data are inappropriate, the results are poor. A critical question is: How good are the numbers (measurements) in defining the issue?
After examples using data from various countries the comment continued:
No matter how good an infection model may be, using data from China would not be appropriate for the US. Yet, all too frequently modelers use inappropriate data and produce inappropriate results they claim to be meaningful. Such errors in use of data or models should not be tolerated in regulatory science any more than in medical research. Complete transparency is needed.
Some argue that personal medical records may be revealed if transparency becomes the norm. However, privacy can be protected. For example, The US Centers for Disease Control and Prevention reports “Provisional COVID-19 Death Counts by Sex, Age, and State.” Nothing is revealed that can be traced to individual personal medical records.
These data are valuable in adjusting policy to meet current needs. To protect the health of Americans, the Science and Environmental Policy Project urges the Environmental Protection Agency to adopt a policy of complete transparency in rulemaking, and to adjust rules as evidence changes, which it will in a more complex world.
See links under EPA and other Regulators on the March.
Additions and Corrections: Last week TWTW erroneously gave the chemical formula of nitrous oxide as NO2; it is actually N2O.
Number of the Week: 25 to 100 times greater: According to NOAA’s Global Monitoring Laboratory, the concentration of CO2 in the atmosphere at Mauna Loa Observatory is about 410 parts per million (ppm). By contrast, water vapor varies by region, but is about 1 to 4% of the atmosphere (10,000 to 40,000 ppm). Thus, the concentration of water vapor is about 25 to 100 times greater than the secondary greenhouse gas, carbon dioxide.
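The ratio behind the Number of the Week is simple arithmetic; a quick check using the figures as stated above:

```python
# Concentrations as given: CO2 about 410 ppm at Mauna Loa;
# water vapor roughly 1% to 4% of the atmosphere (10,000 to 40,000 ppm).
co2_ppm = 410.0
water_vapor_ppm_low = 10_000.0
water_vapor_ppm_high = 40_000.0

low_ratio = water_vapor_ppm_low / co2_ppm    # about 24
high_ratio = water_vapor_ppm_high / co2_ppm  # about 98

print(f"Water vapor exceeds CO2 by roughly {low_ratio:.0f} to {high_ratio:.0f} times")
```

Rounding those factors gives the headline figure of roughly 25 to 100 times.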
In the 1979 Charney Report, water vapor was considered to be the most important greenhouse gas, greatly amplifying the modest influence of carbon dioxide, and greatly increasing the greenhouse effect.
NOAA measures water vapor at various altitudes at Boulder, Colorado; Lauder, New Zealand; and Hilo, Hawaii. Yet, in its annual discussion of the Annual Greenhouse Gas Index (AGGI) in spring 2020, NOAA presenters bring up carbon dioxide (CO2), nitrous oxide (N2O), methane (CH4), and 15 minor halogenated gases. It is as if water vapor no longer exists as a greenhouse gas.
According to Remote Sensing Systems, a competitor of University of Alabama in Huntsville (UAH) in presenting satellite based atmospheric temperature trends:
“Over 99% of the atmospheric moisture is in the form of water vapor, and this vapor is the principal source of the atmospheric energy that drives the development of weather systems on short time scales and influences the climate on longer time scales.
“Water vapor is a critical component of Earth’s climate systems. It is the Earth’s primary greenhouse gas, trapping more heat than carbon dioxide. Movement of water vapor, and its associated latent heat of vaporization, is also responsible for about 50% of the transport of heat from the tropics to the poles. The movement of water vapor is also important for determining the amount of precipitation a region receives.”
Based on their web sites, it appears that no US government laboratories are monitoring and reporting changes in atmospheric water vapor. Thus, they ignore the most important greenhouse gas, which keeps the landmasses from freezing at night.
1. Cut Through the Fog of Coronavirus War
The CDC needs to streamline and publish clinical data to help doctors on the front lines.
By Scott Gottlieb, WSJ, May 17, 2020
TWTW Summary: The resident fellow at the American Enterprise Institute and former commissioner of the Food and Drug Administration states:
“The Centers for Disease Control and Prevention made its first definitive statement last week describing a rare but disturbing condition in children related to Covid-19. Doctors in the U.K. first reported in April a spike in previously healthy children presenting with features similar to another rare condition, Kawasaki disease, whose symptoms include rash and fever and, later in its progression, inflammation of blood vessels.
“This is a reminder of how much we don’t know about Covid-19. We’ve learned a lot over the past two months as Covid-19 became an epidemic, with 1.5 million Americans diagnosed and more than 90,000 dead. New insights have translated into improved care. This knowledge is saving lives and will be especially useful if infections flare up again.
“Yet such data on patients isn’t being streamlined and shared with the public quickly. There are shortcomings in our ability to access the electronic systems designed to help glean facts from clinical data. CDC hasn’t been filling its traditional role of promptly publishing medical findings that may help doctors care for patients. Instead, a lot of this information is being passed around social media, by email or even through word of mouth. It’s trial and error on a global scale.”
After giving specific examples of patients not receiving needed care, the doctor continues:
“These findings have come in the setting of an epidemic that has overwhelmed health-care systems. Doctors who usually conduct careful clinical research are battling to preserve lives while risking their own health. Much of the information has been passed along in short research notes, or even on Twitter. A little of it has flowed from the CDC. But to date there’s been no systematic reporting from CDC on collected clinical experience, even with hundreds of thousands of American patients hospitalized, tens of thousands of dead, and many more suffering.
“Some serious efforts are under way. The Food and Drug Administration is trying to use electronic health-reporting systems and real-world evidence to derive insights on experience with patients and how different drug interventions may be helping or hurting. FDA is using innovative methods to allow clinicians to analyze their own records and merge the results to try to answer questions about, say, the right moment to intubate a patient or give a drug.
“But CDC and its highly capable career experts must be elevated to play their role in reporting on these findings in real time, so medical practice can be quickly informed of the latest information about Covid patients. Whatever the reasons, CDC has spoken infrequently and with more reticence than is customary in public-health crises. Policy makers may worry that prescriptive guidance and descriptive clinical findings will fuel public fears or constrain a reopening.
“The opposite is true. The more information about how to reduce the risk of spread and the severity of sickness, the more lives that can be saved, and the more comfortable Americans will feel about starting to resume normal life.”
2. It’s Deadly to Fear the Emergency Room
‘Shelter in place’ doesn’t apply if you’re having a heart attack.
By Yves Duroseau, WSJ, May 19, 2020
TWTW Summary: The Chairman of emergency medicine at Lenox Hill Hospital writes:
“When my hospital discharged its 1,000th Covid-19 patient, it was cause for celebration—a testament to the great work done by selfless health-care workers during this difficult time. Yet that same day, I walked around our emergency room and noticed that it had only about half the volume of patients we normally see on a Thursday. Where did all our patients go?
“It is a question shared by many emergency departments in New York City. At Lenox Hill we’ve seen the number of patients complaining of chest pain drop by nearly a quarter, as well as a 39% decrease in patients diagnosed with an acute stroke. Sadly this doesn’t mean New Yorkers are getting healthier. The Centers for Disease Control and Prevention reports that between March 11 and May 2, the city had 5,293 excess deaths not identified as confirmed or probable Covid-19 deaths. Excess mortality means deaths beyond what would normally be expected for that period, based on historical data, suggesting that New Yorkers are dying at an alarming rate from diseases that don’t necessarily have much to do with the virus.
“You hardly have to be a doctor to come up with a hypothesis for why New Yorkers are choosing to stay out of emergency rooms. Patients are afraid that going to the ER will put them at a greater risk of contracting the coronavirus.
“The other day, for example, we were contacted by a man suffering from daylong, intense abdominal pain. After some effort, we finally convinced him to present to our ER, where we diagnosed and treated his early appendicitis. He went home the same day. Had he let anxiety get the better of him and stayed home, he likely would have suffered a much worse case of appendicitis—potentially fatal.
“Such cases unfold every day, and many don’t have a happy ending. Committed to sheltering in place, serious about social distancing, and fearful of contagion, too many people avoid seeking medical care. This is a public-health disaster, one we rarely discuss, even though it claims many lives, particularly of the elderly, immigrants, minorities and other vulnerable communities.
Recognizing that this crisis is serious, he concludes:
“Our emergency rooms are safe. Staying away when you need care is dangerous.”