By Richard D. Patton – Re-Blogged FROM WUWT
When examining studies of wind-power penetration, there seem to be two types of papers. In one type, wind penetration of up to 20% is difficult but doable; in the other, wind penetrations of 50-60% are quite easy, and renewable penetrations of up to 90-100% are claimed.
As an example of the first type, consider the Western Wind and Solar Integration Study performed by GE under contract to NREL (National Renewable Energy Laboratory). One of their conclusions is the following:
“There appears to be minimal stress on system operations at up to 20% wind penetration. Beyond that point, the system’s operational flexibility is stretched, particularly if the rest of WECC is also aggressively pursuing renewables.”
The GE/NREL paper invites skepticism because the authors freely admit that their reserves are stretched and that interconnections provide extra reserves. They take up a 20/20 case: 20% wind in both the area and its interconnections. They also examine the case of 30% wind in an area, but not the 30/30 case, where both the area and its interconnections have 30% wind penetration. Yet the 30/30 case is the actual situation if wind is to supply 30% everywhere. Thus, 20% is doable but 30% is much more doubtful. Note that the 30% case also included 5% penetration from solar energy, for a total of 35%.
For an example of a paper that claims high wind penetration, consider Examining the Feasibility of Converting New York State’s All-purpose Energy Infrastructure to One Using Wind, Water and Sunlight. This paper’s plan relies on 50% wind power, and its only sources of dispatchable power are hydroelectric and concentrating solar power (CSP), which together amount to only 15.5% of the power. There is no fossil fuel energy anywhere in the system, and no biomass energy.
The methodology for this paper is given in another paper: A Monte Carlo approach to generator portfolio planning and carbon emissions assessments of systems with large penetrations of variable renewables. That methodology treats each hourly wind speed as a random variable drawn from the mean and standard deviation of the wind speed for that date, combined with a cyclic allowance for the fact that the wind is stronger at night than during the day. It does so to simplify the problem and reduce the number of variables, so that the design becomes manageable. Let’s take a look at this Monte Carlo approach. The authors state:
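As a rough illustration only (the function name, values, and diurnal term below are my own assumptions, not taken from the paper), the described approach might be sketched like this:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulated_hourly_wind(day_mean, day_std, diurnal_amp=1.0):
    """Sketch of the described method: draw each hour's wind speed from
    the day's mean and standard deviation, plus a cycle that is stronger
    at night. All names and values here are illustrative assumptions."""
    hours = np.arange(24)
    # A cosine peaking near midnight stands in for the night-time strengthening.
    diurnal = diurnal_amp * np.cos(2 * np.pi * hours / 24)
    draws = rng.normal(loc=day_mean, scale=day_std, size=24)
    return np.clip(draws + diurnal, 0.0, None)  # wind speed cannot be negative

speeds = simulated_hourly_wind(day_mean=8.0, day_std=2.0)
print(speeds.round(1))
```

Note that each hour’s draw here is independent of the previous hour’s, which is precisely the property at issue in what follows.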
“The Monte Carlo simulation relies on the assumption that meteorological processes, demand forecasts and forced outages can be approximated as Markov chain processes.”
The authors then build all of their analysis on this assumption. The question then becomes: can wind speed be approximated as a Markov chain process? The reason for asking is that if you asked a math or statistics professor what the most common error students make, the answer would be failing to verify your assumptions. If your assumption is not true, all of your work is wasted.
To understand what might be wrong with the assumption, we turn to Introduction to Markov Chains by Anders Tolver. His first example is of a child who is taken to daycare if he is well and stays home if he is ill. This yields a series of well/ill states, depending on whether the child went to daycare each day. Here is what the author has to say about that series:
“It is clear that many random processes from real life do not satisfy the assumption imposed by a Markov chain. When we want to guess whether a child will be ready for daycare tomorrow, it probably inﬂuences our prediction whether the child has only been ill today or whether it has been ill for the past 5 days. This suggests that a Markov chain might not be a reasonable mathematical model to describe the health state of a child.”
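To make the quoted point concrete, here is a minimal two-state Markov chain for the well/ill example (the transition probabilities are invented for illustration, not taken from Tolver’s notes):

```python
import numpy as np

# Two-state Markov chain for the daycare example: well = 0, ill = 1.
# Transition probabilities are illustrative assumptions.
P = np.array([[0.9, 0.1],   # well -> well, well -> ill
              [0.3, 0.7]])  # ill  -> well, ill  -> ill

rng = np.random.default_rng(3)
state = 0  # start well
history = [state]
for _ in range(14):
    # The next state depends ONLY on the current state: five consecutive
    # ill days give exactly the same forecast as a single ill day.
    state = rng.choice(2, p=P[state])
    history.append(state)

print(["well" if s == 0 else "ill" for s in history])
```

That one-step memory is the Markov property the quoted passage questions for real-life processes.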
A memoryless random process never looks back, so its past cannot predict its future. In statistical terms, this means that the expected value of its autocorrelation is zero for any non-zero time shift. At a time shift of zero, the autocorrelation is 1.0 by definition.
I tried several autocorrelations of NREL wind data, using the Tower M2 data because it was the most accessible. The autocorrelation at a 1-hour time shift for wind measured hourly at the NREL M2 tower was 0.787 in January 2018; in June it was 0.735. This illustrates that wind speed does not behave as a memoryless random variable: its value depends strongly on its recent past. Since that memory is exactly what the simulation discards, Markov chain simulations of this kind are not valid.
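The autocorrelation test described above can be reproduced on any hourly series with a few lines of NumPy (the series below are synthetic stand-ins, not the NREL M2 measurements):

```python
import numpy as np

def lag_autocorrelation(x, lag):
    """Sample autocorrelation of x at the given time shift (lag >= 1),
    normalized so that the lag-0 value would be 1.0."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

rng = np.random.default_rng(0)

# Memoryless (white) noise: lag-1 autocorrelation near zero.
noise = rng.normal(size=10_000)

# A persistent series (a random walk): strong memory, lag-1 value near 1.
persistent = np.cumsum(rng.normal(size=10_000))

print(lag_autocorrelation(noise, 1), lag_autocorrelation(persistent, 1))
```

Applied to real hourly wind data, values around 0.7-0.8 at a 1-hour shift put wind firmly on the persistent side.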
The significance of this can be seen in the graph below. Both the orange line and the blue line have the same mean and standard deviation. The orange line represents a day of strong winds that slowed at midday; it has an autocorrelation of 0.91. The blue line has an autocorrelation of -0.076 (nearly zero).
Note that the random wind cannot model whether the wind is persistent, because it has no memory: it is as likely to be high or low at any time of the day.
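Two series with identical mean and standard deviation but very different memory, like the orange and blue lines above, are easy to construct (the AR(1) coefficient and the mean/spread values below are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 240  # ten days of hourly values

def acf1(x):
    """Lag-1 sample autocorrelation."""
    x = x - x.mean()
    return float(x[:-1] @ x[1:] / (x @ x))

# "Blue": memoryless noise around an arbitrary mean wind speed.
blue = 8.0 + 2.0 * rng.normal(size=n)

# "Orange": an AR(1) process -- each hour remembers the previous hour.
phi = 0.9  # assumed persistence coefficient
orange = np.empty(n)
orange[0] = 0.0
for t in range(1, n):
    orange[t] = phi * orange[t - 1] + rng.normal()
# Rescale so both series share exactly the same mean and standard deviation.
orange = 8.0 + 2.0 * (orange - orange.mean()) / orange.std()

print(acf1(blue), acf1(orange))
```

Despite identical summary statistics, only the persistent series can represent a wind that stays strong for hours at a stretch.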
When multiple independent random series are averaged together, the average converges to the mean and its variance converges to zero. Here is an example.
As the number of independent random series averaged together increases, the deviations decrease: the lows become higher and the highs become lower. Translated to wind farms, this underlies the insistence that problems can be fixed by adding wind farms at different locations. If each wind farm is represented by a different random series, averaging will smooth out the result and make the system appear more reliable than it actually is. The effect of storms and calms is lost. The intermittency is lost, because as more wind farms are added, the combined power output converges to a constant. This is not true in practice.
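The smoothing artifact can be demonstrated directly: averaging n independent random "wind farm" series shrinks the spread roughly as 1/sqrt(n). The numbers below are arbitrary illustrative units, not real farm output:

```python
import numpy as np

rng = np.random.default_rng(1)
hours = 5_000

def spread_of_average(n_farms):
    """Std dev of the hour-by-hour average of n independent random series."""
    farms = rng.normal(loc=8.0, scale=2.0, size=(n_farms, hours))
    return float(farms.mean(axis=0).std())

for n in (1, 4, 16, 64):
    print(n, round(spread_of_average(n), 2))
# The combined output flattens toward a constant: storms and calms, which
# hit real wind farms across a whole region at once, cannot appear.
```

Real wind farms are not independent in this way, which is why the artifact flatters the modeled system.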
This conclusion is not changed if the analyst makes an effort to add correlation between the winds at different locations: the Monte Carlo simulation is still treating everything as a Markov chain. To repeat Dr. Tolver, “It is clear that many random processes from real life do not satisfy the assumption imposed by a Markov chain.” Wind is one of those random processes that does NOT satisfy that assumption.
Any paper that uses this methodology, such as those cited above, is faulty and gives a very unrealistic view of the value of wind power, because the technique itself smooths out the flow of wind power so that it approaches a constant value as more wind farms are added. This is an artifact of faulty math and does not exist in the real world.
Mark Z. Jacobson, the lead author of the paper on making New York 100% renewable, is a professor of Civil and Environmental Engineering and director of the Atmosphere/Energy program at Stanford University. According to his CV:
“In 2009, he coauthored a plan, featured on the cover of Scientific American, to power the world for all purposes with wind, water, and sunlight (WWS). In 2010, he appeared in a TED debate rated as the sixth all-time science and technology TED talk. In 2011, he cofounded The Solutions Project, a non-profit that combines science, business, and culture to educate the public about science based 100% clean-energy roadmaps for 100% of the people. From 2011-2015, his group developed individual WWS energy plans for each of the 50 United States, and by 2017, for 139 countries of the world.
The individual state roadmaps were the primary scientific justifications for California and Hawaii laws to transition to 100% clean, renewable electricity by 2045, Vermont to transition to 75% by 2032, and New York to transition to 50% by 2030. They were also the primary scientific justifications behind United States House Resolution H.Res. 540, House Bill H.R. 3314, House Bill H.R. 3671, Senate Resolution S.Res. 632, Senate Bill S.987, and the platforms of three presidential candidates and a major political party in 2016, all calling for 100% clean, renewable energy in the U.S.”
It seems likely that all of the high-penetration models are influenced by him and share a similar methodology. Since they are all faulty, the best that can be said is that high wind penetrations might be possible. As of today, they are unproven and unlikely, given experience in places like Germany and South Australia.
It also seems likely that the plans proposed and passed to reduce the carbon footprint in places like California and New York will fail badly and drive up utility bills. They are based on this faulty methodology and hence are very unlikely to succeed.