Sea Level and Effective N

By Willis Eschenbach – Re-Blogged From WUWT

Over in the Tweeterverse, I said I wasn’t a denier, and I challenged folks to point out what they think I deny. Brandon R. Gates took up the challenge by claiming that I denied that sea level rise is accelerating. I replied:

Brandon, IF such acceleration exists it is meaninglessly tiny. I can’t find any statistically significant evidence that it is real. HOWEVER, I don’t “deny” a damn thing. I just disagree about the statistics.

Brandon replied:

> IF such acceleration exists

It’s there, Willis. And you’ve been shown.

> it is meaninglessly tiny

When you have a better model for how climate works, then you can talk to me about relative magnitudes of effects.

[As a digression, I’m one of the few folks with a better model, supported by a number of observations, for how the climate works. I say that the long-term global temperature is regulated by emergent phenomena to within a very narrow range (e.g. ± 0.3°C over the entire 20th Century). And I have no clue what that has to do with whether or not I can talk to him about “relative magnitude of effects”.

But as I said … I digress … ]

Brandon accompanied his unsupported claims with a graph of air temperature, not sure why … my guess is that he grabbed it in haste and mistook it for sea level. Easy enough to do, I’ve done worse.

Now, I’ve written recently about sea level rise in a post called “Inside The Acceleration Factory”. However, there is a deeper problem with the claims about sea levels. This is that the sea level data is very highly autocorrelated.

“Autocorrelated” in respect to a time series, like say sea level or temperature, means the present is correlated with the past. In other words, autocorrelation means that hot days are more likely to be followed by hot days, and cold days to be followed by cold days, than hot days following cold or vice versa. And the same is true of hot and cold months, or hot and cold years. When such autocorrelation extends over long time periods, years or decades, it is often called “long term persistence”, or “LTP”.
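To make the idea concrete, here is a minimal sketch (mine, not from the post) comparing the lag-1 autocorrelation of plain white noise with that of a persistent series. The data are synthetic and purely illustrative: the persistent series carries most of each value forward into the next, so its autocorrelation comes out near 0.9, while the white noise comes out near zero.

```python
import numpy as np

def lag1_autocorrelation(x):
    """Correlation between a series and itself shifted by one time step."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

rng = np.random.default_rng(1)

# White noise: each value is independent of the one before it.
white = rng.standard_normal(1000)

# A persistent (AR(1)) series: each value mostly carries over the previous one.
persistent = np.zeros(1000)
for t in range(1, 1000):
    persistent[t] = 0.9 * persistent[t - 1] + rng.standard_normal()

print("white noise lag-1 autocorrelation: %+.2f" % lag1_autocorrelation(white))
print("persistent  lag-1 autocorrelation: %+.2f" % lag1_autocorrelation(persistent))
```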

And trends are very common in datasets that exhibit LTP. Another way of putting this is captured in the title of a 2005 paper by Cohn and Lins in Geophysical Research Letters, called “Nature’s Style: Naturally Trendy”. This is quite accurate. Natural datasets tend to contain trends of various lengths and strengths, due to the existence of long term persistence (LTP).

And this long-term persistence (LTP) raises serious problems when you are trying to determine whether the trend of a given climate time series is statistically significant. To elucidate this, let me walk through the Abstract of “Naturally Trendy”, quoting it a piece at a time. It starts as follows:

Hydroclimatological time series often exhibit trends.

True. Time series of river flow, rainfall, temperature, and the like have trends.

While trend magnitude can be determined with little ambiguity, the corresponding statistical significance, sometimes cited to bolster scientific and political argument, is less certain because significance depends critically on the null hypothesis which in turn reflects subjective notions about what one expects to see.

Let me break that down a bit. Over any given time interval, every weather-related time series, whether it is temperature, rainfall, or any other variable, is in one of two states.

Going up, or

Going down.

So the relevant question for a given weather dataset is never “is there a trend”. There is, and we can measure the size of the trend.

Here’s the relevant question; is a given trend an UNUSUAL trend, or is it just a natural fluctuation?

Now, we humanoids have invented an entire branch of math called “statistics” to answer this very question. We’re gamblers, and we want to know the odds.

It turns out, however, that the question of an unusual trend is slightly more complicated. The real question is, is the trend UNUSUAL compared to what?

Plain old bog-standard statistical mathematics answers the following question—is the trend UNUSUAL compared to totally random data? And that is a very useful question. It is also very accurate for truly random things like throwing dice. If I pick up a cup containing ten dice, and I turn it over and I get ten threes, I’ll bet big money that the dice are loaded.
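For scale, the odds of that happening with ten fair dice are easy to check; a quick back-of-the-envelope calculation (not in the original):

```python
# Probability that ten fair dice all come up showing threes.
p = (1 / 6) ** 10
print(p)          # about 1.7e-08, i.e. roughly 1 chance in 60 million
```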

HOWEVER, and it’s a big however, what about when the question is, is a given trend unusual, not compared to a random time series, but compared to random autocorrelated time series? And particularly, is a given trend unusual compared to a time series with long-term persistence (LTP)? Their Abstract continues:

We consider statistical trend tests of hydroclimatological data in the presence of long-term persistence (LTP).

They applied a variety of trend tests to random datasets which exhibit LTP, to see how well the tests perform.

Monte Carlo experiments employing FARIMA models indicate that trend tests which fail to consider LTP greatly overstate the statistical significance of observed trends when LTP is present.

In simplest terms, regular statistical tests that don’t consider LTP falsely indicate significant trends when the trends are in fact just natural variations. Or to quote from the body of the paper,

More important, as Mandelbrot and Wallis [1969b, pp. 230–231] observed, “[a] perceptually striking characteristic of fractional noises is that their sample functions exhibit an astonishing wealth of ‘features’ of every kind, including trends and cyclic swings of various frequencies.” It is easy to imagine that LTP could be mistaken for trend.

This is a very important observation. “Fractional noise”, meaning noise with LTP, contains a variety of trends and cycles which are natural and inherent in the noise. But these trends and cycles don’t mean anything. They appear, have a duration, and disappear. They are not fixed cycles or permanent trends. They are a result of the LTP, and are not externally driven. Nor are they diagnostic: the presence of what appears to be a twenty-year cycle cannot be assumed to be a constant feature of the data, nor can it be used as a means to predict the future. It may just be part of Mandelbrot’s “astonishing wealth of features”.
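The paper’s Monte Carlo experiments use FARIMA models; as a much cruder stand-in, the sketch below (my own, purely illustrative) generates approximate fractional noise by FFT filtering with a high Hurst exponent and runs an ordinary least-squares trend test on it. The noise has no trend built in, so a properly calibrated 5% test should flag only about 5% of the series as significant; a naive test that ignores the LTP flags far more than that.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def fractional_noise(n, hurst, rng):
    """Rough fractional Gaussian noise via FFT filtering: white noise shaped
    so its power spectrum falls off roughly as f ** -(2H - 1)."""
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                          # dodge division by zero at f = 0
    amplitude = freqs ** (-(2 * hurst - 1) / 2.0)
    phases = rng.standard_normal(len(freqs)) + 1j * rng.standard_normal(len(freqs))
    series = np.fft.irfft(amplitude * phases, n)
    return (series - series.mean()) / series.std()

def naive_trend_p_value(y):
    """Ordinary least-squares trend test that ignores autocorrelation entirely."""
    n = len(y)
    t = np.arange(n)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum((t - t.mean()) ** 2))
    return 2 * stats.t.sf(abs(slope / se), df=n - 2)

# Count how often the naive test calls a trend-free LTP series "significant".
trials = 200
false_alarms = sum(naive_trend_p_value(fractional_noise(600, 0.9, rng)) < 0.05
                   for _ in range(trials))
print(f"naive test flags {false_alarms / trials:.0%} of trend-free LTP series as significant")
```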

The most common way to deal with the issue of LTP is to use what is called an “effective N”. In statistics, “N” represents the number of data points. So if we have, say, ten years of monthly data, that’s 120 months, so N equals 120. In general, the more data points you have, the stronger the statistical conclusions … but when there is LTP the tests “greatly overstate the statistical significance”. And by “greatly”, as the paper points out, using regular statistical methods can easily overstate significance by some 25 orders of magnitude.

A common way to fix that problem is to calculate the significance as though there were actually a much smaller number of data points, a smaller “effective N”. That makes the regular statistical tests work again.

Now, I use the method of Koutsoyiannis to determine the “effective N”, for a few reasons.

First, it is mathematically derivable from known principles.

Next, it depends on the exact measured persistence characteristics, both long and short term, of the dataset being analyzed.

Next, as discussed in the link just above, I independently discovered and tested the method in my own research, only to find out that …

… the method actually was first described by Demetris Koutsoyiannis, a scientist for whom I’ve always had the greatest respect. He’s cited several times in the “Naturally Trendy” paper. So I was stoked when he commented on my post that he was the originator of the method, because that meant I actually did understand the subject.

With all of that as prologue, let me return to the question of sea level rise. There are a few reconstructions of sea level rise. The main ones are by Jevrejeva, and by Church and White, and also the satellite TOPEX/JASON data. Here’s a graph from the previous post mentioned above, showing the Church and White tide station data.

Now, I pointed out in my other post how it is … curious … that starting at exactly the same time as the satellite record started in 1992, the trend in the Church and White tide gauge data more than doubled.

And while that change in trend is worrisome in and of itself, there’s a deeper problem. The aforementioned “effective N” is a function of what is called the “Hurst Exponent”. The Hurst Exponent is a number between zero and one that indicates the amount of long-term persistence. A value of one-half means no long-term persistence. Hurst exponents below one half indicate anti-persistence (hot followed by cold etc.), and values above one half indicate the existence of long-term persistence (hot followed by hot etc.). The nearer the Hurst Exponent is to one, the more LTP the dataset exhibits.

And what is the Hurst Exponent of the Church and White data shown above? Well, it’s 0.93, way up near one … a very, very high value. I ascribe this in part to the fact that any global reconstruction is the average of hundreds and hundreds of individual tide records. When you do large-scale averaging it can amplify long-term persistence in the resulting dataset.
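Willis says above that he uses the method of Koutsoyiannis for the persistence calculations, and the post does not spell out how the 0.93 was obtained. For readers who just want a feel for how a Hurst exponent gets estimated at all, here is one standard textbook estimator, the aggregated-variance method; it is a sketch of the general idea, not necessarily the calculation behind the 0.93 figure.

```python
import numpy as np

def hurst_aggregated_variance(x, min_block=4):
    """Estimate the Hurst exponent by the aggregated-variance method: for a
    series with Hurst exponent H, the variance of block averages scales as
    (block size) ** (2H - 2), so H falls out of a log-log regression."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, variances = [], []
    m = min_block
    while m <= n // 4:
        k = n // m                                  # number of whole blocks
        block_means = x[:k * m].reshape(k, m).mean(axis=1)
        sizes.append(m)
        variances.append(block_means.var(ddof=1))
        m *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(variances), 1)
    return 1.0 + slope / 2.0

# Hypothetical usage, given monthly sea level anomalies as a 1-D array:
#   H = hurst_aggregated_variance(monthly_sea_level)
```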

And what is the effective N, the effective number of data points, of the Church and White data? Let’s start with “N”, the actual number of data points (months in this case). In the C&W sea level data, N is 1608 months.

Next, effective N (usually indicated as “Neff”) is equal to:

N, number of datapoints, to the power of ( 2 * (1 – Hurst Exponent) )

And 2 * (1 – Hurst Exponent) is 0.137. So:

Effective N “Neff” = N ^ (2 * (1 – Hurst Exponent))

= 1608 ^ 0.137

= 2.74

In other words, the Church and White data has so much long-term persistence that effectively, it acts like there are only three data points.
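The arithmetic is easy enough to check in a couple of lines, using the N and Hurst exponent quoted above:

```python
N = 1608                  # months in the Church and White record, per the post
H = 0.93                  # Hurst exponent quoted above

n_eff = N ** (2 * (1 - H))
print(round(n_eff, 2))    # about 2.8 with H rounded to 0.93; the post's 2.74
                          # evidently comes from the unrounded Hurst exponent
```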

Now, are those three data points enough to establish the existence of a trend in the sea level data? Well, almost, but not quite. With an effective N of three, the p-value of the trend in the Church and White data is 0.07. This is just above the threshold that in the climate sciences is considered statistically significant, namely a p-value of less than 0.05. And if the effective N were four instead of three, the trend would indeed be statistically significant at a p-value less than 0.05.
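The post doesn’t spell out exactly how that 0.07 was computed, so don’t read the following as Willis’s calculation. It is just a sketch of one common, rough way to fold an effective N into a trend test: inflate the standard error of the OLS slope by sqrt(N / Neff) and refer the t-statistic to Neff − 2 degrees of freedom (floored at 1 in the sketch).

```python
import numpy as np
from scipy import stats

def trend_p_value_with_neff(y, n_eff):
    """OLS trend test with a crude LTP correction: the slope's standard error
    is inflated by sqrt(N / Neff), and the t-statistic is referred to a
    t-distribution with Neff - 2 degrees of freedom (floored at 1)."""
    n = len(y)
    t = np.arange(n)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se_naive = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum((t - t.mean()) ** 2))
    se_adjusted = se_naive * np.sqrt(n / n_eff)
    return 2 * stats.t.sf(abs(slope / se_adjusted), df=max(n_eff - 2.0, 1.0))

# Hypothetical usage with a monthly sea level series and the Neff from above:
#   p = trend_p_value_with_neff(monthly_sea_level, 2.74)
```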


However, if you only have three data points, that’s not enough to even look at whether the results are improved by adding an acceleration term to the equation. The problem is that with the additional acceleration term, that’s three tunable parameters for the least squares calculation (intercept, trend, and acceleration) and only three data points. That means there are zero degrees of freedom … no workee.
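A tiny three-point example (with invented numbers) shows why: a quadratic has three coefficients, so with three points it passes through them exactly and there is nothing left over to judge the fit with.

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0])            # three made-up "effective" data points
y = np.array([1.3, 2.1, 4.0])

coeffs = np.polyfit(t, y, 2)             # quadratic: acceleration, trend, intercept
residuals = y - np.polyval(coeffs, t)

print(np.round(residuals, 12))           # essentially [0. 0. 0.]
print(len(y) - len(coeffs))              # residual degrees of freedom: 0
```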

So … do I “deny” that sea levels are accelerating in some significant manner?

Heck, no. I deny nothing. Instead, I say we don’t have the data we’d need to determine if sea level is accelerating.

Is there a solution to the problem of LTP in datasets? Well, yes and no. There are indeed solutions, but the climate science community seems determined to ignore them. I can only assume that this is because many claims of significant results would have to be retracted if the statistical significance were to be calculated correctly. Once again, from the paper “Nature’s Style: Naturally Trendy”:

In any case, powerful trend tests are available that can accommodate LTP.

It is therefore surprising that nearly every assessment of trend significance in geophysical variables published during the past few decades has failed to account properly for long-term persistence.

Surprising indeed, particularly bearing in mind that the “Naturally Trendy” paper was published 14 years ago … and the situation has not gotten better since then. LTP is still rarely accounted for properly.

To return to the paper, the authors say:

These findings have implications for both science and public policy.

For example, with respect to temperature data there is overwhelming evidence that the planet has warmed during the past century. But could this warming be due to natural dynamics? Given what we know about the complexity, long-term persistence, and non-linearity of the climate system, it seems the answer might be yes.

I’m sure that you can see the problems that such statistical honesty would cause for far too much of mainstream climate science …

Best regards to all, including to Brandon R. Gates, whose claim inspired this post,

