Why You Shouldn’t Draw Trend Lines on Graphs

By Kip Hansen – Re-Blogged From WUWT

What we call a graph is more properly referred to as “a graphical representation of data.”  One very common form of graphical representation is “a diagram showing the relation between variable quantities, typically of two variables, each measured along one of a pair of axes at right angles.”

Here at WUWT we see a lot of graphs — all sorts of graphs of many different data sets.  Here is a commonly shown graph offered by NOAA, taken from a piece at Climate.gov called “Did global warming stop in 1998?” by Rebecca Lindsey, published on September 4, 2018.

[Image: NOAA graph from the Climate.gov article, two panels of global average surface temperature with trend lines drawn over the data]

I am not interested in the details of this graphical representation — the whole thing qualifies as “silliness”.  The vertical scale is in degrees Fahrenheit and the entire change over the 140 years shown is about 2.5 °F, or roughly 1.5 °C.  The interesting thing about the graph is the drawing of “trend lines” on top of the data to convey to the reader something about the data that the author of the graphic wants to communicate.  This “something” is an opinion — it is always an opinion — it is not part of the data.

The data is the data.  Turning the data into a graphical representation (all right, I’ll just use “graph” from here on….) has already injected opinion and personal judgement into the presentation through the choice of start and end dates, vertical and horizontal scales and, in this case, the shading of a 15-year period at one end.  Sometimes the decisions about vertical and horizontal scale are made by software — not by rational humans — causing even further confusion and sometimes gross misrepresentation.
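As a quick illustration of that last point, here is a minimal matplotlib sketch, using invented stand-in numbers rather than any real temperature record, showing how the choice of vertical scale alone changes the impression the same data gives:

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented stand-in data: 140 years of a slowly rising, noisy series
rng = np.random.default_rng(42)
years = np.arange(1880, 2020)
values = 0.01 * (years - 1880) + rng.normal(0, 0.3, years.size)  # ~1.4 units of total rise

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Left: the software's default tight autoscaling makes the rise dominate the picture
ax1.plot(years, values)
ax1.set_title("Default (tight) vertical scale")

# Right: the same data on a wide fixed vertical scale looks nearly flat
ax2.plot(years, values)
ax2.set_ylim(-10, 10)
ax2.set_title("Wide fixed vertical scale")

plt.tight_layout()
plt.show()
```

Same numbers, two very different impressions; neither panel adds or removes anything from the data itself.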

Anyone who cannot see the data clearly in the top graph without the aid of the red trend line should find another field of study (or see their optometrist).  The bottom graph has been turned into a propaganda statement by the addition of five opinions in the form of mini-trend lines.

Trend lines do not change the data — they can only change the perception of the data.  Trends can be useful at times (add a big “maybe” here, please) but they do nothing for the graphs above from NOAA other than attempt to denigrate the IPCC-sanctioned idea of “The Pause”, reinforcing the desired opinion of the author and her editors at Climate.gov (who, you will notice from the date of publication, are still hard at it hammer-and-tongs, promoting climate alarm). To give Rebecca Lindsey the tiniest bit of credit, she does write “How much slower [ the rise was ] depends on the fine print: which global temperature dataset you look at”….   She certainly has that right.  Here is Spencer’s UAH global average lower tropospheric temperature:

[Image: Spencer’s UAH global average lower tropospheric temperature graph]

One doesn’t need any trend lines to be able to see The Pause, which runs from the aftermath of the 1998 Super El Niño to the advent of the 2015-2016 El Niño.  This illustrates two issues.  First, drawing trend lines on graphs adds information that is not part of the data set.  Second, for any scientific concept there is more than one set of data — more than one measurement — and it is critically important to know “What Are They Really Counting?”, the central point of which is:

So, for all measurements offered to us as information, especially if accompanied by a claimed significance – when we are told that this measurement/number means this-or-that — we have the same essential question: What exactly are they really counting?

Naturally, there is a corollary question: Is the thing they counted really a measure of the thing being reported?

I recently came across an example in another field of just how intellectually dangerous the cognitive dependence (almost an addiction) on trend lines can be for scientific research.  Remember, trend lines on modern graphs are often calculated and drawn by statistical software packages, and the output of those packages is far too often taken to be some sort of revealed truth.
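To make that concrete, here is a minimal sketch, on invented data with no built-in trend, of how readily a statistics package will compute and draw a “trend” when asked (numpy’s least-squares fit stands in here for whatever package a given study happens to use):

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented noisy series with no built-in trend
rng = np.random.default_rng(0)
x = np.arange(50)
y = rng.normal(0, 1.0, x.size)

# The software will happily fit and report a linear "trend" anyway
slope, intercept = np.polyfit(x, y, 1)
print(f"fitted slope: {slope:.4f} units per step")

plt.plot(x, y, "o", label="data")
plt.plot(x, slope * x + intercept, "r-", label="fitted trend line")
plt.legend()
plt.show()
```

The package never asks whether a linear trend is a sensible description of the data; it just returns one.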

I have no desire to get into any controversy about the actual subject matter of the paper that produced the following graphs.  I have abbreviated the diagnosed condition on the graphs to gently disguise it.  Try to stay with me and focus not on the medical issue but on the way in which trend lines have affected the conclusions of the researchers.

Here’s the big data graph set from the supplemental information for the paper:

Note that these are graphs of Incidence Rates, which can be read as “how many cases of this disease are reported per 100,000 population?”, here grouped by 10-year age groups.  They have added colored trend lines where they think (opinion) significant changes have occurred in incidence rates.

[Image: age-specific incidence rates for men, one panel per 10-year age group]


IMPORTANT NOTE:  The condition being studied in this paper is not something that is seasonal or annual, like flu epidemics.  It is a condition that develops, in most cases, for years before being discovered and reported, sometimes only being discovered when it becomes debilitating.  It can also be discovered and reported through regular medical screening, which is normally done only in older people.  So “annual incidence” may not be a proper description of what has been measured — it is actually a measure of “annual cases discovered and reported”, not incidence, which is quite a different thing.
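For readers unfamiliar with the unit, the arithmetic behind a reported rate of this kind is simple; the numbers below are invented purely to show the calculation:

```python
# Invented numbers, purely to illustrate the arithmetic of a reported rate
reported_cases = 130        # cases discovered and reported in one year, in one age group
population = 1_200_000      # population of that age group in that year

rate_per_100k = reported_cases / population * 100_000
print(f"{rate_per_100k:.1f} reported cases per 100,000 per year")  # about 10.8
```

Note that the numerator is “cases discovered and reported”, which is exactly the distinction made above.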

The published paper uses a condensed version of the graphs:

[Image: condensed incidence-trend graphs from the published paper]

The older men and women are shown in the top panels, thankfully with incidence rates declining from the 1980s to the present.  However, as considerately reinforced by the addition of colored trend lines, the incidence rates in men and women younger than 50 years are rising rather steeply.  Based on this (and a lot of other considerations), the researchers draw this conclusion:

[Image: the Conclusions and Relevance statement from the paper, quoted further below]

Again, I have no particular opinion on the medical issues involved… they may be right for reasons not apparent.  But here’s the point I hope to communicate:

[Image: annotated Men > 50 and Men < 50 incidence panels (“Confused by Trendlines”)]

I have annotated the two panels concerning incidence rates in Men older than 50 and Men younger than 50.   Over the 45 years of data, the rate in Men > 50 runs in a range of 170 to 220 cases reported per 100,000 per year, varying over a 50-case band.   For Men < 50, incidence rates were very steady, between 8.5 and 11 cases per 100,000 per year, for 40 years, and only recently, in the last four data points, rose to 12 and 13 cases per 100,000 per year — an increase of one or two cases per 100,000 population per year. It may be the trend line alone that creates a sense of significance. For Men > 50, between 1970 and the early 1980s, there was an increase of 60 cases per 100,000 population.  Yet, for Men < 50, the increased discovery and reporting of an additional one or two cases per 100,000 is concluded to be a matter of “highest priority” — however, in reality, it may or may not be significant in a public health sense, and it may well be within the normal variance in discovery and reporting of this type of disease.

The range of incidence among Men < 50 remained the same from the late 1970s to the early 2010s — that’s pretty stable.  Then there are four slightly higher outliers in a row, with increases of 1 or 2 cases per 100,000.   That’s the data.
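To see how little it takes for a fitted line to manufacture an impression of a steep rise, here is a minimal sketch on invented stand-in numbers (not the paper’s data) shaped like the series just described: forty years in a narrow band, then four slightly higher points at the end:

```python
import numpy as np

# Invented stand-in for a flat incidence series (cases per 100,000 per year)
rng = np.random.default_rng(1)
flat_years = rng.uniform(8.5, 11.0, 40)            # ~40 years varying within a narrow band
recent_years = np.array([12.0, 12.5, 13.0, 13.0])  # four slightly higher points at the end
incidence = np.concatenate([flat_years, recent_years])
years = np.arange(incidence.size)

# Trend fitted over the whole record vs. trend fitted over just the recent points
whole_slope = np.polyfit(years, incidence, 1)[0]
recent_slope = np.polyfit(years[-8:], incidence[-8:], 1)[0]
print(f"whole-record slope : {whole_slope:.3f} cases/100,000 per year")
print(f"recent-only slope  : {recent_slope:.3f} cases/100,000 per year")
```

A trend line fitted to the last handful of points will always look dramatic compared with one fitted to the whole record, even when the underlying change is one or two cases per 100,000.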

If it were my data — and my topic — say, the number of Monarch butterflies visiting my garden each month, or something like that, I would notice from the panel of seven graphs further above that the trend lines confuse the issues.   Here it is again:

[Image: age-specific incidence rates for men, repeated]

If we try to ignore the trend lines, we can see in the first panel (20-29y) that incidence rates are the same in the current decade as they were in the 1970s — there is no change. The range represented in this panel, from lowest to highest data point, is less than 1.5 cases per 100,000 per year.

Skipping one panel and looking at 40-49y, we see the range has perhaps dropped a bit, but the entire range is less than 5 cases/100,000/year.  In this age group there is a trend line drawn showing an increase over the last 12-13 years, but the values are currently lower than in the 1970s.

In the remaining four panels, we see “hump shaped” data which, over the 50 years, remains in the same range within each age group.

It is important to remember that this is not an illness for which a cause is known or for which there is a method of prevention, although there is a treatment if the condition is discovered early enough.   It is a class of cancers, and its incidence is not controlled by public health actions to prevent the disease; public health actions are not causing the change in incidence.  It is known to be age-related and occurs increasingly often in men and women as they age.

It is the one panel, 30-39y, showing an increase in incidence of just over 2 cases/100,000/year, that is the controlling factor pushing the Men < 50 graph to show this increase (the 40-49y panel may be having the same effect).  Again, repeating the image to save readers scrolling up the page:

[Image: age-specific incidence rates for men, repeated]

Recall that the Conclusions and Relevance section of the paper put it this way: “This increase in incidence among a low-risk population calls for additional research on possible risk factors that may be affecting these younger cohorts. It appears that primary prevention should be the highest priority to reduce the number of younger adults developing CRC in the future.”

This essay is not about the incidence of this class of cancer among various age groups — it is about how having statistical software packages draw trend lines on top of your data can lead to confusion and possibly misunderstandings of the data itself.   I will admit that it is also possible to draw trend lines on top of one’s data for rhetorical reasons [ “expressed in terms intended to persuade or impress” ], as in our Climate.gov example (and millions of other examples in all fields of science).

In this medical case, there are additional findings and reasoning behind the researchers’ conclusions — none of which change the basic point of this essay about statistical packages discovering and drawing trend lines over the top of data on graphs.

Bottom Lines:

  1. Trend lines are NOT part of the data. The data is the data.
  2. Trend lines are always opinions and interpretations added to the data and depend on the definition (model, statistical formula, software package, whatever) one is using for “trend”. These opinions and interpretations can be valid, invalid, or nonsensical (and everything in between).
  3. Trend lines are NOT evidence — the data can be evidence, but not necessarily evidence of what it is claimed to be evidence for.
  4. Trends are not causes; they are effects. Past trends did not cause the present data.  Present data trends will not cause future data.
  5. If your data needs to be run through a statistical software package to determine a “trend”, then I would suggest that you need to do more or different research on your topic, or that your data is so noisy or random that a trend may be irrelevant.
  6. Assigning “significance” to calculated trends based on P-value is statistically invalid (a sketch illustrating one way this goes wrong follows this list).
  7. Don’t draw trend lines on graphs of your data. If your data is valid, to the best of your knowledge, it does not need trend lines to “explain” it to others.
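As a hedged illustration of item 6, the sketch below (invented data again) fits ordinary least-squares trends to series that are nothing but random walks — autocorrelated noise with no underlying trend at all — and the naive p-value nevertheless declares most of them “significant”:

```python
import numpy as np
from scipy import stats

# Sketch: naive OLS trend p-values on autocorrelated series (random walks)
# are frequently "significant" even though the process has no underlying trend.
rng = np.random.default_rng(7)
n_series, n_points = 1000, 50
false_positives = 0

for _ in range(n_series):
    walk = np.cumsum(rng.normal(0, 1, n_points))   # random walk: no real trend
    result = stats.linregress(np.arange(n_points), walk)
    if result.pvalue < 0.05:
        false_positives += 1

print(f"'significant' trends found in {false_positives} of {n_series} trendless series")
```

The exact count varies with the random seed, but the proportion of falsely “significant” trends stays far above the 5% that the p-value threshold would suggest.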
