Opinion By Kip Hansen — Re-Blogged From WUWT
In Part 1 of this two-part series, I detailed the growing furor over the U.S. Environmental Protection Agency’s (E.P.A.’s) proposed “Strengthening Transparency in Regulatory Science” rule — most often referred to as the Secret Science rule. A majority of the expressed concern about the rule centers on the Harvard Six Cities Study — a study being defended, in effect, by opposing the proposed E.P.A. rule. Here’s why:
This is a perfectly fine preliminary study of the topic. Its major finding:
“The adjusted mortality-rate ratio for the most polluted of the cities as compared with the least polluted was 1.26 (95 percent confidence interval, 1.08 to 1.47). Air pollution was positively associated with death from lung cancer and cardiopulmonary disease but not with death from other causes considered together. Mortality was most strongly associated with air pollution with fine particulates, including sulfates.”
“Although the effects of other, unmeasured risk factors cannot be excluded with certainty, these results suggest that fine-particulate air pollution, or a more complex pollution mixture associated with fine particulate matter, contributes to excess mortality in certain U.S. cities.”
The study had, in total, 8,111 subjects, all white, in six different cities — roughly 1,350 subjects per city. Of these, there were 1,429 deaths over the 14-to-16-year follow-up, or about 240 deaths per city. The city-specific rate ratios are all expressed in relation to Portage, Wisconsin.
The results? Summarized in the original study as:
Only the highlighted categories have Confidence Intervals (CIs) that DO NOT include the NULL (a risk ratio of 1, which indicates no difference in effect found). All of the CIs that exclude “1” have ranges that start very low. The chart shows clearly that it is chiefly Former and Current Smokers and those with Occupational Exposure (to gases, fumes, or dust) who show even a simple associational effect from fine-particulate air pollution.
Another look at the data from the study:
Again, we see (highlighted in PINK) that it is Current Smokers, Former Smokers (though not uniformly — only female former smokers and 10-pack-year male former smokers), men with less than a high school education [probably a marker for socio-economic status – kh], and women with high BMIs that show even small associational effects. ALL other classifications have 95% CIs that include the NULL-effect rate ratio of 1.
The cities are listed in order of least-pollution to highest-pollution. ONLY Steubenville — highlighted in YELLOW — the most polluted city, has a significant result, and that only for men.
What does “includes the NULL effect rate ratio of 1” mean?
These two cartoon images demonstrate that confidence intervals which include a rate ratio of 1 are compatible with the NULL hypothesis that there is NO EFFECT. For a result to be significant and reject the NULL of no effect, the confidence interval must NOT span the rate-ratio value of 1.
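The significance test described above can be reduced to a single comparison. Here is a minimal Python sketch (not from the study’s own code) that applies it to two confidence intervals reported in this article — the Six Cities most-vs-least-polluted rate ratio (1.26, 95% CI 1.08 to 1.47) and the California study’s lung-cancer relative risk (1.103, 95% CI 0.985 to 1.234):

```python
# A 95% confidence interval that spans the null rate ratio of 1
# cannot reject the "no effect" hypothesis. A simple sketch:

def rejects_null(ci_lower, ci_upper, null_value=1.0):
    """Return True if the confidence interval excludes the null value."""
    return ci_lower > null_value or ci_upper < null_value

# Six Cities, most vs. least polluted: RR 1.26, 95% CI 1.08 to 1.47
print(rejects_null(1.08, 1.47))    # True  -> statistically significant

# California study, lung cancer: RR 1.103, 95% CI 0.985 to 1.234
print(rejects_null(0.985, 1.234))  # False -> cannot reject "no effect"
```

Note that this is the only question significance answers: whether the interval excludes 1, not whether the effect is large enough to matter.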
What does that mean for the Six Cities study findings?
Very few of the statistical results in the Six Cities Study meet the requirements for being significant and rejecting the null hypothesis of “no effect”. Those that pass this simple basic test have results that are very small and are directly related to other known causes for the posited effect — smoking, occupational exposure, low socio-economic status, and high BMI. When comparing “more polluted cities” to the “least polluted city” ONLY ONE city, the most polluted city — Steubenville, Ohio — shows any significant effect at all. Even with Steubenville, the effect is very small with a rate ratio of only 1.26.
For a short introduction on the topic of evaluating environmental epidemiological results, see this seminal paper: ”The environment and disease: association or causation?” by Sir Austin Bradford Hill from the Journal of the Royal Society of Medicine.
Let’s look at Sir Austin Bradford Hill’s six factors for considering results:
- Strength of the association — The Six Cities effect findings are very small; the effect ratios are not 4 times, 10 times, or 40 times — the strongest of the findings between cities is only 1.26, with a CI of 1.06 to 1.50, barely excluding the null (no-effect) value of 1.
- Consistency of the observed association: The Six Cities findings are not consistent across cities’ air pollution levels, or between genders. The greatest consistency is with smoking status — current or former — but not with air pollution levels.
- Temporal relationship of the association – which is the cart and which the horse? The Six Cities study followed the cohort for 14-to-16 years. There is no data in the published study that relates how long the subjects lived in the cities under consideration — so this factor cannot be evaluated.
- Biological gradient, or dose-response curve: The rate ratios between cities — by pollution levels — do not demonstrate a dose-response curve — effects are not consistently larger as pollution levels increase, effects are not consistent between genders, and only the most polluted city shows a significant effect, and that only for men.
- Biologically plausible? It is biologically plausible that air pollution could cause increased mortality. It is not biologically plausible that air pollution would only cause increased mortality in the pattern shown in the study results.
- Coherence — association “should not seriously conflict with the generally known facts”: The results are coherent with some known factors: Smoking (current or former) causes increased mortality, occupational exposure to “gases, fumes, or dust” causes increased mortality, low socio-economic status is associated with increased mortality, and high BMI is associated with increased mortality. Extremely high levels of air pollution, think the killing smogs of London in the 1950s are associated with increased mortality. So, it is possible that air pollution at the levels found in these six cities could cause increased mortality. However, the weak results of the study are not sufficient to show this to be the case.
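The biological-gradient criterion above amounts to a monotonicity check: as exposure rises, the effect should rise too. A minimal sketch of such a check — using invented, clearly hypothetical city-level numbers, NOT the actual Six Cities figures, which are in the study’s tables:

```python
# Hypothetical illustration of a dose-response (biological gradient) check.
# If mortality rate ratios rose monotonically with pollution levels, Hill's
# criterion would be supported. These numbers are invented for the sketch;
# they are NOT the Six Cities figures.

def is_monotonic_dose_response(pairs):
    """pairs: list of (pollution_level, rate_ratio), in any order.
    Return True if rate ratios never decrease as pollution increases."""
    ordered = sorted(pairs)                      # sort by pollution level
    ratios = [rr for _, rr in ordered]
    return all(a <= b for a, b in zip(ratios, ratios[1:]))

# Hypothetical cities, least to most polluted:
cities = [(11, 1.00), (13, 1.05), (18, 0.97), (21, 1.10), (30, 1.26)]
print(is_monotonic_dose_response(cities))  # False: the ratio dips mid-range
```

A pattern like the hypothetical one above — ratios that dip and rise rather than climbing steadily with exposure — is what the article means by the absence of a dose-response curve.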
This quick review of the Six Cities study is not meant to be a serious or deep-dive analysis — it is just what it seems, a quick overview of the study’s strengths and weaknesses. Despite claims from Harvard’s T.H. Chan School of Public Health that this study revealed “a strong link between air pollution and mortality risk”, this review highlights why there is concern — bordering on the hysterical — that the authors might be forced to make the underlying data available for re-analysis by researchers not involved in the original work.
And the other studies being protected by anti-STIRS efforts?
Here’s the famous California study:
These are Relative Risks — only those highlighted in yellow are significant. All others have CIs that include the null-effect value of 1. The most biologically plausible effect for PM2.5, lung cancer, has the highest RR for PM2.5 of 1.103 (0.985-1.234), highlighted in pink — vanishingly small and failing the significance test.
The concern seems to be that if these results were to be re-analyzed by others, outside the original research group: Would even these very small associations disappear? Or would the re-analysis team deem them so small as to be irrelevant to anyone’s health?
Are such tiny effects real in the Real World?
I am not a statistician nor am I an environmental epidemiologist. I do have a good head for numbers — and I understand the basic concepts discussed above.
I can see why there is concern among researchers who have been advocating that very small amounts of air pollution are dangerous to the health of Americans (and, by extension, all humans) that these studies might be re-examined in the light of rigorous and strict scientific and statistical standards and found wanting. If they were my studies — and thus my reputation — I would be running scared at the idea that someone would really dig in, armed with all the original data, from a duly skeptical viewpoint and expose the inherent weaknesses of the analysis and subsequent findings.
When effects are this small, it is entirely possible that they are not real, but are artifacts of the statistical methods used in the original analysis. If these findings had had Relative Risks or Risk Ratios of 4.0 or 7.9, or any value that might indicate a strong association, then I would be more convinced. But with so many of the metrics not even passing the most basic test of significance, I am concerned that the findings represent only what John P.A. Ioannidis has termed “simply accurate measures of the prevailing bias.”
We see, in the defense of these studies, the wrong-headed viewpoint often found in some scientific fields, including epidemiological studies, that “lots of studies finding small associational or correlational results” are equal in truth-value to “one or two studies that find incontrovertibly strong results.”
High Time for Re-analysis
The problem with foundational studies such as these is that later work is based on the supposition that their findings are discovered truth; thus these studies’ findings are used as starting points — assumptions — in future studies. With so many governmental regulations based on studies such as these, maybe it is high time that the basic data from these studies — suitably cleaned of anything that might identify individuals and reveal their personal health information — be made available for strenuous re-analysis by disinterested researchers and statisticians. This is the stated purpose of the E.P.A.’s proposed “Strengthening Transparency in Regulatory Science” rule.
If the evidence from the studies is strong and convincing, and their methods valid and proper, then the studies will be upheld and their results validated. If not, then Science might possibly begin the process of scientific self-correction.
In either case, there is no downside; it is a Win-Win: the state of human knowledge will be improved and advanced.
# # # # #
This is an OPINION piece. Please feel free to disagree with my opinion and leave comments expressing your opinion.
This Secret Science battle is very important — if the forces of common sense and rigorous science prevail, the world will be better for it. If not, we will be condemned to be ruled by weak correlational research findings that are fueled by the desire to provide support for advocacy positions — many of which are not, in the commonly accepted sense, a reflection of the real world.