Is the Official Covid-19 Death Toll Accurate?

By James D. Agresti – Re-Blogged From WUWT

Overview

Roughly two-thirds of U.S. residents don’t believe the CDC’s official tally for the number of Covid-19 deaths. This distrust, however, flows in opposing directions. A nationally representative survey conducted by Axios/Ipsos in late July 2020 found that 37% of adults think the real number of C-19 fatalities in the U.S. is lower than reported, while 31% think the true death toll is greater than reported.

Continue reading

Report: Almost half of all COVID-19 deaths are linked to nursing homes

Re-Blogged From Informing America

Nearly half of all COVID-19 deaths in the United States are connected to nursing homes and long-term care centers, according to a new report from the New York Times.

The analysis found that more than 54,000 residents and workers at nursing homes and long-term care centers had died from coronavirus-related illnesses, and that over 282,000 people had been infected at 12,000 senior facilities across the country. COVID-19 cases at nursing homes made up only 11% of all U.S. COVID-19 cases, but accounted for approximately 43% of the total U.S. deaths.

Continue reading

Climate Statistics 101: See the Slide Show AOC Tried, and Failed, to Censor

By Caleb Rossiter – Re-Blogged From WUWT

This is the slide show and 20-minute talk that Representatives Alexandria Ocasio-Cortez and Chellie Pingree tried to censor at the LibertyCon 2020 conference in Washington, D.C. After Dr. Rossiter gave a climate talk at LibertyCon 2019, they wrote to sponsors of the event, such as Google and Facebook, and asked them not to fund any event with an appearance by “climate deniers” from the CO2 Coalition. See http://co2coalition.org/2019/01/30/representatives-ocasio-cortez-and-pingree-and-climate-change-debate/

Continue reading

Climate Change and a Pandemic of Lies

THE health establishment was looking away when the coronavirus struck; it had other priorities. If you look at the World Health Organisation’s list of health threats, number one is climate change. Pandemics were down in third place, behind ‘non-communicable diseases’ such as diabetes and obesity.

Wherever you look, you will find some of the biggest names in the public health establishment declaiming on the risks of climate change to world health. On the eve of the outbreak, the Royal Society of Tropical Medicine and Hygiene declared that we would be seeing ‘mass migration, emerging infectious diseases such as dengue and a shortage of food’. As the first people fell ill in Wuhan, the WHO announced that in ten years we would be seeing 250,000 additional deaths per year from malnutrition, malaria, diarrhoea and heat stress as a result of global warming. Epidemiologist Professor Andy Haines told readers of the Telegraph that ‘climate change is a threat to global and national security that is costing lives and livelihoods right now’. 

Weekly Climate and Energy News Roundup #409

By Ken Haapala, President, Science and Environmental Policy Project

Re-Blogged From WUWT

Quote of the Week: “When you can measure what you are speaking about, and express it in numbers, you know something about it; when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science.” – William Thomson, 1st Baron Kelvin

Number of the Week: ZERO – $0

Red Team Vs Blue Team: Various organizations, such as the military and cybersecurity firms, use red team vs blue team exercises in which the blue team applies the conventional thinking and tactics of the organization while the red team tries to break, or exploit weaknesses in, the conventional approach. Over the past several years there has been an effort to establish such a contest to demonstrate the strengths and weaknesses of the approach used by the UN Intergovernmental Panel on Climate Change (IPCC) and its followers. Thus far the effort has failed, and Washington is geared to the election cycle, making it unlikely such an approach will be used until after the elections, if ever.

Continue reading

Death and Disease From Climate-Sensitive Diseases

By Indur M. Goklany – Re-Blogged From WUWT

Between 1990 and 2017, the cumulative age-standardized death rate (ASDR) from climate-sensitive diseases and events (CSDEs) dropped from 8.1% of the all-cause ASDR to 5.5%, while the age-standardized burden of disease, measured by disability-adjusted life years lost (DALYs), declined from 12.0% to 8.0% of all-cause age-standardized DALYs. Thus, the burdens of death and disease from CSDEs are small, and getting smaller.

Figure 1: Climate-related deaths are a small proportion of all-cause fatalities (1990–2017). Based on data per IHME (2019).

But readers of the 2019 report of the Lancet Countdown (hereafter ‘the Countdown’), a partnership of 35 academic institutions and UN agencies, established by the prestigious Lancet group of medical journals and supported by the equally-esteemed Wellcome Trust to track progress on the health impacts of climate change, may well be left with the opposite impression, particularly if they do not delve beyond the Executive Summary, the section most likely to be read by busy policymakers or their advisors.

Not once does it mention that cumulative annual rates of death and disease from CSDEs are declining, and declining faster than the corresponding all-cause rates. The Countdown also fails to provide adequate context for the reader to judge the burdens of mortality or disease posed by CSDEs, individually or cumulatively, relative to other public-health threats. In fact, it even suggests that the health effects of climate change are ‘worsening’. But the data do not support that claim. Moreover, an analysis of the text makes it clear that the Countdown conflates estimates of increasing exposure, ‘demographic vulnerability’, and increased ‘suitability’ of disease transmission with actual health effects. These estimates are used as proxies, but trends in these estimates have not been verified to reflect, and do not track, long-term trends in deaths or death rates.

Figure 2: Burden of mortality from CSDEs, 1990–2017. The Forces of Nature group excludes deaths from geophysical causes per EMDAT (2019). Data per IHME (2019).

Continue reading

Why the Current Economic Slowdown Won’t Show Up in the Atmospheric CO2 Record

By Dr. Roy Spencer – Re-Blogged From WUWT

Summary: Atmospheric levels of carbon dioxide (CO2) continue to increase with no sign of the global economic slowdown in response to the spread of COVID-19. This is because the estimated reduction in CO2 emissions (around 11% globally during 2020) is too small to be noticed against a background of large natural variability. The reduction in economic activity would have to be about 4 times larger than 11% to halt the rise in atmospheric CO2.

Changes in the atmospheric reservoir of CO2 occur when there is an imbalance between surface sources and sinks of CO2. While the global land and ocean areas emit approximately 30 times as much CO2 into the atmosphere as humans produce from burning of fossil fuels, they also absorb about an equal amount of CO2. This is the global carbon cycle, driven mostly by biological activity.
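The scale of the problem can be sketched with a toy carbon-budget calculation (a rough illustration, not Dr. Spencer's model; the emission, sink, and variability figures below are round, assumed values):

```python
# Toy illustration (not Dr. Spencer's model): why an ~11% emissions cut is
# hard to see in the atmospheric CO2 record. All numbers are rough assumptions.
import random

random.seed(42)

EMISSIONS_PPM = 4.7      # assumed human emissions, expressed in ppm CO2 per year
SINK_FRACTION = 0.5      # assumed fraction removed by natural land/ocean sinks
NATURAL_NOISE = 1.0      # assumed 1-sigma of natural (e.g. ENSO) variability, ppm/yr

def annual_rise(cut=0.0):
    """Annual change in atmospheric CO2 (ppm) for a given fractional emissions cut."""
    emitted = EMISSIONS_PPM * (1.0 - cut)
    absorbed = SINK_FRACTION * emitted
    return emitted - absorbed + random.gauss(0.0, NATURAL_NOISE)

baseline = [annual_rise(0.00) for _ in range(30)]
reduced  = [annual_rise(0.11) for _ in range(30)]

print(f"mean rise, no cut : {sum(baseline)/len(baseline):.2f} ppm/yr")
print(f"mean rise, 11% cut: {sum(reduced)/len(reduced):.2f} ppm/yr")
# The ~0.26 ppm/yr difference is buried in year-to-year natural scatter of ~1 ppm.
```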

Continue reading

The #COVID19 Emergency Is Over!

By Willis Eschenbach – Re-Blogged From WUWT

Around the world, both state and local governments looked at wildly exaggerated computer model projections of millions of virus deaths, declared a “State Of Emergency”, and foolishly pulled the wheels off of their own economies. This has caused pain, suffering, and loss that far exceeds anything that the virus might do.

The virus hardly affects anyone—it has killed a maximum of 0.1% of the population in the very worst-hit locations. One-tenth of one measly percent.

Ah, I hear you saying, but that’s just deaths. What about hospitalizations? Glad you asked. Hospitalizations in the worst-hit areas have been about three times that, about a third of one percent of the population. Still not even one percent.

Continue reading

Of Tests and Confirmed Cases

By Willis Eschenbach – Re-Blogged From WUWT

I’ve been saying for some time now that the number of confirmed cases is a very poor way to measure the spread of the coronavirus infection. This, I’ve said, is because the number of new cases you’ll find depends on how much testing is being done. I’ve claimed that if you double your tests, you’ll get twice the confirmed cases.

However, that position was based on logic alone. I did not have one scrap of data to support or confirm it.
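That logic can be sketched in a few lines (a toy illustration only; the 8% positivity figure is an assumed constant, not a measured value):

```python
# Toy sketch of the "more tests, more confirmed cases" argument.
# Assumes the test-positivity rate stays roughly constant -- illustrative only.
POSITIVITY = 0.08          # assumed fraction of tests that come back positive

for tests_per_day in (10_000, 20_000, 40_000):
    confirmed = POSITIVITY * tests_per_day
    print(f"{tests_per_day:>6} tests/day -> ~{confirmed:,.0f} confirmed cases/day")
# Doubling the tests doubles the confirmed cases even if the true
# prevalence of infection has not changed at all.
```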

Max Roser is the data display genius behind the website Our World In Data. He has recently finished his coronavirus testing dataset, covering the patchwork quilt of testing in various countries. The data is available here.

Continue reading

Climate Change – Ebb and Flow of the Tide – Part 2 of 3

By Dr Kelvin Kemm – Re-Blogged From WUWT

Continued from Part 1

Emotional, agenda-driven politics confronts sound, evidence-based science

The topic of global warming and climate change is far more scientifically complex than the public is led to believe.

Myriads of newspaper, magazine and TV items over decades have tended to simplify the science to the point at which the general public believes that it is all so simple that any fool can see what is happening. Public groups often accuse world leaders and scientists of being fools, if they do not instantly act on simple messages projected by individuals or public groups.

One often hears phrases like: ‘The science is settled.’ It is not. Even more worrying is that the reality of the correct science is actually very different to much of the simple public perception.

Continue reading

Futile Fussings – a History of Graphical Failure From Cattle to #Coronavirus

By Kevin Kilty – Re-Blogged From WUWT

No planning is likely possible without calculations of what the future may hold, but such calculations are fraught with uncertainty when they also involve exponential processes. Indeed, as the author of one chapter in a recent book [1] states:

“One characteristic of an exponential growth process that humans find it really difficult to comprehend is how fast such a process actually is. Our daily experiences do not prepare us to judge such a process accurately, or to make sensible predictions.” [emphasis is mine.]

Quests to reveal a future governed by exponential processes, or what people guess to be exponential processes, run through many themes here at WUWT — future climate, energy demand, economics, epidemics. This guest contribution takes a selected look at exponential growth. Two examples are historical, and perhaps obscure, but pertinent. The third one, which comprises the bulk of this essay, is an examination of R0, which dominates the present imagination.
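As a minimal illustration of how unintuitive exponential growth is, consider a naive generation model in which cases multiply by R0 each serial interval (the R0 values and the 10 generations below are illustrative assumptions, not estimates from the essay):

```python
# Minimal illustration of how sensitive exponential growth is to R0.
# Naive generation model: cases multiply by R0 each serial interval.
def cases_after(generations: int, r0: float, seed_cases: int = 100) -> float:
    return seed_cases * r0 ** generations

for r0 in (1.5, 2.0, 2.5):
    print(f"R0 = {r0}: {cases_after(10, r0):,.0f} cases after 10 generations")
# Small differences in R0 compound into order-of-magnitude differences,
# which is why projections based on early, uncertain estimates diverge so widely.
```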

Continue reading

The Danger of Making #Coronavirus Decisions Without Reliable Data

From STAT – Re-Blogged From WUWT

A fiasco in the making? As the coronavirus pandemic takes hold, we are making decisions without reliable data

By John P.A. Ioannidis

The current coronavirus disease, Covid-19, has been called a once-in-a-century pandemic. But it may also be a once-in-a-century evidence fiasco.

At a time when everyone needs better information, from disease modelers and governments to people quarantined or just social distancing, we lack reliable evidence on how many people have been infected with SARS-CoV-2 or who continue to become infected. Better information is needed to guide decisions and actions of monumental significance and to monitor their impact.

Draconian countermeasures have been adopted in many countries. If the pandemic dissipates — either on its own or because of these measures — short-term extreme social distancing and lockdowns may be bearable. How long, though, should measures like these be continued if the pandemic churns across the globe unabated? How can policymakers tell if they are doing more good than harm?

Continue reading

Diamond Princess Mysteries

By Willis Eschenbach – Re-Blogged From WUWT

OK, here are my questions. We had a perfect petri-dish coronavirus disease (COVID-19) experiment with the cruise ship “Diamond Princess”. That’s the cruise ship that ended up in quarantine for a number of weeks after a number of people tested positive for the coronavirus. I got to wondering what the outcome of the experiment was.

So I dug around and found an analysis of the situation, with the catchy title of Estimating the infection and case fatality ratio for COVID-19 using age-adjusted data from the outbreak on the Diamond Princess cruise ship (PDF), so I could see what the outcomes were.

As you might imagine, before they knew it was a problem, the epidemic raged on the ship, with infected crew members cooking and cleaning for the guests, people all eating together, close living quarters, lots of social interaction, and a generally older population. Seems like a perfect situation for an overwhelming majority of the passengers to become infected.

Continue reading

Corona Virus Map

  By Bob Shapiro

Johns Hopkins has a Corona Virus map that appears to be updated daily showing where confirmed cases have been reported, along with stats on deaths and recoveries. It gives a breakdown by country – I was surprised that the US showed 15 confirmed cases.

Take a look at: https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6

CONTINUE READING –>

Explaining the Discrepancies Between Hausfather et al. (2019) and Lewis & Curry (2018)

[This is a technical analysis of climate models vs observations by an econometrician who helped show that Michael Mann’s “Hockey Stick” was false.  –Bob]

By Ross McKitrick – Re-Blogged From WUWT

Challenging the claim that a large set of climate model runs published since the 1970s are consistent with observations for the right reasons.

Introduction

Zeke Hausfather et al. (2019) (herein ZH19) examined a large set of climate model runs published since the 1970s and claimed they were consistent with observations, once errors in the emission projections are considered. It is an interesting and valuable paper and has received a lot of press attention. In this post, I will explain what the authors did and then discuss a couple of issues arising, beginning with IPCC over-estimation of CO2 emissions, a literature to which Hausfather et al. make a striking contribution. I will then present a critique of some aspects of their regression analyses. I find that they have not specified their main regression correctly, and this undermines some of their conclusions. Using a more valid regression model helps explain why their findings aren’t inconsistent with Lewis and Curry (2018) which did show models to be inconsistent with observations.

Outline of the ZH19 Analysis:

A climate model projection can be factored into two parts: the implied (transient) climate sensitivity (to increased forcing) over the projection period and the projected increase in forcing. The first derives from the model’s Equilibrium Climate Sensitivity (ECS) and the ocean heat uptake rate. It will be approximately equal to the model’s transient climate response (TCR), although the discussion in ZH19 is for a shorter period than the 70 years used for TCR computation. The second comes from a submodel that takes annual GHG emissions and other anthropogenic factors as inputs, generates implied CO2 and other GHG concentrations, then converts them into forcings, expressed in Watts per square meter. The emission forecasts are based on socioeconomic projections and are therefore external to the climate model.
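That factoring can be expressed in back-of-envelope form (a hedged sketch with made-up numbers, not ZH19's actual calculation): projected warming is roughly the implied transient sensitivity times the projected forcing increase, so an error in the emissions/forcing submodel scales directly into the temperature projection.

```python
# Hedged sketch of the decomposition described above (not ZH19's code):
# projected warming ~= implied transient sensitivity (degC per W/m^2)
#                      x projected increase in forcing (W/m^2).
def projected_warming(sensitivity_per_wm2: float, delta_forcing: float) -> float:
    return sensitivity_per_wm2 * delta_forcing

SENSITIVITY = 0.5        # assumed implied transient sensitivity, degC per W/m^2
FORCING_ACTUAL = 1.0     # assumed realized forcing increase over the period, W/m^2
FORCING_PROJECTED = 1.3  # assumed over-projected forcing (emissions too high), W/m^2

print(projected_warming(SENSITIVITY, FORCING_PROJECTED))  # model as published
print(projected_warming(SENSITIVITY, FORCING_ACTUAL))     # same model, actual forcing
# A model can "run hot" either because its sensitivity is too high or because its
# forcing projection was too high -- the ZH19 adjustment addresses the latter.
```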

Continue reading

Wet Years, Dry Years

By Willis Eschenbach – Re-Blogged From WUWT

I keep reading all kinds of claims that the slight warming we’ve been experiencing over the last century has already led to an increase in droughts. A few years ago there were a couple of very dry years here in California, and the alarmists were claiming that “global warming” had put us into “permanent drought”.

Of course, the rains returned. This season we’re at about 120% of normal … it’s called “weather”.

In any case, I thought I'd take a look at the severity of droughts in the US over the last century. I always like to look at the longest dataset I can find. In this case, I got the data from NOAA's CLIMDIV dataset. Figure 1 shows the monthly variations from 1895 to the present. Note that I've inverted the Y-axis on the graph, so higher on the graph is drier, and down near the bottom is wetter.

Figure 1. Monthly Palmer Drought Severity Index for the continental US, 1895-2019. Above the dashed line is drier, below the line is wetter.
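A plot of that kind can be reproduced along these lines (a hedged sketch assuming the CLIMDIV PDSI values have already been saved to a local CSV; the file name and column names are placeholders, not NOAA's actual layout):

```python
# Hedged sketch of the plot described above, assuming PDSI values are in a local
# CSV with "date" and "pdsi" columns (hypothetical file and column names).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("conus_pdsi.csv", parse_dates=["date"])  # hypothetical file

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(df["date"], df["pdsi"], lw=0.5)
ax.invert_yaxis()                      # up = drier, as in Figure 1
ax.axhline(0, ls="--", color="gray")   # dashed line separating dry from wet
ax.set_ylabel("Palmer Drought Severity Index (inverted)")
ax.set_title("Monthly PDSI, contiguous US, 1895-present")
plt.show()
```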

Continue reading

What if There is no Climate Emergency ?

By edmhdotme – Re-Blogged From WUWT

What if there is no Catastrophic Risk from Man-made Global Warming ?
What if Man-made CO2 emissions are not the “Climate Control Knob” ?
What if Man-made CO2 emissions really are a non-problem ?
But what if there is a real Global Cooling Catastrophe in the offing ?

Continue reading

If You Put Junk Science In, You’ll Get Junk Science Out

By Chris Martz – Re-Blogged From WUWT

 

There are plenty of climate scientists in the world whom I highly respect, many of whose views on climate change I don't share. However, these scientists are respectful towards others, they're pretty honest with their data, and they still have scientific integrity.

There are a select few scientists out there, however, whom I have lost all respect for - Dr. Michael Mann being one of them.

Continue reading

Alternate Inflation Charts

Re-Blogged From Shadow Stats

The CPI chart on the home page reflects our estimate of inflation for today as if it were calculated the same way it was in 1990. The CPI on the Alternate Data Series tab here reflects the CPI as if it were calculated using the methodologies in place in 1980. In general terms, methodological shifts in government reporting have depressed reported inflation, moving the concept of the CPI away from being a measure of the cost of living needed to maintain a constant standard of living.

Further definition is provided in our  CPI Glossary. Further background on the SGS-Alternate CPI series is available in our Public Comment on Inflation Measurement.

Continue reading

L. A. Times Hypes Propaganda Denying Global Wide Medieval Warm Period and Little Ice Age

By Larry Hamlin – Re-Blogged From WUWT

The Los Angeles Times is at it again, hyping anti-science climate-alarmist propaganda that tries to conceal the global Medieval Warm Period and the Little Ice Age, both of which are supported by hundreds of scientific studies.


This climate alarmist propaganda Times article cites a new “study” that ridiculously attempts to deny these clearly established warm and cool periods in our past.

This alarmist-hyped new “study” is addressed in a superb article at the JoNova website demonstrating the complete lack of scientific veracity of the study’s claims.

Continue reading

Resolution and Hockey Sticks, Part 1

By David Middleton – Re-Blogged From WUWT

Resolution vs. Detection

Geoscientists in the oil & gas industry spend much of our time integrating data sets of vastly different resolutions.  Geological data, primarily well logs, are very high resolution (6 inches to 2 feet vertical resolution).  Geophysical data, primarily reflection seismic surveys, are of much lower and highly variable resolution, dependent on the seismic velocities of the rocks and the frequency content of the seismic data.  The rule of thumb is that a stratigraphic unit must be at least as thick as one-quarter of the seismic wavelength (λ/4) to be resolved.

Figure 1a. Seismic wavelength vs velocity for 10, 25, 50 and 100 Hz dominant frequencies. (SEG Wiki)
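The rule of thumb above can be put in numbers (illustrative velocities and frequencies; wavelength = velocity / frequency, resolvable thickness ≈ λ/4):

```python
# Quick illustration of the lambda/4 rule of thumb described above.
# Velocities and frequencies are typical, illustrative values.
def min_resolvable_thickness_m(velocity_m_s: float, dominant_freq_hz: float) -> float:
    wavelength = velocity_m_s / dominant_freq_hz      # lambda = v / f
    return wavelength / 4.0                           # tuning thickness ~ lambda/4

for v, f in [(2000, 50), (3500, 25), (5000, 10)]:
    print(f"v = {v} m/s, f = {f} Hz -> ~{min_resolvable_thickness_m(v, f):.0f} m resolvable")
# Deeper, faster, lower-frequency data resolve only much thicker units --
# which is why seismic resolution is so much coarser than well-log resolution.
```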

Continue reading

Scientific Hubris and Global Warming

By Gregory Sloop – Re-Blogged From WUWT

Notwithstanding portrayals in the movies as eccentrics who frantically warn humanity about genetically modified dinosaurs, aliens, and planet-killing asteroids, the popular image of a scientist is probably closer to the humble, bookish Professor, who used his intellect to save the castaways on practically every episode of Gilligan’s Island. The stereotypical scientist is seen as driven by a magnificent call, not some common, base motive. Unquestionably, science progresses unerringly to the truth.

This picture was challenged by the influential twentieth-century philosopher of science Thomas Kuhn, who held that scientific ”truth” is determined not as much by facts as by the consensus of the scientific community. The influence of thought leaders, rewarding of grants, and scorn of dissenters are used to protect mainstream theory. Unfortunately, science only makes genuine progress when the mainstream theory is disproved, what Kuhn called a “paradigm shift.” Data which conflict with the mainstream paradigm are ignored instead of used to develop a better one. Like most people, scientists are ultimately motivated by financial security, career advancement, and the desire for admiration. Thus, nonscientific considerations impact scientific “truth.”

Continue reading

Rein in the Four Horsemen of Irreproducibility

   By Dorothy Bishop – Re-Blogged From Nature

Dorothy Bishop describes how threats to reproducibility, recognized but unaddressed for decades, might finally be brought under control.

More than four decades into my scientific career, I find myself an outlier among academics of similar age and seniority: I strongly identify with the movement to make the practice of science more robust. It’s not that my contemporaries are unconcerned about doing science well; it’s just that many of them don’t seem to recognize that there are serious problems with current practices. By contrast, I think that, in two decades, we will look back on the past 60 years — particularly in biomedical science — and marvel at how much time and money has been wasted on flawed research.

How can that be? We know how to formulate and test hypotheses in controlled experiments. We can account for unwanted variation with statistical techniques. We appreciate the need to replicate observations.

Continue reading

Greta’s Two Degrees

By Josh – Re-Blogged From WUWT

Josh comes through with another cartoon. In case you missed it, over the weekend, Willis Eschenbach published “Planet-Sized Experiments – we’ve already done the 2°C test”

One of the truths in that article was directed at this past weekend’s “climate strikes”, inspired by 16 year old Greta Thunberg. Willis makes a lot of sense with this introduction to data:

People often say that we’re heading into the unknown with regards to CO2 and the planet. They say we can’t know, for example, what a 2°C warming will do because we can’t do the experiment. This is seen as important because for unknown reasons, people have battened on to “2°C” as being the scary temperature rise that we’re told we have to avoid at all costs.

But actually, as it turns out, we have already done the experiment.
Below I show the Berkeley Earth average surface temperature record for Europe. Europe is a good location to analyze, because some of the longest continuous temperature records are from Europe. In addition, there are a lot of stations in Europe that have been taking records for a long time. This gives us lots of good data.

Continue reading

30 Years of NOAA Tide Gauge Data Debunk 1988 Senate Hearing

By Larry Hamlin – Re-Blogged From WUWT

NOAA has updated its coastal tide gauge measurement data through year 2018 with this update now providing 30 years of actual data since the infamous 1988 Senate hearings that launched the U.S. climate alarmist political propaganda campaign.

In June of 1988, testimony was provided before Congress by various scientists, including NASA’s Dr. James Hansen, claiming that man-made greenhouse gas emissions were responsible for increasing global temperatures, with the New York Times reporting, “Global Warming Has Begun, Expert Tells Senate”.

The Times article noted that “The rise in global temperature is predicted to cause a thermal expansion of the oceans and to melt glaciers and polar ice, thus causing sea levels to rise by one to four feet by the middle of the next century. Scientists have already detected a slight rise in sea levels.”

Continue reading

BoM’s Changes to Darwin’s Climate History

By Dr. Jennifer Marohasy – Re-Blogged From WUWT

The hubris of the Australian Bureau of Meteorology is on full display with its most recent remodelling of the historic temperature record for Darwin. The Bureau has further dramatically increased the rate of global warming at Darwin by further artificially lowering historic temperatures.

The remodelling begins by shortening the historical temperature record so that it starts just after the very hot years of the Federation drought. It then changes the daily values: that is, the observed, measured temperatures are changed to something else.

For example, on 1st January 1910 the maximum temperature recorded at the Darwin post office was 34.2 degrees Celsius.

Continue reading

Sea Level and Effective N

By Willis Eschenbach – Re-Blogged From WUWT

Over in the Tweeterverse, I said I wasn’t a denier, and I challenged folks to point out what they think I deny. Brandon R. Gates took up the challenge by claiming that I denied that sea level rise is accelerating. I replied:

Brandon, IF such acceleration exists it is meaninglessly tiny. I can’t find any statistically significant evidence that it is real. HOWEVER, I don’t “deny” a damn thing. I just disagree about the statistics.

Brandon replied:

> IF such acceleration exists

It’s there, Willis. And you’ve been shown.

> it is meaninglessly tiny

When you have a better model for how climate works, then you can talk to me about relative magnitudes of effects.

Continue reading

Global Mean Surface Temperature

By Bob Tisdale – Re-Blogged From WUWT

This is a long post: 3500+ words and 22 illustrations. Regardless, heretics of the church of human-induced global warming who frequent this blog should enjoy it. Additionally, I’ve uncovered something about the climate models stored in the CMIP5 archive that I hadn’t heard mentioned or seen presented before. It amazed even me, and I know how poorly these climate models perform. It’s yet another level of inconsistency between models, and it’s something very basic. It should help put to rest the laughable argument that climate models are based on well-documented physical processes.

INTRODUCTION

After isolating 4 climate model ensemble members with specific characteristics (explained later in this introduction), this post presents (1) observed and climate model-simulated global mean sea surface temperatures, and (2) observed and climate model-simulated global mean land near-surface air temperatures, all during the 30-year period with the highest observed warming rate before the year 1950. The climate model outputs being presented are those stored in the Coupled Model Intercomparison Project Phase 5 (CMIP5) archives, which were used by the Intergovernmental Panel on Climate Change (IPCC) for their 5th Assessment Report (AR5). Specifically, the ensemble member outputs being presented are those with historic forcings through 2005 and RCP8.5 (worst-case scenario) forcings thereafter. In other words, the ensemble members being presented during this early warming period are being driven with historic forcings, and they are from the simulations that later include the RCP8.5 forcings.

Continue reading

Reassessing the RCPs

By Kevin Murphy – Re-Blogged From WUWT

A response to: “Is RCP8.5 an impossible scenario?”. This post demonstrates that RCP8.5 is so highly improbable that it should be dismissed from consideration, and thereby draws into question the validity of RCP8.5-based assertions such as those made in the Fourth National Climate Assessment from the U.S. Global Change Research Program.

Analyses of future climate change since the IPCC’s 5th Assessment Report (AR5) have been based on representative concentration pathways (RCPs) that detail how a range of future climate forcings might evolve.

Several years ago, a set of RCPs was requested by the climate modeling research community to span the range of net forcing from 2.6 W/m2 to 8.5 W/m2 (in year 2100 relative to 1750) so that the physics within the models could be fully exercised. Four of them were developed and designated as RCP2.6, RCP4.5, RCP6.0 and RCP8.5. They have been used in ongoing research and as the basis for impact analyses and future climate projections.

Continue reading

Another Climate Propaganda Story Promoting the Normal as Abnormal

By Dr. Tim Ball – Re-Blogged From WUWT

Almost every day there are stories in the media about weather or climate events that create the impression that they are new and outside of the normal pattern. None of them are. The objective is to sensationalize the story, even if it means using a meaningless period. A simple trick is to pick a period in which your claim is valid. This practice of cherry-picking the period of study is not exclusive to the media. It was a clear sign of corruption of climatology brought to a head with Roseanne D’Arrigo’s infamous comment to the 2006 National Academy of Science (NAS) panel that “if you are going to make a cherry pie, you have to pick cherries.”

That doesn’t condone the media’s use of the technique. All it does is illustrate why it was a convenient technique for creating a deception about what is normal. For example, a 2017 BBC headline said “Hottest June day since summer of 1976 in heatwave.” That is 41 years, which is statistically significant but not climatologically significant. A YouTube story reports “Sydney has wettest November day since 1984.” CBS Pittsburgh reported “NWS: 2018 is the 2nd Wettest Year on Record in Pittsburgh.” The record began in 1871, or 147 years ago, but even that is not climatologically significant. The ones I like are this one from North Carolina, that says, “A Look Back at the Coldest day Ever in North Carolina.” “Ever” is approximately 4.5 billion years.

Continue reading

We Should Ditch GDP As A Measure Of Economic Activity

By Alasdair Macleod – Re-Blogged From Silver Phoenix

This article exposes the false economic concepts behind GDP, which is only the visible tip of a large iceberg of economic deceit. Describing an increase in GDP as economic growth owes its meagre validity to imprecise definition. An economy does not grow, only the quantity of fiat currency deployed grows. A successful economy progresses our condition, our wealth, our standards of living. The evolution of misleading statistics such as GDP to their current condition is only governed by their usefulness to governments, not as an objective development of sound theory by seekers of truth.

There are perhaps two plausible reasons for producing the GDP statistic, other than employing statisticians, and both have nothing to do with economics. By compiling the figures, a government keeps track of its tax base, and it can enter into the game of my-country-is-bigger-than-yours.

Continue reading

The ‘Trick’ of Anomalous Temperature Anomalies

By Kip Hansen  Re-Blogged From WUWT

It seems that every time  we turn around, we are presented with a new Science Fact that such-and-so metric — Sea Level Rise, Global Average Surface Temperature, Ocean Heat Content, Polar Bear populations, Puffin populations — has changed dramatically — “It’s unprecedented!” — and these statements are often backed by a graph illustrating the sharp rise (or, in other cases, sharp fall) as the anomaly of the metric from some baseline.  In most cases, the anomaly is actually very small and the change is magnified by cranking up the y-axis to make this very small change appear to be a steep rise (or fall).  Adding power to these statements and their graphs is the claimed precision of the anomaly — in Global Average Surface Temperature, it is often shown in tenths or even hundredths of a Centigrade degree.  Compounding the situation, the anomaly is shown with no (or very small) “error” or “uncertainty” bars, which are, even when shown,  not error bars or uncertainty bars  but actually statistical Standard Deviations (and only sometimes so marked or labelled).
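The axis-scaling effect described above is easy to demonstrate with synthetic data (a hedged sketch; the trend, noise, and baseline values are made up purely for illustration):

```python
# Hedged sketch of the axis-scaling effect described above, using synthetic data.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1880, 2020)
anomaly = 0.008 * (years - 1880) + np.random.normal(0, 0.1, years.size)  # ~1.1 C rise

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(years, anomaly)
ax1.set_title("Anomaly, tightly scaled axis")       # looks dramatic

ax2.plot(years, anomaly + 14.0)                     # same data, absolute ~14-15 C
ax2.set_ylim(-10, 40)                               # axis spanning everyday experience
ax2.set_title("Same data on an absolute scale")     # looks nearly flat
plt.show()
```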


Continue reading

Almost Earth-like, We’re Certain

By Kip Hansen – Re-Blogged From WUWT

There has been a lot of news recently about exoplanets. An extrasolar planet is a planet outside our solar system.  The Wiki article has a list of exoplanets.  I only mention exoplanets because there is a set of criteria for specifications of what could turn out to be an “Earth-like planet”, of interest to space scientists, I suppose, as they might harbor “life-as-we-know-it” and/or be a potential colonization target.

One of those specifications for an Earth-like planet is an appropriate average surface temperature, usually said to be 15 °C.  In fact, our planet, Sol 3 or simply Earth, is very close to qualifying as Earth-like as far as surface temperature goes. Here’s that featured image full-sized:


Continue reading

Researcher: Beware Scientific Studies — Most Are Wrong

By AFP – Re-Blogged From Newsmax Health

A few years ago, two researchers took the 50 most-used ingredients in a cookbook and studied how many had been linked with a cancer risk or benefit, based on a variety of studies published in scientific journals.

The result? Forty out of 50, including salt, flour, parsley, and sugar. “Is everything we eat associated with cancer?” the researchers wondered in a 2013 article based on their findings.

Continue reading

The Yield Curve and Recession

   By Bob Shapiro

The Federal Reserve (FED) has raised interest rates 7 times during its latest tightening cycle, after almost 10 years of its previous rate suppression binge.

What has tended to happen in previous interest-rate tightenings is that shorter-term interest rates rise somewhat faster than long rates, and at some point short rates catch up to and pass long rates. This rare situation is referred to as an ‘Inverted Yield Curve.’
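The idea can be put into a tiny sketch (the yields below are made-up illustrative figures, not market data):

```python
# Tiny sketch of the "inverted yield curve" idea described above,
# using made-up yields for illustration.
def is_inverted(short_rate_pct: float, long_rate_pct: float) -> bool:
    """The curve is called inverted when the short rate exceeds the long rate."""
    return short_rate_pct > long_rate_pct

history = [
    ("early cycle", 1.0, 3.0),
    ("mid cycle",   2.5, 3.1),
    ("late cycle",  3.2, 3.0),   # short rate has caught up to and passed the long rate
]
for label, two_year, ten_year in history:
    spread = ten_year - two_year
    print(f"{label}: 10y-2y spread = {spread:+.1f}%  inverted = {is_inverted(two_year, ten_year)}")
```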

Continue reading

‘The Data Thugs’

By Peter D. Tillman – Re-Blogged From http://www.WattsUpWithThat.com

Got your attention, didn’t it? But they are actually the good guys — two working scientists who, behind the scenes, have had striking success in bringing on retractions by publicly calling out questionable data. Their work was written up in Science Magazine in a freely-available article, here.

Once a problematic paper has been identified, it’s seldom straightforward getting it fixed. Nick Brown and James Heathers have had an unusual number of successes, perhaps because they start out low-key but don’t hesitate to go public if they get no response. Other would-be whistle-blowers have had less success, as the Science article describes in some detail. One whistle-blower’s efforts attracted legal threats — another scenario WUWT readers will recall, with a few progressing to actual lawsuits. The litigious Dr. Michael Mann comes to mind.

Continue reading

Nearly 102 Million Working Age Americans Do Not Have A Job

By Michael Snyder – Re-Blogged From Freedom Outpost

Don’t get too excited about the “good employment numbers” that you are hearing about from the mainstream media.  The truth is that they actually aren’t very good at all.  For years, the federal government has been taking numbers out of one category and putting them into another category and calling it “progress”, and in this article, we will break down exactly what has been happening.  We are being told that the U.S. unemployment rate has fallen to “3.8 percent”, which is supposedly the lowest that it has been “in nearly 50 years”.  If these were honest numbers that would be great news.  But these are not honest numbers…

Continue reading

Errorless Global Mean Sea Level Rise

By Kip Hansen – Re-Blogged From http://www.WattsUpWithThat.com

Have you ever noticed that whenever NASA or NOAA presents a graph of satellite-era Global Mean Sea Level rise, there are no error bars?  There are no Confidence Intervals?  There is no Uncertainty Range?   In a previous essay on SLR, I annotated the graph at the left to show that while tide gauge-based SLR data had (way-too-small) error bars, satellite-based global mean sea level was [sarcastically] “errorless” — meaning only that it shows no indication of uncertainty.


Continue reading

Paleoclimate Study From China Suggests Warmer Temperatures in the Past

By Anthony Watts – Re-Blogged From http://www.WattsUpWithThat.com

People send me stuff. Today in my inbox, WUWT regular Michael Palmer sends this note:

My wife Shenhui Lang found and translated an interesting article from 1973 that attempts the reconstruction of a climate record for China through several millennia (see attached).  The author is long dead (he died in 1974), and “China Daily” is now the name of an English language newspaper established only in 1981. I think it would be very difficult to even locate anyone holding the rights to the original, and very unlikely for anyone to take [copyright] issue with the publication of the English translation.

The paper is interesting in that it shows a correlation between height of the Norwegian snow line and temperature in China for the last 5000 years.


Sea Level: Rise and Fall – Computational Hubris

By Kip Hansen – Re-Blogged From http://www.WattsUpWithThat.com

[Part 3 of 3 -Bob]

Sea Level Rise – Measured from Space?

There have been so many very good essays on Global Sea Level Rise by persons all of whom have a great deal more expertise than I.   Jo Nova hosts a dozen or so excellent essays, which point at another score of papers and publications, for the most part clearly demonstrating that there are two contrarian positions on sea level rise in the scientific community:  1) Sea level has risen, is rising and will continue to rise at a rate approximately 8-12 inches (20-30 centimeters) per century — due to geological and long-term climatic forces well beyond our control;  and 2a) Other than explicit cases of Local Relative SLR, the sea does not appear to be rising much over the last 50-70 years, if at all.  2b) If it is rising due to general warming of the climate it will not add much to position 1.


Continue reading

Does Air Pollution Really Shorten Life Spans?

[I highly recommend Dr Goklany’s book “The Improving State of Our World.” It’s a thick book for not so much money. -Bob]

By Dr. Indur M. Goklany – Re-Blogged From http://www.WattsUpWithThat.com

Periodically we are flooded with reports of air pollution episodes in various developing countries, and claims of their staggering death toll, and consequent reductions in life spans. The Economic Times (India), for example, recently claimed:

Continue reading

SEA LEVEL: Rise and Fall- Part 2 – Tide Gauges

By Kip Hansen – Re-Blogged From http://www.WattsUpWithThat.com

Why do we even talk about sea level and sea level rise?

There are two important points which readers must be aware of from the first mention of Sea Level Rise (SLR):

  1. SLR is a real concern to coastal cities, low-lying islands and coastal and near-coastal densely-populated areas. It can be a real problem. See Part 1 of this series.
  2. SLR is not a threat to much else — not now, not in a hundred years — probably not in a thousand years — maybe, not ever. While it is a valid concern for some coastal cities and low-lying coastal areas, in a global sense, it is a fake problem. 

In order to talk about Sea Level Rise, we must first nail down Sea Level itself.

Continue reading

NOAA Global Temperature Data Doesn’t Prove Global Warming

By Mikhail Voloshin, via his Facebook page – Re-Blogged From http://www.WattsUpWithThat.com

[A deceptively important essay showing that, even accepting official, massaged global temperature data sets at face value, the CAGW Alarmists are very far from showing that Natural Variation is not causing the observed rise since 1880. -Bob]

Random Walk analysis of NOAA global temperature anomaly data

Summary

The global temperature record doesn’t demonstrate an upward trend. It doesn’t demonstrate a lack of upward trend either. Temperature readings today are about 0.75°C higher than they were when measurement began in 1880, but you can’t always slap a trendline onto a graph and declare, “See? It’s rising!” Often what you think is a pattern is actually just Brownian motion. When the global temperature record is tested against a hypothesis of random drift, the data fail to rule out the hypothesis. This doesn’t mean that there isn’t an upward trend, but it does mean that the global temperature record can be explained by simply assuming a random walk. The standard graph of temperatures over time, despite showing higher averages in recent decades than in earlier ones, doesn’t constitute a “smoking gun” for global warming, whether natural or anthropogenic; merely drawing a straight line from beginning to end and declaring it a trend is a grossly naive and unscientific oversimplification, and shouldn’t be used as an argument in serious discussions of environmental policy.
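The kind of test being described can be sketched in a few lines (a hedged illustration with assumed parameters, not Voloshin's actual analysis): simulate trendless random walks with roughly the length and year-to-year variability of the instrumental record and ask how often they drift as far as the observed ~0.75°C.

```python
# Hedged sketch (not Voloshin's actual analysis): how often does a pure random
# walk, with no trend built in, drift as far as ~0.75 C over ~140 years?
import random

random.seed(0)
YEARS = 140
STEP_SD = 0.09          # assumed year-to-year variability, degrees C
OBSERVED_DRIFT = 0.75   # approximate observed change since 1880, degrees C

def random_walk_drift() -> float:
    x = 0.0
    for _ in range(YEARS):
        x += random.gauss(0.0, STEP_SD)
    return x

trials = 10_000
hits = sum(abs(random_walk_drift()) >= OBSERVED_DRIFT for _ in range(trials))
print(f"{hits / trials:.1%} of trendless random walks drift at least {OBSERVED_DRIFT} C")
# If that fraction is not tiny, the raw drift alone cannot rule out random-walk behavior.
```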

Continue reading

An Inconvenient Deception

By Dr. Roy Spencer – Re-Blogged From http://www.WattsUpWithThat.com

How Al Gore Distorts Climate Science and Energy Policy – Al Gore has provided a target-rich environment of deceptions in his new movie.

After viewing Gore’s most recent movie, An Inconvenient Sequel: Truth to Power, and after reading the book version of the movie, I was more than a little astounded. The new movie and book are chock-full of bad science, bad policy, and factual errors.

So, I was inspired to do something about it. I’d like to announce my new e-book, entitled An Inconvenient Deception: How Al Gore Distorts Climate Science and Energy Policy, now available on Amazon.com.

Continue reading

The Laws of Averages: Part 1, Fruit Salad

By Kip Hansen – Re-Blogged From http://www.WattsUpWithThat.com


Averages: A Primer

As both the word and the concept “average” are subject to a great deal of confusion and misunderstanding in the general public, and as both have seen an overwhelming amount of “loose usage” even in scientific circles, not excluding peer-reviewed journal articles and scientific press releases, let’s have a quick primer (correctly pronounced “primmer”), or refresher, on averages (the cognoscenti can skip this bit and jump directly to Fruit Salad).

Continue reading

Whatever Happened to the Invisible Hand of Capitalism?

By Vitaliy Katsenelson – Re-Blogged From Contrarian Edge

When I was growing up in the Soviet Union, our local grocery store had two types of sugar: The cheap one was priced at 96 kopecks (Russian cents) a kilo and the expensive one at 104 kopecks. I vividly remember these prices because they didn’t change for a decade. The prices were not set by sugar supply and demand but were determined by a well-meaning bureaucrat (who may even have been an economist) a thousand miles away. If all Russian housewives (and househusbands) had decided to go on an apple pie diet and started baking pies for breakfast, lunch and dinner, sugar demand would have increased but the prices still would have been 96 and 104 kopecks. As a result, we would have had a shortage of sugar — a very common occurrence in the Soviet era.

In a capitalist economy, the invisible hand serves a very important but underappreciated role: It is a signaling mechanism that helps balance supply and demand. High demand leads to higher prices, telegraphing suppliers that they’ll make more money if they produce extra goods. Additional supply lowers prices, bringing them to a new equilibrium. I am slightly embarrassed as I write this, because you may confuse me for an economist — I am not one. But this is how prices are set for millions of goods globally on a daily basis in free-market economies.

Continue reading

The Meaning and Utility of Averages as it Applies to Climate

By Clyde Spencer – Re-Blogged From http://www.WattsUpWithThat.com

Introduction

I recently had a guest editorial published here on the topic of data error and precision. If you missed it, I suggest that you read it before continuing with this article. This will then make more sense. And, I won’t feel the need to go back over the fundamentals. What follows is, in part, prompted by some of the comments to the original article. This is a discussion of how the reported, average global temperatures should be interpreted.

Averages

Averages can serve several purposes. A common one is to increase accuracy and precision of the determination of some fixed property, such as a physical dimension. This is accomplished by confining all the random error to the process of measurement. Under appropriate circumstances, such as determining the diameter of a ball bearing with a micrometer, multiple readings can provide a more precise average diameter. This is because the random errors in reading the micrometer will cancel out and the precision is provided by the Standard Error of the Mean, which is inversely related to the square root of the number of measurements.
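The ball-bearing example works out numerically like this (a minimal sketch with an assumed true diameter and reading error):

```python
# Minimal numerical illustration of the ball-bearing example above.
# One fixed true diameter, many readings with random measurement error.
import random
import statistics

random.seed(1)
TRUE_DIAMETER_MM = 5.000
READING_SD_MM = 0.010          # assumed random error of a single micrometer reading

readings = [random.gauss(TRUE_DIAMETER_MM, READING_SD_MM) for _ in range(25)]
mean = statistics.mean(readings)
sem = statistics.stdev(readings) / len(readings) ** 0.5   # Standard Error of the Mean

print(f"mean of 25 readings = {mean:.4f} mm, SEM = {sem:.4f} mm")
# The SEM shrinks with the square root of n -- but only because every reading
# is of the same fixed quantity, which is exactly the condition the article
# says a stream of ever-changing temperatures does not satisfy.
```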

Continue reading

Are Claimed Global Record-Temperatures Valid?

[This excellent article on the accuracy and precision of temperature data actually understates the problem. Less than 10% of official NWS stations meet required siting standards – the best stations have a precision of only +/- 1 degree Celsius! The worst are off by over +/- 5 degrees. -Bob]
By Clyde Spencer – Re-Blogged From http://www.WattsUpWithThat.com

The New York Times claims 2016 was the hottest year on record. Click for article.


Introduction

The point of this article is that one should not ascribe more accuracy and precision to available global temperature data than is warranted, after examination of the limitations of the data set(s). One regularly sees news stories claiming that the recent year/month was the (first, or second, etc.) warmest in recorded history. This claim is reinforced with a stated temperature difference or anomaly that is some hundredths of a degree warmer than some reference, such as the previous year(s). I’d like to draw the reader’s attention to the following quote from Taylor (1982):

“The most important point about our two experts’ measurements is this: like most scientific measurements, they would both have been useless, if they had not included reliable statements of their uncertainties.”

Before going any further, it is important that the reader understand the difference between accuracy and precision. Accuracy is how close a measurement (or series of repeated measurements) is to the actual value, and precision is the resolution with which the measurement can be stated. Another way of looking at it is provided by the following graphic:


The illustration implies that repeatability, or decreased variance, is a part of precision. It is, but more importantly, it is the ability to record, with greater certainty, where a measurement is located on the continuum of a measurement scale. Low accuracy is commonly the result of systematic errors; however, very low precision, which can result from random errors or inappropriate instrumentation, can contribute to individual measurements having low accuracy.

Accuracy

For the sake of the following discussion, I’ll ignore issues with weather station siting problems potentially corrupting representative temperatures and introducing bias. However, see this link for a review of problems. Similarly, I’ll ignore the issue of sampling protocol, which has been a major criticism of historical ocean pH measurements, but is no less of a problem for temperature measurements. Fundamentally, temperatures are spatially-biased to over-represent industrialized, urban areas in the mid-latitudes, yet claims are made for the entire globe.

There are two major issues with regard to the trustworthiness of current and historical temperature data. One is the accuracy of recorded temperatures over the useable temperature range, as described in Table 4.1 at the following link:

http://www.nws.noaa.gov/directives/sym/pd01013002curr.pdf

Section 4.1.3 at the above link states:

“4.1.3 General Instruments. The WMO suggests ordinary thermometers be able to measure with high certainty in the range of -20°F to 115°F, with maximum error less than 0.4°F…”

In general, modern temperature-measuring devices are required to provide a temperature accurate to about ±1.0° F (0.56° C) at their reference temperature, and not to be in error by more than ±2.0° F (1.1° C) over their operational range. Table 4.2 requires that the resolution (precision) be 0.1° F (0.06° C) with an accuracy of 0.4° F (0.2° C).

The US has one of the best weather monitoring programs in the world. However, the accuracy and precision should be viewed in the context of how global averages and historical temperatures are calculated from records, particularly those with less accuracy and precision. It is extremely difficult to assess the accuracy of historical temperature records; the original instruments are rarely available to check for calibration.

Precision

The second issue is the precision with which temperatures are recorded, and the resulting number of significant figures retained when calculations are performed, such as when deriving averages and anomalies. This is the most important part of this critique.

If a temperature is recorded to the nearest tenth (0.1) of a degree, the convention is that it has been rounded or estimated. That is, a temperature reported as 98.6° F could have been as low as 98.55 or as high as 98.64° F.

The general rule of thumb for addition/subtraction is that no more significant figures to the right of the decimal point should be retained in the sum, than the number of significant figures in the least precise measurement. When multiplying/dividing numbers, the conservative rule of thumb is that, at most, no more than one additional significant figure should be retained in the product than that which the multiplicand with the least significant figures contains. Although, the rule usually followed is to retain only as many significant figures as that which the least precise multiplicand had. [For an expanded explanation of the rules of significant figures and mathematical operations with them, go to this Purdue site.]

Unlike a case with exact integers, a reduction in the number of significant figures in even one of the measurements in a series increases uncertainty in an average. Intuitively, one should anticipate that degrading the precision of one or more measurements in a set should degrade the precision of the result of mathematical operations. As an example, assume that one wants the arithmetic mean of the numbers 50., 40.0, and 30.0, where the trailing zeros are the last significant figure. The sum of the three numbers is 120., with three significant figures. Dividing by the integer 3 (exact) yields 40.0, with an uncertainty in the next position of ±0.05 implied.

Now, what if we take into account the implicit uncertainty of all the measurements? For example, consider that, in the previously examined set, all the measurements have an implied uncertainty. The sum of 50. ±0.5 + 40.0 ±0.05 + 30.0 ±0.05 becomes 120. ±0.6. While not highly probable, it is possible that all of the errors could have the same sign. That means, the average could be as small as 39.80 (119.4/3), or as large as 40.20 (120.6/3). That is, 40.00 ±0.20; this number should be rounded down to 40.0 ±0.2. Comparing these results, with what was obtained previously, it can be seen that there is an increase in the uncertainty. The potential difference between the bounds of the mean value may increase as more data are averaged.
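The worked example above can be checked in a couple of lines (reproducing the same numbers with simple interval arithmetic):

```python
# Reproducing the worked example above with simple interval arithmetic.
measurements = [(50.0, 0.5), (40.0, 0.05), (30.0, 0.05)]   # (value, implied uncertainty)

total = sum(v for v, _ in measurements)
total_err = sum(e for _, e in measurements)

n = len(measurements)
low, high = (total - total_err) / n, (total + total_err) / n
print(f"sum  = {total:.1f} +/- {total_err:.1f}")
print(f"mean = between {low:.2f} and {high:.2f}  (i.e. 40.0 +/- 0.2)")
```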

It is generally well known, especially amongst surveyors, that the precision of multiple, averaged measurements varies inversely with the square-root of the number of readings that are taken. Averaging tends to remove the random error in rounding when measuring a fixed value. However, the caveats here are that the measurements have to be taken with the same instrument, on the same fixed parameter, such as an angle turned with a transit. Furthermore, Smirnoff (1961) cautions, “… at a low order of precision no increase in accuracy will result from repeated measurements.” He expands on this with the remark, “…the prerequisite condition for improving the accuracy is that measurements must be of such an order of precision that there will be some variations in recorded values.” The implication here is that there is a limit to how much the precision can be increased. Thus, while the definition of the Standard Error of the Mean is the Standard Deviation of samples divided by the square-root of the number of samples, the process cannot be repeated indefinitely to obtain any precision desired!

While multiple observers may eliminate systematic error resulting from observer bias, the other requirements are less forgiving. Different instruments will have different accuracies and may introduce greater imprecision in averaged values.

Similarly, measuring different angles tells one nothing about the accuracy or precision of a particular angle of interest. Thus, measuring multiple temperatures, over a series of hours or days, tells one nothing about the uncertainty in temperature, at a given location, at a particular time, and can do nothing to eliminate rounding errors. A physical object has intrinsic properties such as density or specific heat. However, temperatures are ephemeral and one cannot return and measure the temperature again at some later time. Fundamentally, one only has one chance to determine the precise temperature at a site, at a particular time.

The NOAA Automated Surface Observing System (ASOS) has an unconventional way of handling ambient temperature data. The User’s Guide says the following in section 3.1.2:

“Once each minute the ACU calculates the 5-minute average ambient temperature and dew point temperature from the 1-minute average observations… These 5-minute averages are rounded to the nearest degree Fahrenheit, converted to the nearest 0.1 degree Celsius, and reported once each minute as the 5-minute average ambient and dew point temperatures…”

This automated procedure is performed with temperature sensors specified to have an RMS error of 0.9° F (0.5° C), a maximum error of ±1.8° F (±1.0° C), and a resolution of 0.1° F (0.06° C) in the most likely temperature ranges encountered in the continental USA. [See Table 1 in the User’s Guide.] One (1. ±0.5) degree Fahrenheit is equivalent to 0.6 ±0.3 degrees Celsius. Reporting the rounded Celsius temperature, as specified above in the quote, implies a precision of 0.1° C when only 0.6 ±0.3° C is justified, thus implying a precision 3 to 9-times greater than what it is. In any event, even using modern temperature data that are commonly available, reporting temperature anomalies with two or more significant figures to the right of the decimal point is not warranted!
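The quoted reporting chain is easy to emulate (a small sketch of the rounding-then-converting procedure described above; the input temperatures are arbitrary):

```python
# Sketch of the ASOS reporting chain quoted above: round the Fahrenheit average
# to a whole degree, then convert and report to the nearest 0.1 degree Celsius.
def asos_reported_celsius(five_min_avg_f: float) -> float:
    rounded_f = round(five_min_avg_f)               # rounded to nearest degree F
    celsius = (rounded_f - 32.0) * 5.0 / 9.0
    return round(celsius, 1)                        # reported to nearest 0.1 C

for t_f in (71.3, 71.6, 72.4):
    print(f"{t_f} F -> reported {asos_reported_celsius(t_f)} C")
# 71.3 and 71.6 F land on different whole degrees and so differ by about half a
# degree C in the report, while the 0.1 C resolution implies far more precision
# than the +/- 1 F rounding (let alone the sensor's +/- 1.8 F error) can support.
```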

Consequences

Where these issues become particularly important is when temperature data from different sources, which use different instrumentation with varying accuracy and precision, are used to consolidate or aggregate all available global temperatures. Also, it becomes an issue in comparing historical data with modern data, and particularly in computing anomalies. A significant problem with historical data is that, typically, temperatures were only measured to the nearest degree (As with modern ASOS temperatures!). Hence, the historical data have low precision (and unknown accuracy), and the rule given above for subtraction comes into play when calculating what are called temperature anomalies. That is, data are averaged to determine a so-called temperature baseline, typically for a 30-year period. That baseline is subtracted from modern data to define an anomaly. A way around the subtraction issue is to calculate the best historical average available, and then define it as having as many significant figures as modern data. Then, there is no requirement to truncate or round modern data. One can then legitimately say what the modern anomalies are with respect to the defined baseline, although it will not be obvious if the difference is statistically significant. Unfortunately, one is just deluding themselves if they think that they can say anything about how modern temperature readings compare to historical temperatures when the variations are to the right of the decimal point!

Indicative of the problem is that data published by NASA show the same implied precision (±0.005° C) for the late-1800s as for modern anomaly data. The character of the data table, with entries of 1 to 3 digits with no decimal points, suggests that attention to significant figures received little consideration. Even more egregious is the representation of precision of ±0.0005° C for anomalies in a Wikipedia article wherein NASA is attributed as the source.

Ideally, one should have a continuous record of temperatures throughout a 24-hour period and integrate the area under the temperature/time graph to obtain a true, average daily temperature. However, one rarely has that kind of temperature record, especially for older data. Thus, we have to do the best we can with the data that we have, which is often a diurnal range. Taking a daily high and low temperature, and averaging them separately, gives one insight on how station temperatures change over time. Evidence indicates that the high and low temperatures are not changing in parallel over the last 100 years; until recently, the low temperatures were increasing faster than the highs. That means, even for long-term, well-maintained weather stations, we don’t have a true average of temperatures over time. At best, we have an average of the daily high and low temperatures. Averaging them creates an artifact that loses information.

When one computes an average for purposes of scientific analysis, conventionally, it is presented with a standard deviation, a measure of variability of the individual samples of the average. I have not seen any published standard deviations associated with annual global-temperature averages. However, utilizing Tchebysheff’s Theorem and the Empirical Rule (Mendenhall, 1975), we can come up with a conservative estimate of the standard deviation for global averages. That is, the range in global temperatures should be approximately four times the standard deviation (Range ≈ ±4s). For Summer desert temperatures reaching about 130° F and Winter Antarctic temperatures reaching -120° F, that gives Earth an annual range in temperature of at least 250° F; thus, an estimated standard deviation of about 31° F! Because deserts and the polar regions are so poorly monitored, it is likely that the range (and thus the standard deviation) is larger than my assumptions. One should intuitively suspect that since few of the global measurements are close to the average, the standard deviation for the average is high! Yet, global annual anomalies are commonly reported with significant figures to the right of the decimal point. Averaging the annual high temperatures separately from the annual lows would considerably reduce the estimated standard deviation, but it still would not justify the precision that is reported commonly. This estimated standard deviation is probably telling us more about the frequency distribution of temperatures than the precision with which the mean is known. It says that probably a little more than 2/3rds of the recorded surface temperatures are between -26. and +36.° F. Because the median of this range is 5.0° F, and the generally accepted mean global temperature is about 59° F, it suggests that there is a long tail on the distribution, biasing the estimate of the median to a lower temperature.
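The back-of-envelope standard-deviation estimate above works out as follows (simply restating the article's own Range ≈ ±4s arithmetic):

```python
# The back-of-envelope estimate above: if the full range spans roughly
# +/- 4 standard deviations about the mean (about 8s in total),
# the range alone gives a rough standard deviation.
T_MAX_F = 130.0    # hottest summer desert temperatures cited
T_MIN_F = -120.0   # coldest Antarctic winter temperatures cited

temperature_range = T_MAX_F - T_MIN_F          # 250 F
estimated_sd = temperature_range / 8.0         # range ~ 8 standard deviations
print(f"range = {temperature_range:.0f} F, estimated s ~ {estimated_sd:.0f} F")
# Roughly 31 F, as stated -- huge compared with anomalies quoted to 0.01 F.
```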

Summary

In summary, there are numerous data handling practices, which climatologists generally ignore, that seriously compromise the veracity of the claims of record average-temperatures, and are reflective of poor science. The statistical significance of temperature differences with 3 or even 2 significant figures to the right of the decimal point is highly questionable. One is not justified in using the approach of calculating the Standard Error of the Mean to improve precision, by removing random errors, because there is no fixed, single value that random errors cluster about. The global average is a hypothetical construct that doesn’t exist in Nature. Instead, temperatures are changing, creating variable, systematic-like errors. Real scientists are concerned about the magnitude and origin of the inevitable errors in their measurements.

CONTINUE READING –>

Climate Science and Climatology; Forest and Trees

By Dr. Tim Ball – Re-Blogged From http://www.WattsUpWithThat.com

Many jumped to the defense of Dr. John Bates, the former NOAA employee who waited until he retired to disclose malfeasance in the science and management at that agency. Bates claimed he told his bosses about the problem but said they effectively ignored him. The problem is everything was already in the public record. I listened to Congressman Lamar Smith tell the general audience at the June 2015 Heartland Conference that subpoenas were filed requesting full disclosure of all the material. He also told us the requests were rejected, but a follow-up was in progress. The same information was reported in the mainstream media, albeit with a bias. Why didn’t Bates go to Smith in confidence and report what he knew? The Smith requests must have been the talk of the office or at least the water cooler.

Continue reading