Once upon a time, before massive government funding, science involved a mix of guessing a theory to explain a natural phenomenon and doing a lot of grunt work to see if the first guy’s theory held up to testing.
To some degree that still is true. However, the theorizing part is sexier, and more likely to gain grant money. Since the grant money frequently comes from government wonks who have some kind of agenda, or at least a bias, government drives the direction of research in many fields.
Much of the checking to see if the theory is correct is done by tens of thousands of skeptical researchers, using their own funds. Because they do this for the love of it rather than to make a living, they are considered amateurs. Amateurs have their own biases, of course; they're human, just like the paid staff.
These amateurs have given us great advances in knowledge, such as more accurate figures for the circumference of the Earth, the speed of light, and the nature of heredity. Not all amateurs do good work though – just as not all of the pros do good work.
Just about the only way that you or I can tell whether a finding is worth anything at all, with our limited knowledge of the field, is to check for two things:
- Does the research use logical fallacies (like name calling)?
- Is there data which can be verified and replicated?
It’s easy to criticize someone’s theory by saying, “Jake is an idiot, so his theory is wrong,” or “Mary’s theory that there are rocks on the moon is wrong because government scientists say that the moon is made out of green cheese.” While Jake’s and Mary’s theories indeed may be wrong, the “research” cited cannot be used as a justification.
If there is data, someone still needs to verify that it's correct and makes sense. For example, there was a long-standing claim linking fashion and stock markets, which went something like this: “When women’s hemlines are going up, the markets will go up, and when the hemlines fall, stock prices will tumble.” There may be data on each piece of the comparison, but there still is no demonstration of cause and effect.
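The hemline point is easy to demonstrate numerically: any two series that both trend upward will correlate strongly, whether or not one causes the other. Here is a minimal sketch using made-up numbers (these are not real hemline or market figures):

```python
# Toy demonstration: two unrelated upward-trending series will show a
# high Pearson correlation, yet that proves nothing about causation.

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient for two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical yearly figures -- invented for illustration only.
hemline_inches = [18, 19, 19, 21, 22, 24, 25, 25, 27, 28]
stock_index = [900, 950, 1000, 1100, 1150, 1300, 1350, 1400, 1500, 1600]

r = pearson_r(hemline_inches, stock_index)
print(f"correlation: {r:.2f}")  # very high, and still meaningless as a cause
```

A high correlation is the starting point of an investigation into a mechanism, not the end of one.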
And just because someone calls it data doesn’t make it data. In the Global Warming Alarmism industry, model output usually is called data. But since that model output, and forecasts made from it, do not match real-world observations, model output is CRAP.
Sometimes researchers will try to hide the fact that they’re using models by calling it a “Reanalysis.” When you see reanalysis, think model, and then think CRAP.
But if you actually have data, you still aren’t home free. The data needs to be checked to see if it makes sense, and to see how much error might be included. For example, if somebody were to say that the average Dow Jones Industrial Index during the 1900s was 5,000, it would be important for you to know that the low was 32 and the high was 10,000.
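That point can be shown in a few lines: two series with the same mean can have wildly different spreads, so an average reported without its range hides most of the story. The numbers below are invented to echo the 32-to-10,000 illustration in the text:

```python
# Sketch: a mean alone hides the spread. Two series with the same
# average of 5,000 can behave completely differently.

def summarize(values):
    """Return (mean, low, high) so the spread travels with the average."""
    return (sum(values) / len(values), min(values), max(values))

steady = [4990, 5000, 5010, 5000]      # hugs the mean the whole time
volatile = [32, 4984, 4984, 10000]     # same mean, enormous range

for name, series in [("steady", steady), ("volatile", volatile)]:
    mean, low, high = summarize(series)
    print(f"{name}: mean={mean:.0f}, range={low}..{high}")
```

Both series print a mean of 5,000; only the range reveals that one of them swung from 32 to 10,000.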
One of the many unsung heroes that I follow semi-regularly is Willis Eschenbach. He works on his own dime, he shows how he got his results, he suggests what his results mean, and when he doesn’t know, he tells you he doesn’t know.
He’s done a piece on the Argo buoys, which are used to measure Ocean Heat Content. I’ve written before about how this data has been used to try to explain away the 18+ years of zero global temperature rise. The theory says, “The Oceans Ate My Global Warming!”
Willis uses satellite data to see if the Argo data makes sense. His results show that the error bars – the noise built into the collection process – make this data unfit to use for this purpose.
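The underlying logic of that kind of error-bar check can be sketched in a couple of lines: if the trend you claim to measure is smaller than the uncertainty of the measurement itself, the data cannot support the claim. The figures below are purely illustrative, not Argo or satellite numbers:

```python
# Minimal sketch of a signal-vs-noise sanity check. A claimed trend
# smaller than its error bars is indistinguishable from noise.

def trend_detectable(trend, uncertainty):
    """True only if the signal clearly exceeds the measurement uncertainty."""
    return abs(trend) > uncertainty

claimed_trend = 0.02   # hypothetical trend, in arbitrary units per decade
error_bar = 0.10       # hypothetical measurement uncertainty, same units

print(trend_detectable(claimed_trend, error_bar))  # False: lost in the noise
```

This is the simplest possible version of the test; real analyses weigh the uncertainty statistically rather than with a single cutoff, but the principle is the same.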