Big Trouble with Spiders

By Kip Hansen — Re-Blogged From WUWT

How deeply have you considered the social life of spiders? Are they social animals or solitary animals? Do they work together? Do they form social networks? Does their behavior change over time, as in the “adaptive evolution of individual differences in behavior”?

In yet another blow to the sanctity of peer-reviewed science, and a simultaneous win for personal integrity and the self-correcting nature of science, there is an ongoing tsunami of retractions in a field of study that most of us have never even heard of.


Science magazine online covers part of the story in “Spider biologist denies suspicions of widespread data fraud in his animal personality research”:

“It’s been a bad couple of weeks for behavioral ecologist Jonathan Pruitt—the holder of one of the prestigious Canada 150 Research Chairs—and it may get a lot worse. What began with questions about data in one of Pruitt’s papers has flared into a social media–fueled scandal in the small field of animal personality research, with dozens of papers on spiders and other invertebrates being scrutinized by scores of students, postdocs, and other co-authors for problematic data.

Already, two papers co-authored by Pruitt, now at McMaster University, have been retracted for data anomalies; Biology Letters is expected to expunge a third within days. And the more Pruitt’s co-authors look, the more potential data problems they find. All papers using data collected or curated by Pruitt, a highly productive researcher who specialized in social spiders, are coming under scrutiny and those in his field predict there will be many retractions.”

The story is both a cautionary tale and an inspiring lesson of courage in the face of professional setbacks — one of each for the different players in this drama.

I’ll start with Jonathan Pruitt, who is described as “a highly productive researcher who specialized in social spiders.” Pruitt was a rising star in his field, and his success led to his being offered “one of the prestigious Canada 150 Research Chairs” at McMaster University in Hamilton, Ontario, Canada, where he is listed in the psychology department as the Principal Investigator at “The Pruitt Lab.” The Pruitt Lab’s home page tells us:

“The Pruitt Lab is interested in the interactions between individual traits and the collective attributes of animal societies and biological communities. We explore how the behaviors of individual group members contribute to collective phenotypes, and how these collective phenotypes in turn influence the persistence and stability of collective units (social groups, communities, etc.). Our most recent research explores the factors that lead to the collapse of biological systems, and which factors may promote systems ability to bounce back from deleterious alternative persistent states.”

This field of study is often referred to as behavioral ecology. In terms of research methodology, it is a difficult field — one cannot, after all, simply administer a series of personality tests to various groups of spiders or fish or birds or amphibians. Experimental design is difficult and not standardized within the field; observations are in many cases, by necessity, quite subjective.

We have seen a recent example in the Ocean Acidification (OA) papers concerning fish behavior, in which a three-year effort failed to replicate the alarming findings about effects of ocean acidification on fish behavior. The team attempting the replication took care to record and preserve all the data. As Science reports: “It’s an exceptionally thorough replication effort,” says Tim Parker, a biologist and an advocate for replication studies at Whitman College in Walla Walla, Washington. Unlike the original authors, the team released video of each experiment, for example, as well as the bootstrap analysis code. “That level of transparency certainly increases my confidence in this replication,” Parker says.

The fish behavior study is of the same nature as the Pruitt studies involving social spiders. Someone has to watch the spiders under the varied conditions, make decisions about perceived differences in behavior, record those differences, and in some cases time behavioral responses to stimuli. The results of these types of studies are in some cases entirely subjective — thus, in the OA replication, we see the care and effort to video the behaviors so that others would be able to make their own subjective evaluations.

The trouble for Pruitt came about when one of his co-authors was alerted to possible problems with data in a paper she wrote with Pruitt in 2013 (published in the Proceedings of the Royal Society B in January 2014) titled “Evidence of social niche construction: persistent and repeated social interactions generate stronger personalities in a social spider”.

That co-author is Dr. Kate Laskowski, who now runs her own lab at the University of California, Davis. She was, at the time the paper was written, a PhD candidate. I’ll let you read her story — it is inspiring to me — as she tells it in a blog post titled “What to do when you don’t trust your data anymore”. Read the whole thing; it might restore your faith in science and scientists.

Here’s her introduction:

“Science is built on trust. Trust that your experiments will work. Trust in your collaborators to pull their weight. But most importantly, trust that the data we so painstakingly collect are accurate and as representative of the real world as they can be.”

“And so when I realized that I could no longer trust the data that I had reported in some of my papers, I did what I think is the only correct course of action. I retracted them.”

“Retractions are seen as a comparatively rare event in science, and this is no different for my particular field (evolutionary and behavioral ecology), so I know that there is probably some interest in understanding the story behind it. This is my attempt to explain how and why I came to the conclusion that these papers needed to be removed from the scientific record.”

How did this happen? The short story is that after meeting and talking with Jonathan Pruitt at a conference in Europe, Laskowski received from him “a datafile containing the behavioral data he collected on the colonies of spiders testing the social niche hypothesis.” Laskowski relates that the data looked good and clearly showed “strong support for the social niche hypothesis”. With such clear data, she easily wrote a paper.

“The paper was published in Proceedings of the Royal Society B (Laskowski & Pruitt 2014). This then led to a follow-up study published in The American Naturalist showing how these social niches actually conferred benefits on the colonies that had them (Laskowski, Montiglio & Pruitt 2016). As a now newly minted PhD, I felt like I had successfully established a productive collaboration completely of my own volition. I was very proud.”

The situation was a dream come true for a young researcher — and her subsequent excellent work brought her to UC Davis, where she established her own lab. Then….

“Flash forward now to late 2019. I received an email from a colleague who had some questions about the publicly available data in the 2016 paper published in Am Nat. In this paper we had measured boldness 5 times prior to putting the spiders in their familiarity treatment and then 5 times after the treatment.

The colleague noticed that there were duplicate values in these boldness measures. I already knew that the observations were stopped at ten minutes, so lots of 600 values were expected (the max latency). However, the colleague was pointing out a different pattern – these latencies were measured to the hundredth of a second (e.g. 100.11) and many exact duplicate values down to two decimal places existed. How exactly could multiple spiders do the exact same thing at the exact same time?”

Laskowski performed a forensic deep-dive into the data and discovered problems such as these (highlights indicate unlikely duplications of exact values; see Laskowski’s blog post for larger images and more information):

[Image: excerpts from the spreadsheet with suspect duplicated values highlighted]
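For readers curious what such a check actually involves, here is a minimal sketch in Python of the kind of duplicate screening described above. The file and column names (“spider_boldness.csv”, “latency”) are hypothetical; this illustrates only the general technique, not Laskowski’s actual analysis:

import pandas as pd

# Load the behavioral data (hypothetical file and column names).
df = pd.read_csv("spider_boldness.csv")

# Latencies capped at the 600-second maximum are expected to repeat,
# so exclude them before hunting for suspicious duplicates.
uncapped = df[df["latency"] < 600]

# Count how often each exact latency value (to 0.01 s) appears.
counts = uncapped["latency"].value_counts()

# A value shared by two or more spiders, down to the hundredth of a
# second, is improbable by chance and worth a closer look.
suspects = counts[counts > 1]
print(f"{len(suspects)} latency values appear more than once:")
print(suspects.sort_values(ascending=False).head(20))

Simple counts of this sort are what exposed the problems: runs of exactly repeated values that no plausible measurement process would produce.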

Remember, Laskowski’s paper was not based on data that she had collected herself, but on data provided to her by a respected senior scientist in the field, Jonathan Pruitt.  It was data collected by Pruitt personally, not as part of a research team, but by himself.  And that point turns out to be pivotal in this story.

Let me be clear: I am not accusing Jonathan Pruitt of falsifying or manufacturing the data contained in the data file sent to Laskowski — I have not investigated the data closely myself. Pruitt is reported to be doing field work in northern Australia and Micronesia at present, and communications with him have been sketchy, inhibiting full investigations by the journals involved. Despite his absence, there are serious efforts to look into all the papers that involve data from Pruitt. Science magazine reports: “All papers using data collected or curated by Pruitt, a highly productive researcher who specialized in social spiders, are coming under scrutiny and those in his field predict there will be many retractions.” [ source ]

A blog that covers this field of science, Eco-Evo Evo-Eco, has posted a two-part series related to data integrity: Part 1 and Part 2. In addition, there are two specific posts on the “Pruitt retraction storm” [ here and here ], both written by Dan Bolnick, who is editor-in-chief of The American Naturalist. That journal has already retracted one paper based on data supplied by Pruitt, at Laskowski’s request.

In one of the discussions this situation has spawned, Steven J. Cooke, of the Institute of Environmental and Interdisciplinary Science, Carleton University, Ottawa, Canada, opined:

“As I reflect on recent events, I am left wondering how this could happen.  A common thread is that data were collected alone.  This concept is somewhat alien to me and has been throughout my training and career.  I can’t think of a SINGLE empirically-based paper among those that I have authored or that has been done by my team members for which the data were collected by a single individual without help from others.  To some this may seem odd, but I consider my type of research to be a team sport.  As a fish ecologist (who incorporates behavioural and physiological concepts and tools), I need to catch fish, move them about, handle them, care for them, maintain environmental conditions, process samples, record data, etc – nothing that can be handled by one person without fish welfare or data quality being compromised.” 

It wasn’t long ago that we saw this same element in another retraction story — that of Oona Lönnstedt, who was found to have “fabricated data for the paper, purportedly collected at the Ar Research Station on Gotland, an island in the Baltic Sea.” Science Magazine quotes Peter Eklöv, Lönnstedt’s supervisor and co-author, in this Q & A:

Q: The most important finding in the new report is that Lönnstedt didn’t carry out the experiments as described in the paper; the data were fabricated. How could that have happened?

A: It is very strange. The history is that I trusted Oona very much. When she came here she had a really good CV, and I got a very good recommendation letter—the best I had ever seen.

In the case of Jonathan Pruitt, the evidence is not yet all in. Pruitt has not had a chance to fully give his side of the story or to explain exactly how the data he collected alone could reasonably contain so many implausible duplications of overly exact measurements. I have no wish to convict Jonathan Pruitt in this brief overview essay.

But the issue raised is important and applies far beyond this one field. It can inform us of a great danger to the reliability of scientific findings and the integrity of science in general.

When a single researcher works alone, without the interaction and support of a research team, there is a danger that shortcuts will be taken, justified by excuses made only to himself, leading to data that are inaccurate or even simply filled in with expected results for convenience. Dick Feynman’s warning about scientists “fooling themselves”, with a twist.

Detailed research is not easy — and errors can be and are made.  Data files can become corrupted and confused.  The accidental slip of a finger on a keyboard can delete an hour’s careful spreadsheet reformatting or cast one’s carefully formatted data into oblivion.  And scientists can become lazy and fill in data where none was actually generated by experiment.  A harried researcher might find himself “forced” to “fix up” data that isn’t returning the results required by his research hypothesis, which he “knows” perfectly well is correct.  In other cases, we find researchers actively hiding data and methods from review and attempted validation by others, out of fear of criticism or failure to replicate.

There are major efforts afoot to reform the practice of scientific research in general. Suggestions include requiring pre-registration of studies: their designs, methodologies, statistical methods to be applied, end points, and hypotheses to be tested, all posted to online repositories that can be reviewed by peers even before any data are collected. Searching the internet for “saving science”, “research reform” and the “reproducibility crisis” will get you started. Judith Curry, at Climate Etc., has covered the issue over the years.

Bottom Line:

Scientists are not special and they are not gods — they are human just like the rest of us.  Some are good and honorable, some are mediocre, some are prone to ethical lapses.  Some are very careful with details, some are sloppy, all are capable of making mistakes.  This truth is contrary to what I was led to believe as a child in the 1950s, when scientists were portrayed as a breed apart — always honest and only interested in discovering the truth.  I have given up that fairy-tale version of reality.

The fact that some scientists make mistakes and that some scientists are unethical should not be used to discount or dismiss the value of Science as a human endeavor.  Despite these flaws, Science has made possible the advantages of modern society.

Those brave men and women of science who risk their careers and their reputations to call out and retract bad science, like Dr. Laskowski, have my unbounded admiration and appreciation.

# # # # #

Author’s Comment:

I hope readers can avoid leaving an endless stream of comments about how this-that-and-the-other climate scientist has faked or fudged his data.  I don’t personally believe that we have had many proven cases of such behavior in the field.   Climate Science has its problems: data hiding and unexplained or unjustified data adjustments have been among those problems.

The desire to “improve the data” must be tremendously tempting for researchers who have spent their grant money on a lengthy project only to find the data barely adequate or inadequate to support their hypothesis.  I sympathize but do not condone acting on that temptation.
