Plagiarism and Open Science

While I’m on the fence regarding the calls for open data* (most recently by PLOS ONE), this point by DrugMonkey is something that’s always bothered me (boldface mine):

The second incident has to do with accusations of self-plagiarism based on the sorts of default Methods statements or Introduction and/or Discussion points that get repeated. Look there are only so many ways to say “and thus we prove a new facet of how the PhysioWhimple nucleus controls Bunny Hopping”. Only so many ways to say “The reason BunnyHopping is important is because…”. Only so many ways to say “We used optogenetic techniques to activate the gertzin neurons in the PhysioWhimple nucleus by….”. This one is particularly salient because it works against the current buzz about replication and reproducibility in science. Right? What is a “replication” if not plagiarism? And in this case, not just the way the Methods are described, the reason for doing the study and the interpretation. No, in this case it is plagiarism of the important part. The science. This is why concepts of what is “plagiarism” in science cannot be aligned with concepts of plagiarism in a bit of humanities text.

The whole self-plagiarism concept has always bothered me. What it forces scientists to do is this: having found an optimal (or just pretty good) way of describing something, especially methods, we are now forced to rewrite it in a crappier version. Even with methods, there are times you don’t want to just write “We used optogenetic techniques to activate the gertzin neurons in the PhysioWhimple nucleus as described in Humperdink et al. (2010).” One often wants to at least summarize the methods, so the reader doesn’t have to search through several papers to replicate them–or to justify within the paper the methods that were used.

The point is to efficiently communicate information: we aren’t trying to write prize-winning fiction (or even a sucky blog). Cries of self-plagiarism get in the way of that.

*The best argument for open data comes from economics: the Reinhart and Rogoff paper that was used to justify austerity. Other economists had tried to obtain the underlying data, and when they finally did (after three years), the paper’s conclusions turned out to be seriously flawed. Which cost a lot of non-economists, not Reinhart or Rogoff, their jobs. Oops.


4 Responses to Plagiarism and Open Science

  1. dr2chase says:

    All true, but there are times it would not hurt to put the blah-blah-blah in a subtly different font, so that we can skip to the good bits.

  2. MarkK says:

    With machine plagiarism detection, could a standard be set where a section of a paper is marked as boilerplate or a “common methods” section, or something like that, so the machines ignore what is in that section?

    The original point of papers was to communicate. If having some standard method described right there, so a reader can check it but can skip on without having to click elsewhere, is valuable, then it shouldn’t be considered bad.

  3. Min says:

    I understand the problem of self-plagiarism in undergraduate papers. Otherwise you may get credit for something more than once. I had no idea that it was frowned upon in scientific papers when the claim is precisely to have done what was done before.

  4. Newcastle says:

    Hmm, I guess we’re guilty of self-plagiarism, because we have used boilerplate methods sections for things like IHC & westerns and PCR. The methods used were identical even if the overall research was different. Having just wasted much of the afternoon tracking down the true definition of a plasmid construct mentioned in a review article that referenced a second review article, which referenced a paper that referenced a second paper, which referenced supplementary materials in a third paper that referenced a fourth paper’s supplementary materials, before I finally found the damned answer (insert rage here)–well, a little self-plagiarism would be a good thing.
