On academic fraud

Categories: Open Access
Published on: November 23, 2012

Mind-blowing, outrageous criminal fraud is rare in scientific research, but questionable practices such as turning a blind eye to contradictory data or failing to report anomalies are commonplace in laboratories worldwide. The building tide of retractions, and the growing need in big data experiments to see all data, including anomalies and negative results, have brought these activities into the limelight, and preventing fraud in research has become an important issue for modern science.

Medical trials are frequently brought up in discussions about scientific fraud. It is easy to see why they are publicised, as inaccurate trials can cause thousands of people to have unnecessary or even harmful treatment. But fraud occurs in all science, and in non-medical research there are less emotive, but still serious, consequences of fraudulent activity.

I may be naïve, but I think in plant science and other areas, where the stakes are not as high as in medical research, most scientists do not deliberately reject data and only publish the minority of results that fit a favourite hypothesis. They do actively bury projects that just don’t go anywhere, as evidenced by many frustrated PhD students with a thesis full of negative results but no publications. As Ed Yong put it in the SpotOn London session ‘Fixing the Fraud,’ negative results are becoming an endangered species.

Causing someone to unwittingly repeat a doomed experiment because you never published your own perceived failure wastes time and effort, but I would hesitate to call it fraud. An excellent example is described by Jim Caryl in his SciLogs blog The Gene Gym. Jim recently published his finding that a gene identified in 1996 as a tetracycline resistance gene is actually a plasmid replication gene, and does not confer any kind of antibiotic resistance. The original authors did not set out to deceive, and the scientists who have used the gene in their research since must have had negative results, which they did not publish.

Another bad practice, which I suspect is fairly common to varying extents, is biased analysis of results. While this is definitely fraud, it has been overlooked up until now because journals demand conclusive, statistically significant data for papers – as Jim Caryl found when he struggled to get his important negative result published. Over the last decade, big data experiments requiring raw datasets have become the norm, and authors are usually obliged by their funders or by publishers to provide all their raw data. Yet frequently, datasets are not added to a suitable open access repository (Piwowar, 2011), and a 2009 study showed that the rules governing open data are poorly enforced (Savage and Vickers, 2009).

At the SpotOn London session ‘Fixing the Fraud’, the panelists briefly discussed the causes of fraud, pointing at the usual, and probably genuinely responsible, culprits: the pressure on researchers to publish in the most respected journal they can manage, and the requirements of journals that publish only positive results pointing to significant, clear conclusions. They then discussed three ways of reducing fraud in science: first, a shift from results-oriented to methods-focussed academic publishing; second, encouraging, or even obliging, replication of published data; and finally, establishing open data practices.

If academic publishers focused more on sound experimental set-up and reliable methods than on the data generated, the temptation to manipulate data to fit a hypothesis would be reduced and publication of negative results would be encouraged. Cortex, a psychology journal, is about to adopt this model: the editors will start peer-reviewing the rationale and methods of some papers before the data is generated. Changing general practice will require a huge culture shift, however, and it will take time for this approach to seep up to the high-impact journals.

The second idea discussed was making replication of experiments research in its own right. One panelist suggested encouraging PhD students to devote a chapter of their thesis to replicating a published experiment, and it was also suggested that journals have a section for data replication. While the fear of being found out as a fraud might encourage researchers to be scrupulously honest about their data, replication is impossible for much of modern science. Most research projects involve multiple researchers working over a timescale of months or years. Although a published method should be optimised and ready to repeat, in reality each laboratory has different equipment and skills, and every replication attempt would require fine-tuning. In biology, discrepancies could be explained away by different environmental conditions in different laboratories.

Fully open data publication would eliminate the need for data replication. If it were enforced, the methods and all raw data could be examined, compared and, if necessary, re-analysed. A suitably qualified expert should be able to identify falsifications or omissions. Even a complex systems biology project, like the paper I highlighted two weeks ago, could be scrutinised and re-assessed.

The problem with any attempt to prevent fraud is enforcing honesty. How will Cortex ensure that research groups publish the data from their proposed, peer-reviewed rationale rather than burying a perceived non-story, or even taking a different story to a higher-impact journal? How would a PI guarantee full disclosure of their group’s data? Wouldn’t a student attempting to replicate an experiment from a leading name in their field be tempted simply to confirm the original results?

Discussions about scientific fraud and the problem of ‘endangered’ negative results are a sign of positive progress in science. Modern science generates large-scale data that will be re-used and re-analysed for years to come, so accuracy is more important than ever. Current and future anti-fraud policies from academic publishers and funders are necessary, but everyone doing research, or on its periphery, must contribute to a culture that makes fraudulent practices completely unacceptable.

Image credit: Shinealight, via Wikimedia Commons. 


