On academic fraud

Categories: Open Access
Published on: November 23, 2012

Outright criminal fraud is rare in scientific research, but fraudulent practices such as turning a blind eye to contradictory data or failing to report anomalies are commonplace in laboratories worldwide. The rising tide of retractions, together with the growing need in big data experiments to see all data, including anomalies and negative results, has brought these practices into the limelight, and preventing fraud in research has become an important issue for modern science.

Medical trials are frequently brought up in discussions about scientific fraud. It is easy to see why they are publicised: inaccurate trials can cause thousands of people to receive unnecessary or even harmful treatment. But fraud occurs in all science, and in non-medical research there are less emotive, but still serious, consequences of fraudulent activity.

I may be naïve, but I think that in plant science and other areas where the stakes are not as high as in medical research, most scientists do not deliberately reject data and publish only the minority of results that fit a favourite hypothesis. They do, however, actively bury projects that just don’t go anywhere, as evidenced by the many frustrated PhD students with a thesis full of negative results but no publications. As Ed Yong put it in the SpotOn London session ‘Fixing the Fraud’, negative results are becoming an endangered species.

Causing someone to unwittingly replicate doomed experiments because you did not publish a perceived failure may waste their time and effort, but I would hesitate to call it fraud. An excellent example is described by Jim Caryl in his SciLogs blog The Gene Gym. Jim recently published his finding that a class of tetracycline resistance genes identified in 1996 was actually a plasmid replication gene, and did not confer any kind of antibiotic resistance. The original authors did not set out to deceive, and the scientists who later used the gene in their research must have obtained negative results, which they did not publish.

Another bad practice, which I suspect is fairly common to varying extents, is biased analysis of results. While this is definitely fraud, it has been overlooked until now because papers demand conclusive, statistically significant data. Again, Jim Caryl is an example: he struggled to get his important negative result published. Over the last decade, big data experiments requiring raw datasets have become the norm, and authors are usually obliged by their funders or by publishers to provide all their raw data. Yet frequently datasets are not deposited in a suitable open access repository (Piwowar, 2011), and a 2009 study showed that the rules governing open data are poorly enforced (Savage and Vickers, 2009).

At the SpotOn London session on Fixing the Fraud, the panelists briefly discussed the causes of fraud, pointing at the usual, and probably chiefly responsible, culprits: the pressure on researchers to publish in the most respected journal they can manage, and the requirements of journals that publish only positive results pointing to significant, clear conclusions.

SpotOn London 2012 in brief

Published on: November 15, 2012

This weekend, Ruth and I were in London for SpotOn London 2012 at the Wellcome Trust. There were too many incredible sessions to attend, let alone to cover on this little blog – but all the talks were recorded and you can see them on the SpotOn YouTube channel. There will be Storifies aplenty before the end of the week, which I will tweet if they cross my path.

I plan to write at least one ‘proper’ post about the sessions I attended, but for now here are some brief summaries of the topics most discussed in the sessions I attended at SpotOn 2012.

Open data: All the speakers and delegates assumed that everyone else understood and supported open access publishing. More interesting were the discussions of other issues in open science – digital licensing, openness in peer review, accessibility of raw data. A longer blog post on this is forthcoming, but I recommend Ross Mounce’s blog, in particular this post on price and ‘openness’ in open access journals, for more information about open science.

Crowd-funding: Around the fringes of publicly funded science are small projects supported by funds raised by the researchers themselves. Crowd-funded science is very much in the minority, but in the UK the University of Buckingham has survived for over thirty years without government support, including its research programmes. For crowd-funding, excellent marketing and PR are crucial. If you have a public-good, sexy, relatively low-cost research project on your to-do list, and you have a flair for public relations and promotion, it is worth considering. You also need to be able to reward donations in some small way. Check out the crowd-funded projects by Matthew Partridge (Cranfield University) and Ethan Perlstein (Princeton) to find out more, or donate to their projects. Kickstarter is the best-known platform for raising funds.


