Solving The Research Integrity Crisis

May 20, 2013 | Posted by Elizabeth in Research

Earlier this month, I had the pleasure of attending the third World Conference on Research Integrity in Montreal, which brought together thought leaders on research integrity and the responsible conduct of research. The Conference covered issues including the factors that contribute to fabrication and systemic dishonesty, potential solutions such as better training and support for whistleblowers, and broader incentives to change the research culture.

Given the range of opinions offered at the Conference, I felt it important to review them here. Consolidating the themes and proposals presented allows for a discussion of strategies to build more effective solutions to the problem of research integrity.

The Problem: Fabrication or Dishonesty?

In discussing the issue of “Research Integrity”, it was first essential to define the parameters of the discussion.

For many of the attendees, the problems surrounding research integrity were narrowly defined as research misconduct, meaning fabrication, falsification, and plagiarism (FFP). At one point Jim Kroll, the Head of Administrative Investigations at the NSF Office of Inspector General, contended that the NSF was solely focused on examining research misconduct, not research quality.

Such an exclusive focus on FFP behaviors has its limitations, and may be an ineffective approach when those behaviors are not clearly defined. Daniele Fanelli, research fellow at the University of Edinburgh, has written explicitly about this issue (Redefine Misconduct as Distorted Reporting, Nature), arguing that research misconduct needs to be defined more broadly to encompass issues such as selective reporting of data. Such selective reporting, Fanelli contends, contributes much more to the problem than fabrication or even plagiarism, which are committed by only a few big actors a year.

This same point was made by Dan Ariely, professor of behavioral economics at Duke University and author of The (Honest) Truth About Dishonesty. In his studies on dishonesty, Ariely found that most people cheated a little bit, and very few people cheated a lot. The few big cheaters cost roughly $200 in total, compared with the many small cheaters who collectively cost about $20,000, a hundred times more. By analogy, our strategies for ensuring research integrity consist of trying to catch the few ‘big cheaters’, when in reality the damage is done mostly by a much larger pool of ‘small cheaters’, who collectively harm the integrity of the research produced far more.

Misaligned Solutions: Whistleblowers, Punishment, and Training

A number of solutions were proposed at the Conference to tackle concerns of research integrity.

Past strategies have focused on catching big cheaters through a reliance on whistleblowers, many of whom are students in the labs of the ‘big cheaters’. Understandably, it is incredibly difficult for a student in this position to report such behavior, which calls the efficacy of the approach into question. And even if an alternative source of whistleblowers could be found, an allegation-based system is hardly an effective way to ‘police’ the entire research enterprise.

Other punitive measures were also proposed, such as mandatory prison sentences for research misconduct. But such solutions ignore the fact that long-term penalties have consistently proven to be ineffective deterrents to immediate misconduct. Ariely gave the poignant example of capital punishment, which has no proven effect in deterring violent crime: states with the death penalty actually have higher rates of violent crime than those without.

Another strategy widely discussed at the Conference was better research training, but this again ignores the systemic problems underlying misconduct. Donald Kornfeld of Columbia University examined 146 Office of Research Integrity reports from 1992-2003 (Research Misconduct: The Search For A Remedy, Acad Med) and found that all of the researchers involved in misconduct knew that what they were doing was ‘wrong’. They did not need training to tell them not to fabricate, falsify, or plagiarize work.

Plagiarism, though, is a different form of misconduct and should be classified separately from fabrication and falsification. It is possible to educate or train people about plagiarism and how to cite work correctly, and it is fairly easy to catch bad actors with software like iThenticate, which is already in widespread use. It is fabrication and falsification that still need effective solutions, and training may not be effective in preventing those actions.

Improving The Research Culture

To find sustainable solutions to the problem of research integrity, it is more effective to take a step back and ask why we have such problems to begin with.

Ultimately it comes down to the incentive system, which in academic research rewards scientists for publishing lots of positive results, with little incentive to demonstrate the reproducibility or robustness of outcomes. John Ioannidis, Professor of Epidemiology at Stanford University, who showed that over 90% of oncology studies report only positive outcomes (Almost All Articles on Cancer Prognostic Markers Report Statistically Significant Results, EJC), succinctly noted: “If you reward researchers for publishing positive results, that’s all you get”. Researchers will selectively report, manipulate or even fabricate data to get there, and at a shockingly high rate. Daniele Fanelli found in a recent meta-analysis that 1.97% of scientists admitted to fabricating or falsifying their data at least once, and up to 33.7% admitted other questionable practices (How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data, PLOS ONE). When asked about their colleagues, 14.12% of respondents reported falsification, and up to 72% reported other questionable research practices.

Accordingly, a sustainable strategy to solve the problem of research integrity is to change the incentive structure of research culture to reward integrity, a point firmly made by Dan Ariely as well. In Ariely’s studies, when participants witnessed others cheating openly, they were much more likely to cheat, and to cheat more. Combined with Fanelli’s finding that up to 72% of scientists report having observed colleagues engage in questionable research practices, it is no wonder that concerns about FFP have become systemic in scientific research.

The current research environment can consequently be seen as one that encourages ‘cheating’. Researchers who publish lots of papers in top journals are rewarded, leading to a high incidence of ‘sloppy’ results (Must Try Harder, Nature) and irreproducible outcomes (Why Most Published Research Findings Are False, PLOS Medicine). There is little incentive for a researcher to focus on publishing validated or robust research, given the heavy emphasis on novel or exciting outcomes that lead to top publications and, in turn, top faculty positions.

Further, there is very little downside to publishing papers with sloppy or irreproducible outcomes. Retractions may be a major ‘red letter’ to most scientists, and perceived as an indication of fraud, but they are so rare that they have little impact on curbing sloppy work. David Vaux of the Walter and Eliza Hall Institute of Medical Research noted at the Conference that of the ~1 million publications indexed by PubMed last year, ~900,000 lack reproducible outcomes (according to Ioannidis), yet there were only about 300 retractions overall. Retractions are clearly not a way to identify or prevent the publication of unreliable research results.
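To put those figures in perspective, here is a rough back-of-the-envelope comparison, treating the numbers cited above as order-of-magnitude estimates rather than exact counts:

$$
\text{retraction rate} \approx \frac{300}{1{,}000{,}000} = 0.03\%
\qquad\text{vs.}\qquad
\text{estimated irreproducibility} \approx \frac{900{,}000}{1{,}000{,}000} = 90\%
$$

On those estimates, for every paper that is retracted, roughly 3,000 papers with questionable reproducibility go uncorrected.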

Implementing an Audit System

So how do we change the research culture? Michael Farthing, Vice Chancellor of the University of Sussex, proposed a bold solution in the form of an audit system. The idea shows promise insofar as it provides a framework for changing the incentive system to reward high-quality, reproducible research. Audit systems have already been used effectively in other domains, such as random drug testing in sports, speed cameras, and tax audits. The Open Science Framework, through its Reproducibility Project, is leading one such effort by driving replication studies in the social sciences.

Another practical way to implement such an audit system may come in the form of the Reproducibility Initiative, which I am co-directing on behalf of Science Exchange, with participation from PLOS ONE, Nature Publishing Group, Mendeley, and Figshare. The Initiative leverages professional experimental service providers on the Science Exchange network, such as university core facilities and commercial service providers, to conduct validation studies. These providers are highly skilled experts in their techniques and operate purely on a fee-for-service basis, with no incentive to produce positive or null results. They also operate outside the academic network of grants and publications, and are thus less at risk of retaliation from their peers.

By structuring the Reproducibility Initiative as an opt-in, reward-based program, we can address the systemic drivers of research misconduct through positive incentives. Rather than penalizing researchers for a lack of robust outcomes, an approach that Vaux, Ioannidis, and Ariely all noted has minimal impact, every study submitted and selected for validation will receive a publication in PLOS ONE presenting the reproduced data in an open-access format. The journals that published the original studies may in turn reward scientists with a ‘badge’ of reproducibility, distinguishing the papers whose authors took the effort to validate their findings.

Together, audit systems in the form of the Reproducibility Project and the Reproducibility Initiative may serve as forward-looking solutions to the problem of research integrity, promoting a reward-based rather than a punitive culture.

About the author

Elizabeth Iorns is Co-Founder & CEO of Science Exchange. Elizabeth conceived the idea for Science Exchange while an Assistant Professor at the University of Miami, and as CEO she drives the company’s vision, strategy and growth. She is passionate about creating a new way to foster scientific collaboration that will break down existing silos, democratize access to scientific expertise and accelerate the speed of scientific discovery. Elizabeth has a B.S. in Biomedical Science from the University of Auckland and a Ph.D. in Cancer Biology from the Institute of Cancer Research in London, and conducted postdoctoral research at the University of Miami’s Miller School of Medicine, where her research focused on identifying mechanisms of breast cancer development and progression.

About Science Exchange

We are transforming scientific collaboration by creating a marketplace where scientists can order experiments from the world's top labs.

