January 23

“Luxury Journals”, Incentives and the Fallacy of Being “Right”

What is good science?

“Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe1.” A discipline based on hypothesis-driven empiricism should be concerned with methods, accuracy and reproducibility. In today’s world, particularly in biological sciences, we are in danger of losing sight of these fundamental principles. At all levels of post-graduate education (graduate school, post-doctoral fellowships and faculty hiring) scientists are judged largely by their publication record in prestigious peer-reviewed journals. Technical skills, reproducibility of results, negative results, overall contributions to the field, collaborative efforts and resource and people management skills are all seemingly secondary to publication record. As graduate students we are trained to build a paper with a story, often with a hypothesis devised to fit after the fact. Experiments that fail are often abandoned for greener pastures (i.e. a short time to publication). We chase novelty, excitement and great discoveries, rejecting mundane or “safe” projects that might most benefit the field, but fail to promote our individual careers. Most of all, we chase success in the estimation of our peers, through publication in competitive peer-reviewed journals. We resort, at times, to using journal impact factor as an indicator of good science.

Are we producing good science?

The idea that a large portion of published scientific research is not reproducible is not a comforting one. Yet studies attempting to replicate results consistently demonstrate this is the case. As reported in Nature in 2012, Amgen scientists attempted to replicate the results of 53 landmark papers in hematology and oncology before launching new projects, and found that only 6 of the 53 (11%) were reproducible.2 Nature has devoted a number of articles to the topic of reproducibility in the past year3 and the issue has captured the attention of a lay audience, with The Economist publishing a cover story titled “How Science Goes Wrong” last October. Some fields, like psychology, have taken it upon themselves to replicate key studies, with better results than the Amgen effort (10 out of 13 studies replicated).4 We must also consider whether reproducibility alone is the hallmark of successful science; replication attempts may themselves be carried out incorrectly when would-be replicators lack precise knowledge of the original methods. Still, it seems self-evident that a discipline built upon “testable explanations and predictions about the universe” should have some measure of consistency in its testing, and as such, science as a field has some work to do.

What drives the reproducibility problem?

Why does the reproducibility problem exist? Are we being careless or deliberate in our inaccuracy? Statistics published in PNAS indicate that 67.4% of retractions are attributable to misconduct (43.3% fraud, 14.2% duplicate publication and 9.8% plagiarism).5 Several prominent scientists have recently been caught fabricating results, including former Harvard psychologist Marc Hauser6 and Diederik Stapel, a psychologist and dean at Tilburg University in the Netherlands.7 A New York Times article from last April chronicles Stapel’s story and his motives for concocting results. Stapel admits to being driven by ambition and by frustration in dealing with real data. But more than that: “He…realized that journal editors preferred simplicity.” The article highlights both Stapel’s desire for success and the system’s seeming inability to catch or impugn him. While these extreme cases stand as warnings to those who would consider forging results, it is easy to imagine how scientists might be inclined to make small, unnoticeable alterations to their data in the interest of personal success when the chances of being discovered are small and the benefits large.

Why do incentives matter?

The peer-review system rose to prominence during the 20th century as submissions to scientific journals multiplied and science grew increasingly specialized. The system often fails to catch mistakes and fraudulent activity while also hindering rapid publication and dissemination of important findings.8 Randy Schekman, one of three winners of the 2013 Nobel Prize in Physiology or Medicine and editor-in-chief of eLife (an open-access journal that seeks to publish good science quickly), is an advocate for change. Schekman is harshly critical of “luxury journals” such as Nature, Science and Cell, which he claims skew incentives for scientists, affecting the type and quality of research conducted. He writes: “[Science] is disfigured by inappropriate incentives…the biggest rewards follow the flashiest work, not the best.”9 Not only does the current system favor “sexy” results over good ones (i.e. precise, reproducible, useful ones), but it drives competition between colleagues, each trying to secure one of the coveted spots in a “luxury journal” with an acceptance rate of roughly 8%. Add the stress of being scooped after years of work, and it is not hard to envision how some might be tempted into foul play.

What can be done?

Already, open-access journals like PLoS ONE and eLife are competing with the “luxury journals”, and new sources of funding, like the BRAIN Initiative, are in a position to reward collaboration. We can also work to change the way children grow up thinking about science.

A Nature piece by Carolyn Beans10, a biology graduate student at UVA, offers an anecdote about how children are taught the scientific process. Elementary school students, asked to write down a hypothesis (about whether a piece of tinfoil would float or sink) and then record the actual result of the experiment, instead erase their incorrect hypotheses while the peers who guessed correctly rejoice. Beans tries to convince the students not to, to no avail. Had those children been asked to work together from the beginning, they might have reasoned their way to the right answer through real deliberation. Even if they had failed to agree on a hypothesis, they would have been a team, succeeding or failing together.

The values we instill in children at a young age stick. I saw faked results at my high school’s science fairs time and again. Not only do high school students, like the elementary students, want to be “right”; teachers and judges also reward projects whose data fit the hypothesis, without ever knowing or caring whether the data are real. If we instill competition and reward students only for producing simple, predictable results, we are working against the very essence of scientific inquiry.

Not every scientist can revolutionize his or her field, no matter what the reward structure. But if we can change the incentives so that we are encouraged and rewarded for working together and focusing on the things that make for good science, we might collectively advance the field more than any individual could, and have real confidence in our findings. And that is an incentive worth working toward.


  1. http://en.wikipedia.org/wiki/Science
  2. Begley & Ellis. “Drug development: Raise standards for preclinical cancer research.” Nature vol 483: 531-533. March 2012.
  3. “Announcement: Reducing our irreproducibility.” Nature. April 2013.
  4. Yong. “Psychologists strike a blow for reproducibility.” Nature News. 26 November 2013.
  5. Fang et al. “Misconduct accounts for the majority of retracted scientific publications.” PNAS. July 2012.
  6. Johnson. “Author on leave after Harvard inquiry”. Boston Globe. August 2010. http://www.boston.com/news/education/higher/articles/2010/08/10/author_on_leave_after_harvard_inquiry/
  7. Bhattacharjee. “The Mind of a Con Man.” New York Times. April 2013.
  8. Nielsen. “Three myths about scientific peer review.” http://michaelnielsen.org/blog/three-myths-about-scientific-peer-review/
  9. Schekman. “How journals like Nature, Cell and Science are Damaging Science”. The Guardian. December 2013. http://www.theguardian.com/commentisfree/2013/dec/09/how-journals-nature-science-cell-damage-science
  10. Beans. “A Faulty Hypothesis.” Nature vol 504: 321. December 2013.