Many Scientists Admit to Misconduct

Many Scientists Admit to Misconduct (WaPo, A3) via YN

Few scientists fabricate results from scratch or flatly plagiarize the work of others, but a surprising number engage in troubling degrees of fact-bending or deceit, according to the first large-scale survey of scientific misbehavior. More than 5 percent of scientists answering a confidential questionnaire admitted to having tossed out data because the information contradicted their previous research or said they had circumvented some human research protections. Ten percent admitted they had inappropriately included their names or those of others as authors on published research reports. And more than 15 percent admitted they had changed a study’s design or results to satisfy a sponsor, or ignored observations because they had a “gut feeling” they were inaccurate.

None of those failings qualifies as outright scientific misconduct under the strict definition used by federal regulators. But they could take at least as large a toll on science as the rare, high-profile cases of clear-cut falsification, said Brian Martinson, an investigator with the HealthPartners Research Foundation in Minneapolis, who led the study appearing in today’s issue of the journal Nature. “The fraud cases are explosive and can be very damaging to public trust,” Martinson said. “But these other kinds of things can be more corrosive to science, especially since they’re so common.”

The new survey also hints that much scientific misconduct is the result of frustrations and injustices built into the modern system of scientific rewards. The findings could have profound implications for efforts to reduce misconduct — demanding more focus on fixing systemic problems and less on identifying and weeding out individual “bad apple” scientists. “Science has changed a lot in terms of its competitiveness, the level of funding and the commercial pressures on scientists,” Martinson said. “We’ve turned science into a big business but failed to note that some of the rules of science don’t fit well with that model.”

None of this is particularly surprising, I guess; scientists are just people, after all. But it's both disappointing and scary.

FILED UNDER: Education, Science & Technology
James Joyner
About James Joyner
James Joyner is Professor and Department Head of Security Studies at Marine Corps University's Command and Staff College and a nonresident senior fellow at the Scowcroft Center for Strategy and Security at the Atlantic Council. He's a former Army officer and Desert Storm vet. Views expressed here are his own. Follow James on Twitter @DrJJoyner.


  1. DaveD says:

    Having been in close proximity to career scientists for a part of my life, this revelation comes as no surprise. The competition for grant money is intense and the need for “clean” data is a part of that. I am not trying to make any excuses, but one of the things you learn about science is that someone should be able to use your notes to successfully replicate your work. However, on the flip side, there are others who discourage distributing funds for repeating experiments others have done – avoiding re-inventing the wheel, so to speak. This discouragement might be due to the stature of the original investigator or how entrenched the original results are in the flow of accepted dogma. To cut short my rambling, I guess I agree that this shows scientists are human too. It also lets some air out of the exceptionally large egos of some scientists who feel they operate in a rarefied world of intellectual superiority.

  2. Anon says:

    15% is low. few scientists use proper statistical methods (such as a q test) to discard bad data. many (more than 15%) use their gut.

    the results can be dreadful, as grant after grant is spent following up an experiment that didn’t work in the first place.

    it isn’t all the time, but it’s frequent (more than 15%!). some scientists have just forgotten the q test, which they maybe learn once in their hazy sophomore year of undergrad. but some would skip it anyway, due to the intense pressure to have experiments that “work.”

    this problem is going to be corrected one way or another. hopefully scientists realize it’s in their interests to fix it themselves.
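
    [The “q test” the commenter refers to is Dixon's Q test, a standard statistical check for whether the most extreme value in a small sample can legitimately be rejected as an outlier, rather than discarded on a “gut feeling.” A minimal sketch in Python, assuming the standard 95%-confidence critical-value table for sample sizes 3 through 10:]

    ```python
    # Dixon's Q test: Q = (gap to nearest neighbor) / (range of the data).
    # The suspect point may be rejected only if Q exceeds the tabulated
    # critical value for the sample size.

    # 95%-confidence critical values for n = 3..10 (standard Dixon table)
    Q_CRIT_95 = {3: 0.970, 4: 0.829, 5: 0.710, 6: 0.625,
                 7: 0.568, 8: 0.526, 9: 0.493, 10: 0.466}

    def q_test(values, crit=Q_CRIT_95):
        """Return (suspect_value, Q, reject) for the most extreme point."""
        n = len(values)
        if n not in crit:
            raise ValueError("this table covers n = 3..10 only")
        data = sorted(values)
        spread = data[-1] - data[0]
        if spread == 0:
            return data[0], 0.0, False
        # Gap between each extreme and its nearest neighbor
        q_low = (data[1] - data[0]) / spread
        q_high = (data[-1] - data[-2]) / spread
        suspect, q = (data[-1], q_high) if q_high >= q_low else (data[0], q_low)
        return suspect, q, q > crit[n]

    # Classic textbook-style example: is 0.167 an outlier in this run?
    suspect, q, reject = q_test([0.189, 0.167, 0.187, 0.183, 0.186])
    print(suspect, round(q, 3), reject)  # 0.167 may be rejected (Q ≈ 0.727 > 0.710)
    ```

    [The point of the test is exactly the commenter's complaint: it puts an objective threshold between the data and the experimenter's gut.]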

  3. John Thacker says:

    Anyone who has ever taken a lab class in high school or college has seen this at the academic level. Frequently you’re supposed to do experiments where everybody knows the proper result ahead of time. Problem is, sometimes you mess up and get results that don’t make sense, or sometimes the experiment is just too difficult. It goes on quite a bit.

    Of course, once you’re doing a real experiment, you’re not supposed to do that. But when the data “just don’t make sense,” people do discard them.

  4. Solomon2 says:

    Sad but true. What’s really bad is that it can be very difficult nowadays for the honest science student to complete a doctorate for the very reason that he or she IS honest. So the problem just keeps getting worse.

  5. Nathan says:

    This really comes as no surprise. For the past 20 years students have said that cheating to get ahead was OK. They were taught that there is no “Truth”, just subjective understanding and self-justification. The students who cheated through K-12 then proceeded to cheat through university and grad school, and told themselves it was okay because it was for their ultimate good, and no one really checks the data anyway. Now they get caught cheating on the job, and faking results to get their “truth” out to the public. So much for teaching the scientific method, which is after all the search for the Truth of things in the material world.