
The struggle to keep research real

Hank Hogan, Contributing Editor

Beneath a scientist’s lab coat beats a human heart. So it should be no surprise that researchers sometimes fudge – or outright fake – their data.

Often, this manipulation takes the form of a doctored image. Michael Kalichman, director of the research ethics program at the University of California, San Diego, recalls reading neuropathology papers as part of the peer review process prior to publication. Sometimes he’d find the same image appearing twice in the same submission. The second time it might be rotated or at a different magnification. In the paper, though, it would be presented as completely different from the first image.

“At the very least, somebody was sloppy in their record keeping,” Kalichman said. “At the very most, somebody was trying to mislead about what they had actually done.”

While such fraud may seem rare, the National Institutes of Health (NIH) in Bethesda, Md., as well as universities and other organizations, take the problem seriously. To combat it, they train researchers and put safeguards in place, and enforcement arms actively investigate allegations.

Fraud figures

The amount of scientific misconduct that goes on is hard to pin down, Kalichman said. There are well-known examples in which a researcher has been shown to have committed fraud, such as the fake cloning claims of South Korean scientist Hwang Woo-suk. Based on the number of such cases that have been publicly discovered and adjudicated, the rate of serious research misconduct could be as low as one in 100,000 scientists.

However, the true amount of misconduct could be much higher than this low figure suggests. A meta-analysis of several surveys, published in PLoS One in May 2009, found that 1.97 percent of scientists admitted to serious misconduct, and more than 14 percent reported having witnessed it in others.

The paper’s author, Daniele Fanelli, is a research fellow at the University of Edinburgh in the UK. He noted that the first figure is probably an underestimate, since not all researchers will report their own misconduct.

There are good reasons to believe the second figure is an overestimate, he said. “Most surveys did not control for the possibility that several respondents are thinking of the same colleague.”

In one survey, however, only one researcher per department was asked about misconduct in that department. In that case, the figure was 5.25 percent, Fanelli said.

It must be remembered, however, that what others suspect to be lab fraud or misconduct may not, in fact, be so. The Office of Research Integrity (ORI), part of the US Department of Health and Human Services, oversees and directs research integrity activities for the US government's public health service agencies.

In 2007, the ORI closed 28 cases, with 10 resulting in research misconduct findings, administrative actions or both. That ratio was in line with the historical average, indicating that most allegations either were unfounded or could not be proved.

Using technology

By some estimates, up to 20 percent of all images submitted for publication have been improperly manipulated. This figure is open to debate, in part because the definition of what kind of manipulation is allowable varies from journal to journal.

In general, it’s considered appropriate to make an adjustment if it’s done to all pixels and is disclosed. An example might be the use of false color to make the differences in a gray-scale image more easily visible.
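As a rough illustration of such a global, disclosed adjustment, the short Python sketch below maps every pixel of a gray-scale image through the same false-color lookup. The file names and the choice of colormap are illustrative assumptions, not taken from any particular journal's workflow.

import matplotlib.pyplot as plt

gray = plt.imread("blot_grayscale.png")        # hypothetical input file
if gray.ndim == 3:                             # collapse an RGB scan to luminance
    gray = gray[..., :3].mean(axis=2)

# The same rescaling and the same colormap are applied to every pixel.
norm = (gray - gray.min()) / (gray.max() - gray.min())
false_color = plt.cm.viridis(norm)             # gray levels -> RGBA colors

plt.imsave("blot_falsecolor.png", false_color)

Because the transformation is identical for every pixel, and is easy to describe in a figure legend, this kind of adjustment generally falls on the acceptable side of the line.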

There are limits to this principle, however. Adjusting the brightness and contrast of a gel blot image, for example, treats all pixels equally, yet it can cause a gray background and faint blots to vanish entirely. Reviewers and journal editors must guard against such changes, which can be hard to detect and which may arise from innocent intentions.
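The toy Python example below, with made-up gain and offset values, shows how a linear brightness/contrast change applied identically to every pixel can still push a gray background and a faint band to zero, leaving only the strong band visible.

import numpy as np

def adjust(image, gain=2.0, offset=-0.7):
    """Apply the same linear brightness/contrast change to every pixel."""
    return np.clip(gain * image + offset, 0.0, 1.0)

# Hypothetical lane profile: strong band (0.9), faint band (0.35), background (0.25).
lane = np.array([0.25, 0.25, 0.90, 0.25, 0.35, 0.25])
print(adjust(lane))
# Output: [0. 0. 1. 0. 0. 0.] -- the faint band and the gray background
# both clip to zero, even though every pixel received identical treatment.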


The original gel blot image on the right had some data removed when submitted for publication, as can be seen by comparing the two panels in the area highlighted by the circle. Such manipulation can be innocent, but it could also be a deliberate attempt to deceive. Courtesy of Hany Farid, Dartmouth College.


Attempts to deceive, on the other hand, often involve adding or subtracting pixels. That selective treatment makes it possible to catch the alterations automatically.

“The algorithms that you can develop are ones that target specific forms of manipulation,” explained Hany Farid, a professor of computer science at Dartmouth College in Hanover, N.H., and an expert on digital image forensics.

Farid demonstrated some years ago that intensity-based image segmentation techniques can be used to detect deletion, duplication and the removal of small blemishes. A tampered image processed through these segmentation algorithms yields an output with visible indicators, such as solid boxes where data has been duplicated or removed. Software can then flag these regions automatically.
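The sketch below illustrates the general idea of flagging duplicated regions. It is a simplified block-comparison example, not Farid's published algorithm; the block size and the exact-match criterion are assumptions made for brevity.

import numpy as np

def find_duplicate_blocks(image, block=8):
    """Return pairs of top-left corners whose block x block patches match exactly."""
    h, w = image.shape
    seen = {}          # maps a patch's raw bytes to the corner where it first appeared
    matches = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block]
            key = patch.tobytes()
            if key in seen and patch.std() > 0:   # ignore flat, featureless patches
                matches.append((seen[key], (y, x)))
            else:
                seen.setdefault(key, (y, x))
    return matches

# Toy example: copy one 8x8 region of synthetic "blot" data onto another location.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
img[40:48, 40:48] = img[8:16, 8:16]               # simulate a duplicated band
print(find_duplicate_blocks(img))                 # -> [((8, 8), (40, 40))]

Real forensic tools must also cope with rotation, rescaling and recompression, which is why detection remains targeted at specific forms of manipulation rather than all of them.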

Farid noted that the tools to automate the process do not yet exist, but he foresees a time when they will. He also cautioned that the algorithms will never be able to catch every possible fraudulent image manipulation.

Employing other tools

For that reason, policies must be put in place to guard against fraud. Farid, for example, advocates having researchers submit the original images along with those that will appear in a paper.

Another example of a policy solution can be found in the ethics training done at NIH. This has been regularly held for all of the agency’s own researchers for the past 10 years, said Joan P. Schwartz, the agency’s intramural research integrity officer and assistant director of the office of intramural research.

One part of this training is the use of hypothetical cases, which change from year to year. The theme for this past year was dual-use research – work that could be used to help as well as to harm. A few years ago, the training focused on image manipulation, which likewise can be put to good or bad use.

In all instances, the goal of these scenarios is to get everyone in a department talking, Schwartz said. “We purposely make the cases a little bit gray so that they generate discussion. They don’t necessarily have a right or wrong answer.”

In addition to the hypothetical cases, she noted that the agency has an online course that’s intended to get new employees up to speed with NIH research guidelines. It has been adopted by many universities and research organizations around the world.

Despite these efforts, Schwartz noted that the rate of misconduct appears to be holding steady. Thus, training alone is not the complete answer.

Another knob to turn

The solution may involve a change in the structure of science, said Raymond De Vries, a professor of bioethics at the University of Michigan Medical School in Ann Arbor. Together with colleagues Brian Martinson and Melissa Anderson, he has surveyed researchers to see how many self-report minor and major scientific misconduct. The second category includes such breaches as falsification, fabrication and plagiarism. The group also has collected scientists’ opinions about the fairness of the science system and about researchers’ experience with competitive pressure.

The team’s results show that minor and major misconduct are linked: those admitting to the former are far more likely to report committing the latter. Researchers who viewed science as more competitive than cooperative also admitted to more misconduct.

Another factor is the amount of perceived organizational injustice. The rewards of science are promotions, tenure, grant money, prestige and so on. These may not be distributed fairly, and scientists who report injustice in their workplace also report higher levels of misconduct. Thus, improving organizational justice – or at least how it’s perceived – may increase research integrity and decrease lab fraud.

Summing up the findings, De Vries said of scientists, “If they feel like they’re being treated fairly, they actually report less misconduct.”

Published: February 2010
Glossary
false color
In imaging technology, assigning color to black and white images to differentiate features or convey information. Also called colorizing.
