Galileo learned too late, perhaps, that publishing scientific findings can be a dangerous business. The endeavor is fraught with political landmines and epistemological pitfalls, and when he threw his weight behind the Copernican, heliocentric view of the universe he found himself on the wrong side of the dominant narrative. In 1615, a pair of splenetic clerics denounced him to the Roman Inquisition. Almost 20 years later, after publication of his Dialogue Concerning the Two Chief World Systems, he was tried by the Inquisition and found “vehemently suspect of heresy.” Galileo, known today as “the father of modern science,” spent the rest of his life under house arrest.

Nearly four centuries later, publishing is still a difficult business, albeit one where the dangers lie in not publishing. Investigators no longer need worry about life and limb, of course, but in some ways the stakes are just as high. The linchpins of academic survival – prestige and funding – are inextricably linked to the publications on a researcher’s curriculum vitae. And not just for the researcher himself, but for the institution in which he works. So much so that an axiom has emerged with intimations of mortal peril. We’ve all heard it: “Publish or perish.”

The big question is: Does this pressure affect the science itself? And if so, how?

In a recent PLoS ONE paper – “Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data” – Daniele Fanelli cites a slew of studies describing the potential for bias. Investigators in focus groups have suggested that scientific integrity is threatened by the need to publish. Researchers who have engaged in scientific misconduct have pointed to the enormous pressure to produce as a partial reason for their transgressions. And surveys have shown that those in competitive research environments are both less likely to adhere to scientific ideals and more likely to witness misconduct.
In his own research, Fanelli found that, across disciplines, papers are more likely to report positive results (which are considered more publishable) in US states that are academically more competitive (as measured by the number of papers published per capita). This suggests, he writes, that the pressure to come up with publishable results leads investigators to do just that – by filing away negative findings, by reformulating hypotheses after experiments are completed, or through some other means of tweaking the data or analyses.

Such misconduct can impede the progress of research and contribute to an ever more cynical view of the scientific community among huge swaths of the public. The most notorious example, perhaps, is that of South Korean scientist Hwang Woo Suk, who claimed to have obtained stem cells from cloned human embryos but was later found to have fabricated his findings. More recently, the internet leak of thousands of emails from the University of East Anglia’s Climatic Research Unit called into question the objectivity of investigators exploring the impact of climate change. Independent reviews found no evidence of misconduct or fraud, but even the appearance of impropriety negatively affected the public’s perception of the entire field of study.

Significant steps have been taken to curb scientific misconduct. The National Institutes of Health and other research institutions have implemented ethics training programs for investigators, while enforcement arms have been put in place to root out malpractice (see “The Struggle to Keep Research Real”). These are clearly important measures.

Still, I would like to think that, even unchecked, the intense pressure to publish and the attendant potential for scientific misconduct would never totally derail progress in the research community. I say this for two reasons. First, by and large, scientists want to do the right thing.
Rarely are they lured into the academic arena by the promise of a huge paycheck, or by hordes of groupies waiting backstage. As hokey as it may sound, they really are in it for the love of science. I don’t want to diminish the seriousness of the problem – this sort of pressure can make people do crazy things – but in the end, most investigators are simply too concerned with getting it right.

Second, the peer review system (which admittedly has its own knotty issues; see my earlier note about political landmines) is designed not only to determine a paper’s worthiness for publication, but also to establish whether the science is sound. And even if a suspect bit of research gets past the reviewers, you can be sure others will try to reproduce it. If it doesn’t hold up to scrutiny, the findings ultimately won’t stick.