Science fraud is hard to catch, but easier to fight

Like much of the Internet, PubPeer is a place where you might want to be anonymous. There, under randomly assigned taxonomic names like Actinopolyspora biskrensis (a bacterium) and Hoya camphorifolia (a flowering plant), "detectives" carefully document errors in the scientific literature. While they write about all sorts of errors, from inept statistics to nonsensical methodology, their collective specialty is image manipulation: protein blots with suspiciously sharp edges, or the same arrangement of cells in two supposedly different experiments. Sometimes these irregularities mean nothing more than that a researcher tried to prettify a figure before submitting it to a journal. But they raise red flags nonetheless.

PubPeer's rare community of science detectives has produced an unusual celebrity: Elisabeth Bik, who uses her uncanny eye to spot duplicated images that would be invisible to almost any other observer. Such duplication can allow scientists to conjure results out of nothing, stitching parts of many images together Frankenstein-style, or claiming that one image represents two separate experiments that produced similar results. But even Bik's eye has limits: it is possible to fake experiments without ever using the same image twice. "If there is a slight overlap between two photos, I can nail you," she says. "But if you move the sample a little further, I can't find the overlap." When the world's most prominent expert can't always spot a scam, fighting it, or even studying it, can seem impossible.
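Bik's point about overlap can be made concrete: if two published images actually show the same sample with the camera shifted slightly, sliding one image over the other produces a sharp correlation peak at the true offset; if the sample was moved past the field of view, there is no overlap left to find. Below is a minimal, hypothetical sketch of that idea on synthetic data (the images, function name, and parameters are all invented for illustration; real forensic tools are far more sophisticated):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "micrographs": image_b shares a 32-pixel-wide strip with image_a,
# as if the camera was shifted to the right between the two shots.
scene = rng.normal(size=(64, 96))
image_a = scene[:, :64]
image_b = scene[:, 32:]


def best_horizontal_overlap(a, b, min_width=8):
    """Return (shift, correlation) of the strongest left-right overlap.

    For each candidate shift, compare the right strip of `a` against the
    left strip of `b` and keep the shift with the highest correlation.
    """
    best_shift, best_r = 0, -1.0
    for shift in range(min_width, a.shape[1]):
        strip_a = a[:, shift:]                # right part of a
        strip_b = b[:, :a.shape[1] - shift]   # left part of b
        r = np.corrcoef(strip_a.ravel(), strip_b.ravel())[0, 1]
        if r > best_r:
            best_shift, best_r = shift, r
    return best_shift, best_r


shift, r = best_horizontal_overlap(image_a, image_b)
print(shift, round(r, 3))
```

A genuine overlap stands out because its correlation is near 1.0 while unrelated shifts hover near zero, which is also why the method fails completely once the two images share no pixels, exactly the limitation Bik describes.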

However, good scientific practice can effectively reduce the impact of fraud, that is, outright fabrication, on science, whether or not it is ever discovered. Fraud "cannot be excluded from science, just as we cannot exclude murder from our society," says Marcel van Assen, principal investigator at the Meta-Research Center at the Tilburg School of Social and Behavioral Sciences. But as researchers and advocates continue to push for science to become more open and unbiased, he says, fraud "will be less common in the future."

Along with sleuths like Bik, "meta-scientists" like van Assen are the world's experts on fraud. These researchers systematically monitor the scientific literature to ensure it is as accurate and reliable as possible. In its current incarnation, meta-science has existed since 2005, when John Ioannidis, a once-celebrated professor at Stanford University who more recently fell into disrepute for his views on the Covid-19 pandemic, including fierce opposition to lockdowns, published an article with the provocative title "Why Most Published Research Findings Are False". Small sample sizes and bias, Ioannidis argued, mean that incorrect conclusions often end up in the literature, and these errors are too rarely caught because scientists would rather advance their own research programs than try to replicate the work of colleagues. Since that article, meta-scientists have honed their methods of studying bias, a term that covers everything from so-called "questionable research practices" — failing to publish negative results, for example, or running statistical tests over and over until something interesting turns up — to outright fabrication or falsification of data.

They take the pulse of this bias by looking not at individual studies but at broad patterns in the literature. When, for example, small studies on a particular topic show more impressive results than larger studies, this can be an indicator of bias. Smaller studies are more variable, so some of them will produce dramatic results by chance, and in a world where dramatic results are favored, these studies are published more often. Other approaches include looking at p-values, the numbers that indicate whether a given result is statistically significant or not. If, in the literature on a given research question, too many p-values appear to be significant and too few do not, then scientists may be using dubious approaches to push their results over the threshold of significance.
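The small-study pattern described above can be demonstrated with a quick simulation (a hedged sketch, not any meta-scientist's actual method; all numbers here are arbitrary). With no true effect at all, small studies scatter more widely around zero, so a filter that only "publishes" dramatic results lets far more small studies through:

```python
import numpy as np

rng = np.random.default_rng(0)


def simulate_effects(n_per_group, n_studies):
    """Observed effect sizes for studies where the true effect is zero."""
    group_a = rng.normal(0.0, 1.0, (n_studies, n_per_group))
    group_b = rng.normal(0.0, 1.0, (n_studies, n_per_group))
    return group_a.mean(axis=1) - group_b.mean(axis=1)


small = simulate_effects(n_per_group=10, n_studies=10_000)
large = simulate_effects(n_per_group=100, n_studies=10_000)

# Smaller studies are noisier, so their results scatter more widely...
print(small.std(), large.std())

# ...and if only "dramatic" effects (|d| > 0.5) get published, the published
# small studies vastly outnumber the published large ones, even though
# every apparent effect here is pure noise.
print((np.abs(small) > 0.5).mean(), (np.abs(large) > 0.5).mean())
```

This is the intuition behind funnel-plot-style diagnostics: in an unbiased literature, small and large studies should scatter symmetrically around the same average effect, and an excess of dramatic small studies is a warning sign.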

But these patterns do not show how much of this bias is due to fraud rather than sloppy data analysis or innocent mistakes. In some ways, fraud is inherently unmeasurable, says Jennifer Byrne, a professor of molecular oncology at the University of Sydney who has worked to identify potentially fraudulent papers in the oncology literature. "Fraud is about intent. It's a psychological state of mind," she says. "How do you infer a state of mind and intentions from a published article?"
