In his new book, psychologist Stuart Ritchie paints a portrait of the modern system of research, and all the ways it gets undermined.
In 1942, sociologist Robert Merton described the ethos of science in terms of its four key values: The first, universalism, meant the rules for doing research are objective and apply to all scientists, regardless of their status. The second, communality, referred to the idea that findings should be shared and disseminated. The third, disinterestedness, described a system in which science is done for the sake of knowledge, not personal gain. And the final value, organized skepticism, meant that claims should be scrutinized and verified, not taken at face value. For scientists, wrote Merton, these were “moral as well as technical prescriptions.”
In his new book, Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth, Stuart Ritchie endorses the above as a model for how science is meant to work. “By following the four Mertonian Norms, we should end up with a scientific literature that we can trust,” he writes. He then proceeds to spend the rest of the book explaining all the ways in which modern science fails to do just this.
Ritchie is a psychologist at King’s College London and the author of a previous book, Intelligence: All That Matters, about IQ testing. In Science Fictions he presents a broad overview of the problems facing science in the 21st century. The book covers everything from the replication crisis to fraud, bias, negligence and hype. Much of his criticism is aimed at his own field of psychology, but he also covers these issues as they occur in other fields such as medicine and biology.
Underlying most of these problems is a common issue: the fact that science, as he readily concedes, is “a social construct.” Its ideals are lofty, but it’s an enterprise conducted by humans, with all their foibles. To begin with, the system of peer-reviewed funding and publication is based on trust. Peer review is meant to look for errors or misinterpretations, but it’s done under the assumption that submitted data are genuine, and that the descriptions of the methods used to obtain them are accurate.
Ritchie recounts how in the 1970s, William Summerlin, a dermatologist at the Memorial Sloan-Kettering Cancer Center, used a black felt-tipped pen to fake a procedure in which he’d purported to graft the skin from a black mouse onto a white one. (He was caught by a lab tech who spotted the ink and rubbed it off with alcohol.) Fraudulent studies like Summerlin’s are not one-off events. A few recent examples that Ritchie cites are a researcher who was caught faking cloned embryos, another found to be misrepresenting results from trachea implant surgeries, and a third who fabricated data in a study purporting to show that door-to-door canvassing could shift people’s opinions on gay marriage. With the rise of digital photography, scientists have manipulated images to make their data comply with their expectations; one survey of the literature found signs of image duplication in about 4 percent of some 20,000 papers examined.
But even when they’re not committing fraud, scientists can easily be influenced by biases. One of the revelations to come from psychology’s reckoning with its replication problem is that standard statistical methods for preventing bias are in fact subject to manipulation, whether intentional or not. The most famous example of this is p-hacking, where researchers conduct their analysis in a way that produces a favorable p-value, a much-abused and misunderstood statistic that reveals something about the likelihood of getting the result you saw if there wasn’t actually a real effect. (Ritchie’s footnote for p-hacking links to my WIRED story about how the phrase has gone mainstream.)
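One common form of p-hacking is “optional stopping”: collecting data, checking the p-value, and adding more subjects until the result crosses the significance threshold. The book doesn’t include code, but a minimal simulation (using a simple normal-approximation test, with made-up parameters chosen purely for illustration) shows how this practice inflates the false-positive rate well past the nominal 5 percent, even when there is no real effect:

```python
import math
import random
import statistics

def two_sided_p(sample):
    # One-sample test of "mean = 0" using a normal approximation
    # (good enough for illustration; a real analysis would use a t-test).
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    z = statistics.mean(sample) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def p_hacked_experiment(rng, start_n=10, step=5, max_n=100):
    # Optional stopping: peek at the data after every few subjects
    # and declare victory as soon as p < .05.
    data = [rng.gauss(0, 1) for _ in range(start_n)]  # no true effect
    while len(data) <= max_n:
        if two_sided_p(data) < 0.05:
            return True  # a "significant" finding from pure noise
        data.extend(rng.gauss(0, 1) for _ in range(step))
    return False

rng = random.Random(0)
trials = 1000
false_positive_rate = sum(p_hacked_experiment(rng) for _ in range(trials)) / trials
print(false_positive_rate)  # substantially higher than the nominal 0.05
```

Because every data set is noise drawn from the same distribution, an honest fixed-sample test would come up “significant” only about 5 percent of the time; repeated peeking multiplies the chances of stumbling into a publishable p-value.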
An overreliance on p-values helps explain the spread of studies showing “social priming,” where subtle or subconscious cues were said to have large effects on people’s behavior. For instance, one study claimed that when people read words associated with old people (like old or gray), it made them walk more slowly down a hallway afterwards. A functional bullshit meter would have flagged this finding, and many others like it, as suspicious; but when they’re wrapped in the language of science, with an authoritative p-value and the peer-review stamp of approval, they gain a measure of credibility.
Peer review is another process that Ritchie flags as flawed by human bias, as well as perverse incentives and fraud. (Rogue researchers have been caught in self-reviewing scams, as the book points out.) There’s also publication bias, wherein null results—i.e., experiments that end up finding no effect—are largely left out of journals. And then there’s media hype, often blamed on journalists even though Ritchie says it rarely starts with us. “The scenario where an innocent researcher is minding their own business when the media suddenly seizes on one of their findings and blows it out of proportion is not at all the norm,” he writes. Instead, studies have shown that overblown claims in press accounts often stem from those found in official releases from the researchers, their institutions, or the journals that published their results.
Ritchie also calls out scientists who write hype-filled books for the public. He singles out Berkeley neuroscientist Matthew Walker, asserting that Walker’s book, Why We Sleep, blatantly misinterprets the underlying science with claims that “the shorter you sleep, the shorter your life span,” and that sleeping less than six or seven hours per night demolishes your immune system and doubles your risk of cancer. “Both statements go against the evidence,” Ritchie says, pointing to independent researcher Alexey Guzey’s detailed takedown. “Walker could have written a far more cautious book that limited itself to just what the data shows, but perhaps such a book wouldn’t have sold so many copies or been hailed as an intervention that ‘should change science and medicine.’”
Hype-filled science books paper over the intricacies of real scientific practice, Ritchie writes. “By implying that complex phenomena have simple, singular causes and fixes, [they] contribute to an image of science as something it isn’t.” His own book offers a more sober account. Science Fictions presents a highly readable and competent description of the problems facing researchers in the 21st century, and it’s an excellent primer for anyone who wants to understand why and how science is failing to live up to its ideals.
At the same time, while Ritchie outlines some of the solutions that are being proposed, he offers very little about how these are being deployed and the challenges they’re facing. It’s a shame there’s no mention of projects within his own field, like the Psychological Science Accelerator, which facilitates collaboration between labs around the globe to increase the size and diversity of data sets. Ritchie’s field even has a whole organization, the Society for the Improvement of Psychological Science, that was formed to tackle issues like the ones he describes. There are rich stories to be told about the rise of a new cadre of researchers who are tackling these problems head-on, and the conflicts that arise from threats to the status quo—but those stories are beyond the scope of this book.
Ritchie’s ambition here is to convince the reader that science is not living up to its ideals, and in that he succeeds. Yet it’s not just the way scientists do science that needs revamping. The public’s view of science as a badge of unshakable truth could also use updating. This book illustrates the ways in which science is a fallible process for seeking truth.
That process can be difficult. Ritchie makes a point of acknowledging how hard it is to get things right, and the importance of correcting errors when they’re found. He puts his money where his mouth is, too, by offering a monetary reward to readers who alert him to objective errors in the book. It pains me to report there’s one in the book’s very first sentence—its first two words, even.
Here’s how Ritchie starts the preface: “January 31, 2011 was the day the world found out that undergraduate students have psychic powers.” He’s referencing a now-discredited paper by Daryl Bem that purported to show that ESP is real. Surely it would have been more accurate to say the world found this out on January 6, 2011, when The New York Times put out a front-page story about the finding; or maybe it was the day before, when that same story was posted to the newspaper’s website; or, at the very latest, it happened on January 27, 2011, when Bem discussed the work on a nationally televised episode of The Colbert Report.
So when did “the world” find out about Bem’s irreproducible result? It depends on how you define “the world” and “found out.” Did it happen the first time Bem talked to the media about his study? Or when it made the front page? Or was the true public unveiling when the paper was finally published in a journal? The exact day that Bem’s study came to the public’s attention isn’t crucial to the point that Ritchie is making here, yet the uncertainty itself may be illustrative. Even objective truths depend on human decisions and interpretations. Turns out that just as science is a social construct, so too is science criticism.