Published by HealthWatch www.healthwatch-uk.org

THE UNBEARABLE ASYMMETRY OF BULLSHIT

In this piece, Brian Earp discusses the problem of plausible-sounding bullshit in science, and describes one particularly insidious method for producing it. Because, he says, it takes so much more energy to refute bullshit than it does to create it, and because the result can be so damaging to the integrity of empirical research as well as to the policies that are based upon such research, Earp suggests that addressing this issue should be a high priority for publication ethics.

SCIENCE and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings1-3—concerns I share and which I have written about at length4-10—I still believe that the scientific method is the best available tool for getting at empirical truth.11 Or to put it a slightly different way (if I may paraphrase Winston Churchill's famous remark about democracy): it is perhaps the worst tool, except for all the rest.

In other words, science is flawed. And scientists are people too. While it is true that most scientists—at least the ones I know and work with—are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse 'publish or perish' incentive structure that tends to reward flashy findings and high-volume 'productivity' over painstaking, reliable research.12 On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don't always check their references, or even read what they cite.13 They have cognitive and emotional limitations, not to mention biases, like everyone else.14-16

At the same time, as the psychologist Gary Marcus has recently put it,17 "it is facile to dismiss science itself. The most careful scientists, and the best science journalists, realize that all science is provisional. There will always be things that we haven't figured out yet, and even some that we get wrong." But science is not just about conclusions, he argues, which are occasionally (or even frequently)1 incorrect. Instead, "It's about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before." You can both "love science," he concludes, "and question it."

I agree with Marcus. In fact, I agree with him so much that I would like to go a step further: if you love science, you had better question it, and question it well, so it can live up to its potential. And it is with that in mind that I bring up the subject of bullshit.

There is a veritable truckload of bullshit in science.* When I say bullshit, I mean arguments, data, publications, or even the official policies of scientific organizations that give every impression of being perfectly reasonable—of being well-supported by the highest quality of evidence, and so forth—but which don't hold up when you scrutinize the details. Bullshit has the veneer of truth-like plausibility. It looks good. It sounds right. But when you get right down to it, it stinks.

* There is a lot of non-bullshit in science, too!

THERE ARE many ways to produce scientific bullshit.18 One way is to assert that something has been 'proven', 'shown', or 'found', and then cite, in support of this assertion, a study that has actually been heavily critiqued (fairly and in good faith, let us say, although that is not always the case, as we soon shall see) without acknowledging any of the published criticisms of the study or otherwise grappling with its inherent limitations.19

Another way is to refer to evidence as being of 'high quality' simply because it comes from an in-principle relatively strong study design, like a randomized control trial, without checking the specific materials that were used in the study to confirm that they were fit for purpose.20 There is also the problem of taking data that were generated in one environment and applying them to a completely different environment (without showing, or in some cases even attempting to show, that the two environments are analogous in the right way).21 There are other examples I have explored in other contexts,18 and many of them are fairly well-known.

But there is one example I have only recently come across, and of which I have not yet seen any serious discussion. I am referring to a certain sustained, long-term publication strategy, apparently deliberately carried out (although motivations can be hard to pin down), that results in a stupefying, and in my view dangerous, paper-pile of scientific bullshit. It can be hard to detect, at first, with an untrained eye—you have to know your specific area of research extremely well to begin to see it—but once you do catch on, it becomes impossible to un-see.

I don't know what to call this insidious tactic (although I will describe it in just a moment). But I can identify its end result, which I suspect researchers of every stripe will be able to recognize from their own sub-disciplines: it is the hyper-partisan and polarized,22-23 but by all outward appearances, dispassionate and objective, 'systematic review' of a controversial subject.

To explain how this tactic works, I am going to make up a hypothetical researcher who engages in it, and walk you through his 'process', step by step. Let's call this hypothetical researcher Lord Voldemort. While everything I am about to say is based on actual events, and on the real-life behavior of actual researchers, I will not be citing any specific cases (to avoid the drama). Moreover, we should be very careful not to confuse Lord Voldemort for any particular individual. He is an amalgam of researchers who do this; he is fictional.

In this story, Lord Voldemort is a prolific proponent of a certain controversial medical procedure, call it X, which many have argued is both risky and unethical. It is unclear whether Lord Voldemort has a financial stake in X, or some other potential conflict of interest. But in any event he is free to press his own opinion. The problem is that Lord Voldemort doesn't play fair. In fact, he is so intent on defending this hypothetical intervention that he will stop at nothing to flood the literature with arguments and data that appear to weigh decisively in its favor.

As the first step in his long-term strategy, he scans various scholarly databases. If he sees any report of an empirical study that does not put X in an unmitigatedly positive light, he dashes off a letter-to-the-editor attacking the report on whatever imaginable grounds. Sometimes he makes a fair point—after all, most studies do have limitations (see above)—but often what he raises is a quibble, couched in the language of an exposé.

These letters are not typically peer-reviewed (which is not to say that peer review is an especially effective quality control mechanism);24-25 instead, in most cases, they get a cursory once-over by an editor who is not a specialist in the area. Since journals tend to print the letters they receive unless they are clearly incoherent or in some way obviously out of line (and since Lord Voldemort has mastered the art of using 'objective' sounding scientific rhetoric26 to mask objectively weak arguments and data), they end up becoming a part of the published record with every appearance of being legitimate critiques.

The subterfuge does not end there. The next step is for our anti-hero to write a 'systematic review' at the end of the year (or, really, whenever he gets around to it). In it, He Who Shall Not Be Named predictably rejects all of the studies that do not support his position as being 'fatally flawed,' or as having been 'refuted by experts'—namely, by himself and his close collaborators, typically citing their own contestable critiques—while at the same time he fails to find any flaws whatsoever in studies that make his pet procedure seem on balance beneficial.

The result of this artful exercise is a heavily skewed benefit-to-risk ratio in favor of X, which can now be cited by unsuspecting third parties. Unless you know what Lord Voldemort is up to, that is, you won't notice that the math has been rigged.

SO WHY doesn't somebody put a stop to all this? As a matter of fact, many have tried.
More than once, the Lord Voldemorts of the world have been called out for their underhanded tactics, typically in the 'author reply' pieces rebutting their initial attacks. But rarely are these ripostes—constrained as they are by conventionally minuscule word limits, and buried as they are in some corner of the Internet—noticed, much less cited in the wider literature. Certainly, they are far less visible than the 'systematic reviews' churned out by Lord Voldemort and his ilk, which constitute a sort of 'Gish Gallop' that can be hard to defeat.

The term 'Gish Gallop' is a useful one to know. It was coined by the science educator Eugenie Scott in the 1990s to describe the debating strategy of one Duane Gish.27 Gish was an American biochemist turned Young Earth creationist, who often invited mainstream evolutionary scientists to spar with him in public venues. In its original context, it meant to "spew forth torrents of error that the evolutionist hasn't a prayer of refuting in the format of a debate." It also referred to Gish's apparent tendency to simply ignore objections raised by his opponents.

A similar phenomenon can play out in debates in medicine. In the case of Lord Voldemort, the trick is to unleash so many fallacies, misrepresentations of evidence, and other misleading or erroneous statements—at such a pace, and with such little regard for the norms of careful scholarship and/or charitable academic discourse—that your opponents, who do, perhaps, feel bound by such norms, and who have better things to do with their time than to write rebuttals to each of your papers, face a dilemma. Either they can ignore you, or they can put their own research priorities on hold to try to combat the worst of your offenses.

It's a lose-lose situation. Ignore you, and you win by default. Engage you, and you win like the pig in the proverb who enjoys hanging out in the mud.
As the programmer Alberto Brandolini is reputed to have said:28 "The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it." This is the unbearable asymmetry of bullshit I mentioned in my title, and it poses a serious problem for research integrity. Developing a strategy for overcoming it, I suggest, should be a top priority for publication ethics.

Brian D Earp
Visiting Scholar, The Hastings Center Bioethics Research Institute (Garrison, NY), and Research Associate, University of Oxford

References
1. Ioannidis JP. Why most published research findings are false. PLoS Medicine 2005;2(8):e124
2. Button KS et al. Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience 2013;14(5):365-376
3. Open Science Collaboration. Estimating the reproducibility of psychological science. Science 2015;349(6251):aac4716
4. Earp BD, Trafimow D. Replication, falsification, and the crisis of confidence in social psychology. Frontiers in Psychology 2015;6(621):1-11
5. Earp BD et al. Out, damned spot: can the "Macbeth Effect" be replicated? Basic and Applied Social Psychology 2014;36(1):91-98
6. Earp BD. Psychology is not in crisis? Depends on what you mean by "crisis." Huffington Post, 2 Sept 2015 http://www.huffingtonpost.com/brian-earp/psychology-is-not-in-crisis_b_8077522.html
7. Earp BD, Everett JAC. How to fix psychology's replication crisis. Chronicle of Higher Education, 25 Oct 2015 http://chronicle.com/article/How-to-Fix-Psychology-s/233857
8. Earp BD. Open review of the draft paper, "Replication initiatives will not salvage the trustworthiness of psychology" by James C Coyne. BMC Psychology 2016 [in press] https://www.academia.edu/21711738/Open_review_of_the_draft_paper_entitled_Replication_initiatives_will_not_salvage_the_trustworthiness_of_psychology_by_James_C._Coyne
9. Everett JAC, Earp BD. A tragedy of the (academic) commons: interpreting the replication crisis in psychology as a social dilemma for early-career researchers. Frontiers in Psychology 2015;6(1152):1-4
10. Trafimow D, Earp BD. Badly specified theories are not responsible for the replication crisis in psychology. Theory & Psychology 2016 [in press] https://www.academia.edu/18975122/Badly_specified_theories_are_not_responsible_for_the_replication_crisis_in_social_psychology
11. Earp BD. Can science tell us what's objectively true? The New Collection 2011;6(1):1-9
12. Nosek BA et al. Scientific utopia II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science 2012;7(6):615-631
13. Rekdal OB. Academic urban legends. Social Studies of Science 2014;44(4):638-654
14. Peterson D. The baby factory: difficult research objects, disciplinary standards, and the production of statistical significance. Socius 2016 [in press] http://srd.sagepub.com/content/2/2378023115625071.full
15. Duarte JL et al. Political diversity will improve social psychological science. Behavioral and Brain Sciences 2015 [in press] http://emilkirkegaard.dk/en/wp-content/uploads/Political-Diversity-Will-Improve-Social-Psychological-Science-1.pdf
16. Ball P. The trouble with scientists. Nautilus, 14 May 2015 http://nautil.us/issue/24/error/the-trouble-with-scientists
17. Marcus G. Science and its skeptics. The New Yorker, 6 Nov 2013 http://www.newyorker.com/tech/elements/science-and-its-skeptics
18. Earp BD. Mental shortcuts [unabridged version]. The Hastings Center Report 2016 [in press] https://www.researchgate.net/publication/292148550_Mental_shortcuts_unabridged
19. Ioannidis JP. Limitations are not properly acknowledged in the scientific literature. Journal of Clinical Epidemiology 2007;60(4):324-329
20. Earp BD. Sex and circumcision. American Journal of Bioethics 2015;15(2):43-45
21. Bundick S. Promoting infant male circumcision to reduce transmission of HIV: a flawed policy for the US. Health and Human Rights Journal Blog, 31 Aug 2009 http://www.hhrjournal.org/2009/08/promoting-infant-male-circumcision-to-reduce-transmission-of-hiv-a-flawed-policy-for-the-us/
22. Ploug T, Holm S. Conflict of interest disclosure and the polarisation of scientific communities. Journal of Medical Ethics 2015;41(4):356-358
23. Earp BD. Addressing polarisation in science. Journal of Medical Ethics 2015;41(9):782-784
24. Smith R. Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine 2006;99(4):178-182
25. Smith R. Classical peer review: an empty gun. Breast Cancer Research 2010;12(S4):1-4
26. Roland MC. Publish and perish: hedging and fraud in scientific discourse. EMBO Reports 2007;8(5):424-428
27. Scott E. Debates and the globetrotters. The Talk Origins Archive, 1994 http://www.talkorigins.org/faqs/debating/globetrotters.html
28. Brandolini A. The bullshit asymmetry principle. Lecture delivered at XP2014 in Rome and at ALE2014 in Krakow, 2014 http://www.slideshare.net/ziobrando/bulshit-asymmetry-principle-lightning-talk

A modified version of this essay was published in the online magazine Quillette on February 15, 2016. Please note that the article as it appears here is the 'original' (i.e., the final and definitive version), and should therefore be referred to in case of any discrepancies.

The author thanks Morgan Firestein and Diane O'Leary for feedback on an earlier draft of this manuscript.

Earp BD. The unbearable asymmetry of bullshit. HealthWatch Newsletter 2016;101:4-5