Corruption, Falsity, and Fraud: The Epistemological Crisis of Professional Academic Research

(SocialNeuro, 2023)





1. Introduction

Professional academic psychological research well illustrates a problem that should be called “corruption,” rather than given the politer name “questionable research practices.” Such practices include selective reporting, selective publication, manipulation of outlier data, and exploitation of any flexibilities in data collection and analysis to produce a desired outcome, among other “sharp” practices (Fiedler & Schwarz, 2015). Since a number of these practices may be employed at once, this is more than mere dishonesty: it is fraud, the intentional misuse of research to obtain a desired result. In any case, such practices are almost universal, typically involving some degree of data-analysis “fiddling” (John et al., 2012; Ritchie, 2020). In this essay, we consider some of the main varieties of professional academic research corruption.

2. Falsity

In 2005 John Ioannidis published a paper with the confrontational title “Why Most Published Research Findings are False” (Ioannidis, 2005), which has since generated considerable controversy. Ioannidis argued that research claims are more likely to be false than true, and that the vast majority of published research papers are in fact false. He did not precisely specify the domain of “research papers” most likely to be false, but he did indicate that it spans a vast number of fields, from epidemiology, clinical medicine, and biomedical research to molecular research. We will see that it covers most scientific fields based upon empirical-statistical research.

Ioannidis used a Bayesian statistical model to demonstrate his falsity thesis. The model captured the idea that research findings are more likely to be false

when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical models; when there is greater financial and other interest and prejudices; and when more teams are involved in a scientific field in chase of statistical significance. (Ioannidis, 2005)

Therefore the presence of bias, the low statistical power of most studies, and the small proportion of true hypotheses indicated by a Bayesian analysis jointly entail that the majority of empirical studies are false. Others have argued, at least for psychological research, that flexibility in data collection, analysis, and final reporting increases false-positive rates, so that it is easy to present statistically significant evidence for false hypotheses (Simmons, et al., 2011).
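
To see the shape of the argument concretely, consider the positive predictive value (PPV) of a “significant” finding, which Ioannidis works out algebraically: PPV = ([1 - β]R + uβR) / (R + α - βR + u - uα + uβR), where R is the pre-study odds that a tested relationship is true, 1 - β is the statistical power, α is the significance level, and u is the bias (Ioannidis, 2005). The following minimal sketch in Python is our own illustration, with the example numbers chosen merely to typify a small, biased, underpowered field:

```python
def ppv(R, power, alpha=0.05, u=0.0):
    """Positive predictive value of a "significant" finding, following
    Ioannidis (2005). R: pre-study odds that a tested relationship is
    true; power = 1 - beta; u: bias, the fraction of analyses that
    would report a null relationship as significant anyway."""
    sig_if_true = power + u * (1 - power)    # P(reported significant | true)
    sig_if_false = alpha + u * (1 - alpha)   # P(reported significant | false)
    return (R * sig_if_true) / (R * sig_if_true + sig_if_false)

# An underpowered, modestly biased field: one true relationship per ten
# tested (R = 0.1), 20% power, 10% bias.
print(ppv(R=0.1, power=0.2, u=0.1))  # ~0.16
```

On these inputs, only about 16 percent of “significant” findings are true; the falsity thesis is just this observation generalized across the conditions listed in the quotation above.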

Ioannidis’s thesis has been, predictably enough, subjected to statistical critique (Goodman & Greenland, 2007), to which Ioannidis has responded (Ioannidis, 2007). Others have supported Ioannidis’s falsity thesis and extended it (Button et al., 2013; Doleman et al., 2019). As we see it, the major methodological issue not addressed by Ioannidis is: precisely which fields of research, or disciplines, contain this largely false research? Presumably, given his Bayesian statistical argument, it is empirical fields, rather than logical-conceptual fields such as pure logic or pure mathematics. Does it include physics, cosmology, or chemistry? We propose that the fields are at least those subject to the so-called “replication crisis”; and in fact, the Ioannidis falsity thesis offers a parsimonious explanation of why so many studies fail to be replicated: the studies are just false, period. We will also argue that the issue of professional academic fraud, when added to this, further strengthens the Ioannidis falsity thesis. In short, the professional academy has on its hands an epistemological crisis, by which we mean a foundational challenge to a large number of its purported knowledge claims.

3. Falsity Redux: The Replication Crisis

There is a large body of work on the replication crisis: published studies in the social sciences, especially psychology, but also in various areas of the life/biological sciences and medical research, such as cancer biology, have failed to be replicated. A 2016 poll in the journal Nature of 1,576 scientists across a number of fields reported that 70 percent of them (87 percent of chemists, 77 percent of biologists, 69 percent of physicists, 67 percent of biomedical researchers, 64 percent of environmental and earth scientists) had tried and failed to reproduce another scientist’s experiments, and that 50 percent had failed to reproduce their own (Baker, 2016). Another study found that of 74 papers on breast cancer cell biology, fewer than one third (22 papers) were reproducible (Roper et al., 2022).

Psychology faces the replication crisis most acutely, owing to the low statistical power of most of its studies (Stanley et al., 2018). The Reproducibility Project in psychology found that when replications were attempted of 100 empirical studies from three top psychology journals, fewer than half were replicable, and only 36 percent of the replications had significant findings at p < 0.05, compared to 97 percent of the original studies (Open Science Collaboration, 2015). More recent research on replication in psychology has found a similar rate of replication failure (Camerer et al., 2018; Witkowski, 2019).

No doubt part of the replication problem comes from a misapplication of statistical methods. There has been a long controversy, over many decades, about the foundations of statistical methodology in the social sciences, especially regarding significance testing, with critics arguing that reported statistical significance claims in scientific publications are routinely mistaken. There are many articles dealing with this, and calls have been made for “the entire concept of statistical significance to be abandoned” (Amrhein et al., 2019: p. 306). A special issue of The American Statistician, volume 73, 2019, was devoted to addressing this. Be that as it may, there are still replication problems with papers that do not use statistical significance tests, or other statistical methods open to this critique.
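
As one illustration of how easily “significance” can be manufactured, consider optional stopping, one of the flexible practices documented by Simmons et al. (2011). The simulation below is our own sketch, with arbitrary choices of sample size and number of interim looks: both groups are drawn from the very same distribution, so every “significant” result is a false positive, yet peeking at the data after every ten subjects inflates the nominal 5 percent error rate several-fold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def peeking_study(n_max=100, step=10, alpha=0.05):
    """One study under a true null: both groups ~ N(0, 1). The
    researcher runs a t-test after every `step` subjects per group
    and stops (and publishes) as soon as p < alpha."""
    a, b = rng.normal(size=n_max), rng.normal(size=n_max)
    return any(
        stats.ttest_ind(a[:n], b[:n]).pvalue < alpha
        for n in range(step, n_max + 1, step)
    )

trials = 10_000
rate = sum(peeking_study() for _ in range(trials)) / trials
print(f"false-positive rate with optional stopping: {rate:.1%}")
# Typically lands around 15-20%, against the nominal 5%.
```

There are, however, other causes of replication failure besides statistical malpractice, which we now discuss.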

4. Corruption by Fraud

That brings us to the question of academic fraud. There has been media coverage this year of some major academics at top universities who stand accused of fraud, or of publishing “manipulated research” (Baker, 2023; Lee, 2023). Retraction Watch catalogued almost 5,500 retractions in 2022, and estimated that retractions amount to about one tenth of one percent of all published papers per year, roughly one in every 1,000 (Oransky & Marcus, 2023). Moreover, Retraction Watch believes that at least 100,000 retractions should occur each year; the usual reason they do not is that whistle-blowers fear getting sued, as is occurring at present in one US case. One estimate is that one in every 50 published papers exhibits evidence of data manipulation (Mannix, 2023).

John Carlisle, an anaesthetist in England’s National Health Service, has an interest in detecting fake or flawed clinical trial studies in medical journals. He examined 500 randomised controlled trials (RCTs), and for over 150 of them had access to the individual participant data (IPD) (Van Noorden, 2023). He concluded that 44 percent of these trials had “at least some flawed data: impossible statistics, incorrect calculations or duplicated numbers or figures” (Van Noorden, 2023: p. 455). Carlisle found that 26 percent of the articles had problems so severe as to render the study unreliable; the authors had either been incompetent or had faked the data (Van Noorden, 2023: p. 455). He concluded: “I think journals should assume that all submitted papers are potentially flawed” (Van Noorden, 2023: p. 455). Describing this work, Van Noorden wrote that researchers scrutinizing RCTs in medical fields ranging from pain research to COVID-19 “have found dozens or hundreds of trials with seemingly statistically impossible results,” with some estimates being that one third of all RCTs are fabricated (Van Noorden, 2023: p. 455). This would therefore destroy systematic reviews in medicine: “garbage in; garbage out” (Adam, 2019). RCTs are the gold standard of medicine, with lives often dependent upon the accuracy of the experimental results, as in trials of new drugs and vaccines.
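
What does an “impossible statistic” look like? One simple, well-known screening tool is the GRIM test (Brown & Heathers, 2017): for integer-valued data, such as rating-scale responses or counts, the true mean must equal some integer total divided by the sample size, so many reported means are arithmetically impossible. This is not Carlisle’s own method, which concerns, among other things, the plausibility of baseline variables across trial arms; it is merely a convenient illustration, and the sketch below, in Python, is ours:

```python
def grim_possible(mean, n, decimals=2):
    """GRIM check: with n integer-valued observations, the true mean is
    total/n for some integer total. Return True iff some integer total
    rounds to the reported mean at the reported precision."""
    target = round(mean, decimals)
    approx = int(round(mean * n))
    return any(
        round(total / n, decimals) == target
        for total in (approx - 1, approx, approx + 1)
    )

print(grim_possible(5.19, 28))   # False: no 28 integers average to 5.19
print(grim_possible(5.19, 100))  # True: 519 / 100 = 5.19
```

Checks of this kind are deliberately crude, but they require nothing beyond the published summary statistics, which is part of why screening efforts like Carlisle’s can be run at scale.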

Moreover, there is the major unsolved problem of “authorship-for-sale” and paper mills: companies that exist to manufacture articles which desperate authors can buy, in order to beef up their publication lists under the frantic “publish or perish” ideology that the contemporary professional academic university system has adopted as its criterion of professionalism. Bernhard Sabel used his “fake paper detector” to screen 5,000 papers in biomedicine (Brainard, 2023). He estimated that about 34 percent of neuroscience papers published in 2020 were either plagiarized or fabricated; in medicine, the figure was 24 percent. Sabel is quoted as saying: “It’s as if somebody tells you that 30% of what you eat is toxic” (Brainard, 2023: p. 568). The method may yield a high false-positive rate, because it singles out indicators such as authors’ use of private, non-institutional email addresses; but even allowing for that, the situation would not be substantially altered. As Brainard writes:

Journals are awash in a rising tide of scientific manuscripts from paper mills—secretive businesses that allow researchers to pad their publication records by paying for fake papers or undeserved authorship. (Brainard, 2023: p. 568)
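
Sabel’s actual detector is not public, and Brainard (2023) describes only some of its indicators; but the email-address signal mentioned above is easy to picture. The following is a deliberately crude, hypothetical sketch of such a red-flag screen; the domain hints are our illustrative guesses, not Sabel’s rule set:

```python
INSTITUTIONAL_HINTS = (".edu", ".ac.", ".gov", "university", "institute")

def email_red_flag(author_emails):
    """Flag a paper when no listed author writes from an address that
    looks institutional. A heuristic only: plenty of honest authors
    use private addresses, hence the high false-positive rate noted
    in the text."""
    def looks_institutional(email):
        domain = email.split("@")[-1].lower()
        return any(hint in domain for hint in INSTITUTIONAL_HINTS)
    return not any(looks_institutional(e) for e in author_emails)

print(email_red_flag(["a.b@gmail.com", "c.d@hotmail.com"]))  # True
print(email_red_flag(["e.f@some-university.edu"]))           # False
```

Any single indicator of this sort is weak on its own; such screens become informative only when several independent red flags coincide.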

Journals are trying various methods to deal with the flood of fake papers; but given the profits involved, and so long as the mania of “publish or perish” rules professional academia (and that doctrine is at present totally entrenched), combating them will remain an uphill battle. Certainly, at the time of writing, all of the problems discussed here are unsolved, and they are probably intensifying each year, with worsening economic conditions and crippling job uncertainty in academia.

Finally, there is the role of corporate money and its impact on professional academic research. As Jureidini and McHenry have noted, this is a major threat to any sort of objectivity in biomedical research:

medicine is largely dominated by a small number of very large pharmaceutical companies that compete for market share, but are effectively united in their efforts to expand that market. (Jureidini & McHenry, 2022)

The result is a relentless drive for profits and a cutting of corners: suppressing negative trial results (or keeping entire trial results secret under “commercial confidentiality”), failing to report adverse events, or covering them up. Universities, which have swallowed the neo-liberal agenda whole, strive themselves to be corporations, so

university departments become instruments of industry: through company control of the research agenda and ghostwriting of medical journal articles and continuing medical education, academics become agents for the promotion of commercial products. (Jureidini & McHenry, 2022)

5. Conclusion

The scope of Ioannidis’s falsity thesis is therefore broader than one would initially suppose; and when falsity by non-replication and academic fraud are added to the other modes of corruption, the result must engender skepticism about all academic research that is based upon empirical data and its analysis. To be sure, there are philosophical concerns that can be, and have been, raised about other disciplines and research based upon qualitative methods, including professional academic Analytic philosophy (as criticized in this journal, for example) and most disciplines in the humanities and the social sciences, such as sociology (Turner, 1986). But that is a long story for another day. What we conclude here is that a skeptical position should be adopted towards all empirical research in the areas in which there is a replication crisis and in which fraud is significant. In other words, there is an epistemological crisis in professional academic work in these areas, one that directly challenges the professionalism model of research.

One should therefore adopt what we will call the principle of corruption: all empirical research in areas in which there is a replication crisis, or in which the incidence of fraud is significant, should be regarded with critical suspicion, by virtue of possible falsity, unreplicability, or fraud, until proven otherwise. This means, at a practical level, that we citizens should be very suspicious of travelling salespersons brandishing PhDs, in their fancy covered wagons, telling us that the freshly brewed and steaming wonder potions, straight from the back of the wagon, will be theoretical magic bullets.

REFERENCES

(Adam, 2019). Adam, D. “The Data Detective.” Nature 571: 462-464.

(Amrhein, et al., 2019). Amrhein, V. et al. “Scientists Rise Up Against Statistical Significance.” Nature 567: 305-307.

(Baker, 2016). Baker, M. “1,500 Scientists Lift the Lid on Reproducibility.” Nature 533: 452-454.

(Baker, 2023). Baker, T. “Stanford President Resigns over Manipulated Research, Will Retract at Least Three Papers.” 19 July. The Stanford Daily. Available online at URL = <https://stanforddaily.com/2023/07/19/stanford-president-resigns-over-manipulated-research-will-retract-at-least-3-papers>.

(Brainard, 2023). Brainard, J. “New Tools Show Promise for Tackling Paper Mills.” Science 380, 6645: 568-569.  

(Button, et al., 2013). Button, K.S. et al. “Power Failure: Why Small Sample Size Undermines the Reliability of Neuroscience.” Nature Reviews Neuroscience 14, 5: 365-376.

(Camerer, et al., 2018). Camerer, C.F. et al. “Evaluating the Replicability of Social Science Experiments in Nature and Science between 2010 and 2015.” Nature Human Behaviour 2: 637-644.

(Doleman et al., 2019). Doleman, B. et al. “Why Most Published Meta-Analysis Findings are False.” Techniques in Coloproctology 23: 925-928.

(Fiedler & Schwarz, 2015). Fiedler, K. & Schwarz, N. “Questionable Research Practices Revisited.” Social Psychological and Personality Science 7: 45-52.

(Goodman & Greenland, 2007). Goodman, S. & Greenland, S. “Why Most Published Research Findings are False: Problems in the Analysis.” PLoS Medicine 4, 4: e165.

(Ioannidis, 2005). Ioannidis, J. “Why Most Published Research Findings are False.” PLoS Medicine 2, 8: e124.

(Ioannidis, 2007). Ioannidis, J. “Why Most Published Research Findings are False: Author’s Reply to Goodman and Greenland.” PLoS Medicine 4, 6: e214.

(John, et al., 2012). John, L.K. et al. “Measuring the Prevalence of Questionable Research Practices with Incentives for Truth Telling.” Psychological Science 23: 524-532.

(Jureidini & McHenry, 2022). Jureidini, J. & McHenry, L. “The Illusion of Evidence Based Medicine.” British Medical Journal 376: o702. Available online at URL = <https://www.bmj.com/content/376/bmj.o702>.

(Lee, 2023). Lee, S.M. “A Weird Research-Misconduct Scandal about Dishonesty Just got Weirder.” 16 July. Chronicle of Higher Education. Available online at URL = <https://www.chronicle.com/article/a-weird-research-misconduct-scandal-about-dishonesty-just-got-weirder>.

(Mannix, 2023). Mannix, L. “‘I Lose Sleep at Night’: Experts Fight to Expose Science Fraud in Australia.” Sydney Morning Herald. 27 June. Available online at URL = <https://www.smh.com.au/national/i-lose-sleep-at-night-experts-fight-to-expose-science-fraud-in-Australia>.

(Open Science Collaboration, 2015). Open Science Collaboration. “Estimating the Reproducibility of Psychological Science.” Science 349, 6251. Available online at URL = <https://www.science.org/doi/10.1126/science.aac4716>.

(Oransky & Marcus, 2023). Oransky, I. & Marcus, A., “There’s Far More Scientific Fraud than Anyone Wants to Admit.” The Guardian. 9 August. Available online at URL = <https://www.theguardian.com/commentisfree/2023/aug/09/scientific-misconduct-retraction-watch>.

(Ritchie, 2020). Ritchie, S. Science Fictions: How Fraud, Bias, Negligence and Hype Undermine the Search for Truth. New York: Metropolitan Books.

(Roper et al., 2022). Roper, K. et al. “Testing the Reproducibility and Robustness of Cancer Biology Literature by Robot.” Journal of the Royal Society Interface 19, 189. Available online at URL =  <https://royalsocietypublishing.org/doi/10.1098/rsif.2021.0821>.

(Simmons, et al., 2011). Simmons, J.P. et al. “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant.” Psychological Science 22, 11: 1359-1366.

(SocialNeuro, 2023). SocialNeuro. “Why Science Fraud Goes Deeper Than the Stanford Scandal.” 3 August. Available online at URL = <https://www.youtube.com/watch?app=desktop&v=2mWwXO_guHk>.

(Stanley, et al., 2018). Stanley, T.D. et al. “What Meta-Analyses Reveal about the Replicability of Psychological Research.” Psychological Bulletin 144: 1325-1346.

(Turner, 1986). Turner, B. “Sociology as an Academic Trade: Some Reflections on Centre and Periphery in the Sociology Market.” Australian and New Zealand Journal of Sociology 22, 2: 272-282.

(Van Noorden, 2023). Van Noorden, R. “How Many Clinical Trials Can’t Be Trusted?” Nature 619: 455-458.

(Witkowski, 2019). Witkowski, T. “Is the Glass Half Empty or Half Full? Latest Results in the Replication Crisis in Psychology.” Skeptical Inquirer 43: 5-6.


