What Do We Learn From Formal Models Of Bad Science?

Abstract
The poor replicability of scientific results in psychology, the biomedical sciences, and other sciences is often explained by appealing to scientists’ incentives for productivity and impact: scientific practices such as publication bias and p-hacking (often called “questionable research practices”) enable scientists to increase their productivity and impact at the cost of the replicability of scientific results. This influential and widely accepted explanatory hypothesis, which I call “the perverse-incentives hypothesis,” is attractive in part because it embodies a familiar explanatory schema, used by philosophers and economists to explain many characteristics of science as well as, more broadly, the characteristics of many other social entities. The perverse-incentives hypothesis has given rise to intriguing and sometimes influential models in philosophy (in particular, Heesen, 2018, in press) and in metascience (in particular, Higginson & Munafò, 2016; Smaldino & McElreath, 2016; Grimes et al., 2018; Tiokhin et al., 2021). In previous work, I have examined the empirical evidence for the perverse-incentives hypothesis and concluded that it was weak. In this presentation, my goal is to examine critically the formal models inspired by the perverse-incentives hypothesis. I will argue that they provide little information about the distal causes of the low replicability of psychology and other scientific disciplines, and that they fail to make a compelling case that low replicability is due to scientific incentives and the reward structure of science. Current models suffer from at least one of three flaws, each of which, I will also argue, is indeed a modeling flaw: (1) they are empirically implausible, building on empirically dubious assumptions; (2) they are transparent, in that their results are baked into the formal set-up; (3) they are ad hoc and lack robustness. Together with the review of the empirical literature on incentives and replicability, this discussion suggests that incentives play only a partial role in the low replicability of some sciences. We should thus look for complementary, and possibly alternative, factors.

References

Grimes, D. R., Bauch, C. T., & Ioannidis, J. P. (2018). Modelling science trustworthiness under publish or perish pressure. Royal Society Open Science, 5(1), 171511.

Heesen, R. (2018). Why the reward structure of science makes reproducibility problems inevitable. The Journal of Philosophy, 115(12), 661-674.

Heesen, R. (in press). Cumulative advantage and the incentive to commit fraud in science. The British Journal for the Philosophy of Science.

Higginson, A. D., & Munafò, M. R. (2016). Current incentives for scientists lead to underpowered studies with erroneous conclusions. PLoS Biology, 14(11), e2000995.

Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open Science, 3(9), 160384.

Tiokhin, L., Yan, M., & Morgan, T. J. (2021). Competition for priority harms the reliability of science, but reforms can help. Nature Human Behaviour, 1-11.
Abstract ID: PSA2022120
Submission Type: Symposium
Affiliation: University of Pittsburgh
