Aetiology of the Replication Crisis
PSA 2022 Symposium
Nov 10, 2022, 01:30 PM - 04:15 PM (America/New_York)

The replication crisis is an ongoing phenomenon, particularly in the social and medical sciences, in which a high frequency of unsuccessful replications has caused deep concern in the fields in question. But how did we get here? What exactly is at issue in this "crisis"? Our symposium is broadly concerned with providing insights into the aetiology of the replication crisis, particularly in psychology. We look at this topic from historical, philosophical, and metascientific perspectives: three talks focus on specific candidate explanations of the replication crisis or low replicability, and one talk examines the emerging field of metascience. We hope that our symposium provides interesting insights into the replication crisis and its aetiology as a topic at the cutting edge of contemporary science and philosophy of science, as well as a platform for further discussion.

Contact: office@philsci.org

The Psychologist's Green Thumb
Symposium, 01:30 PM - 04:15 PM (America/New_York) / 18:30 - 21:15 UTC
The 'psychologist's green thumb' stands for the assertion that an experimenter needs an indeterminate set of subtle skills or "intuitive flair" (Baumeister, 2016) in order to successfully show or replicate an effect. This argument is sometimes brought forward to explain a lack of replicability by authors whose work has failed to replicate in independent replication attempts. On the one hand, this argument, which presents replication failure as a failure on the part of the replicator, seems ad hoc. The 'failed' replications are typically more highly powered, more transparently carried out, and better described than the corresponding original studies. And yet, the original authors argue that the problem lies not at all with the study or the effect but with the replicator's skill. References to flair and a lack of experimenter skill as explanations of replication failures have consistently been quickly rejected by meta-researchers and others connected to the reform movement. On the other hand, there are conditions under which the psychologist's green thumb argument is potentially compelling, as the generation of some scientific evidence does require something like a 'green thumb' (e.g., Kuhn, 1962). Furthermore, it is not clear how we can distinguish between a replication failure that is due to the absence of the effect and one due to lack of skill without knowing whether the replicator is skilled or whether there is an effect (Collins, 1992). The original author, having previously 'found' the effect, may claim to have skills the replicator lacks and thus be able to make this distinction. Moreover, failed replications may result in the explication of hidden auxiliary hypotheses representing tacit, 'green thumb' knowledge or skill, leading to productive advances through "operational analysis" (Feest, 2016). Therefore, the idea that one needs a certain skill set to be a 'successful' experimenter may be convincing and less ad hoc.
In this talk, I will argue that initial biased reasoning towards a desired result is often a more likely cause of low replicability, even in contexts where appeals to 'green thumb' tacit knowledge arguments are conceptually persuasive. I will begin by investigating the conditions under which the psychologist's green thumb is a persuasive concept. I will come to the preliminary conclusion that if experimenter skill takes the form of tacit knowledge that is not or seemingly cannot be shared, then a replicator may appear to lack the psychologist's green thumb. However, it is unclear whether alleged 'green thumb' tacit knowledge amounts to A) experimenter skill to find evidence of a true effect, or B) biased reasoning towards a desired result. Given metascientific evidence regarding publication bias and the widespread use of questionable research practices, B) is likely a better explanation for many replication failures than the psychologist's green thumb. In the context of field-wide replication failures, 'green thumb' tacit knowledge is a red herring at best – what is really at stake here is the articulation of background assumptions. We should strive towards experimental processes that can be and are sufficiently described for reproducibility and in-principle replicability.

References
Baumeister, R. F. (2016). Charting the future of social psychology on stormy seas: Winners, losers, and recommendations. Journal of Experimental Social Psychology, 66, 153–158.
Collins, H. M. (1992). Changing order: Replication and induction in scientific practice. University of Chicago Press.
Feest, U. (2016). The experimenters' regress reconsidered: Replication, tacit knowledge, and the dynamics of knowledge generation. Studies in History and Philosophy of Science Part A, 58, 34–45.
Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.
Presenter: Sophia Crüwell (University of Cambridge)
The Conceptual Origins of Metascience: Fashion, Revolution, or Spin-off?
Ten years into the replication crisis, many scientists are experiencing a deep sense of worry and scepticism. In reaction to this problem, an optimistic wave of researchers has taken the lead, turning their scientific eyes onto science itself, with the aim of making science better. These metascientists have made progress studying causes of the crisis and proposing solutions. They have identified questionable research practices and bad statistics as potential culprits (Simmons et al., 2011; John et al., 2012). They have defended statistical (Cumming, 2012; Lee & Wagenmakers, 2013) and publication reforms (Chambers, 2013; Vazire, 2015) as solutions. Lastly, they are designing technological tools (benefiting from developments in related fields such as data science, machine learning, and complexity science) to support such reforms. The term metascience precedes the replication crisis. However, only now is metascience becoming institutionalised: there is a growing community of practitioners, societies, conferences, and research centres. This institutionalisation and its perils require philosophical attention. It is worth stepping back and asking foundational questions about it. How did metascience emerge? Where does the novelty of metascience lie? How does metascience relate to other fields that take science as their subject matter? This talk focuses on the conceptual origins of metascience. I explore three different models of discipline creation and change, and seek to understand whether they can make sense of the emergence of metascience. (1) First, on the sociological model, the emergence of metascience does not obey merely epistemic needs, and can also be explained as a fashion (e.g., Crane, 1969). (2) By contrast, on the Kuhnian model (1970), metascience can be viewed as a scientific revolution (a term that metascientists sometimes use) that is necessary to move beyond a period of crisis.
(3) Finally, on the spin-off model, similarly to how physics branched out from natural philosophy, metascience could become the natural successor of disciplines such as history and philosophy of science. After examining these models, I suggest that we should challenge the increasingly popular perception of metascience as a fully authoritative field, in particular when it comes to understanding the causes of the replication crisis and finding its solutions.

References
Chambers, C. D. (2013). Registered Reports: A new publishing initiative at Cortex. Cortex, 49, 609–610.
Crane, D. (1969). Fashion in science: Does it exist? Social Problems, 16(4), 433–441.
Cumming, G. (2012). Understanding the new statistics: Effect sizes, confidence intervals, and meta-analysis. Multivariate Applications Book Series. Routledge.
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23, 524–532.
Kuhn, T. S. (1970). The structure of scientific revolutions. University of Chicago Press.
Lee, M. D., & Wagenmakers, E.-J. (2013). Bayesian cognitive modeling: A practical course. Cambridge University Press.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366. doi:10.1177/0956797611417632
Vazire, S. (2015). Editorial. Social Psychological & Personality Science, 7, 3–7.
Presenter: Felipe Romero (University of Groningen)
What is the Replication Crisis a Crisis Of?
While many by now acknowledge that widespread replication failures are indicative of a crisis in psychology, there is less agreement about questions such as (a) what this "replication crisis" is a crisis of, precisely (i.e., whether it is really, at heart, a crisis of replication) and (b) what socio-historical factors have contributed (and continue to contribute) to its existence. One standard answer in the literature is that replication failures are often due to questionable research practices in the original studies (p-hacking, retroactive hypothesis-fitting, etc.) (Simmons et al., 2011), in turn giving rise to hypotheses about the institutional structures (e.g., incentive structures) that may be responsible for such practices. More recently, others have argued that the narrow focus on (the replicability of) experimental effects is itself part of a larger problem, namely a relative sparsity of sustained theoretical work in psychology. In turn, this has given rise to some efforts to develop methodologies of theory-construction (e.g., Fried, 2020; van Rooij & Baggio, 2021). Both of these discussions make valuable contributions to a fuller understanding of the crisis. However, in my talk I will argue that there is a missing link here, having to do with questions about the very subject matter of psychology. What is missing in both types of analyses (i.e., those that focus on flaws in statistical and theoretical procedures) is a discussion of what (kinds of things) can be objects of psychological research, such that (1) we can generate (and perhaps even replicate) experimental effects pertaining to them, and (2) we can try to construct theories about them. In making psychological objects the focal point of my analysis, I follow a recent suggestion by Jill Morawski (2021), who notes that different responses to the replication crisis reveal different underlying notions of the objects under investigation.
Thus, she argues that "some researchers assume objects to be stable and singular while others posit them to be dynamic and complex" (Morawski 2021, 1). After clarifying my understanding of the psychological subject matter, I will come down in favor of an understanding of psychological objects as complex and dynamic, i.e., as multi-track capacities of individuals, which can be moderated by a large number of factors, both person-specific and environmental. With this in mind, we should expect experimental effects to be sensitive to small changes in experimental settings and, thus, to be hard to replicate. My point is not that we should throw up our hands in the face of the inevitability of replication failures but rather that we need to recognize that the context-sensitivity of psychological objects is itself worthy of experimental study and that replication failures can provide valuable insights in this regard (see also Feest, in press). In making this point, I am pushing for a revival of more "ecological" approaches to psychology (as was present, for example, in early 20th-century functionalism). In this vein, I will trace the current crisis, in part, to (i) a lack of attention to psychological objects in general and (ii) a failure to appreciate the complexity and embeddedness of psychological objects. With regard to etiology, this analysis suggests two questions: first, why did parts of psychology get so fixated on effects as their objects, and second, why did parts of psychology get so fixated on cognitive systems in isolation from their environments? I will provide sketches of some historical answers to these questions.

References
Feest, U. (in press). Data quality, experimental artifacts, and the reactivity of the psychological subject matter. European Journal for the Philosophy of Science.
Fried, E. I. (2020, February 7). Lack of theory building and testing impedes progress in the factor and network literature. https://doi.org/10.31234/osf.io/zg84s
Morawski, J. (2021). How to true psychology's objects. Review of General Psychology. https://doi.org/10.1177/10892680211046518
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366.
van Rooij, I., & Baggio, G. (2021). Theory before the test: How to build high-verisimilitude explanatory theories in psychological science. Perspectives on Psychological Science. https://journals.sagepub.com/doi/full/10.1177/1745691620970604
Presenter: Uljana Feest (Leibniz Universität Hannover)
What Do We Learn From Formal Models of Bad Science?
The poor replicability of scientific results in psychology, the biomedical sciences, and other sciences is often explained by appealing to scientists' incentives for productivity and impact: scientific practices such as publication bias and p-hacking (often called "questionable research practices") enable scientists to increase their productivity and impact at the cost of the replicability of scientific results. This influential and widely accepted explanatory hypothesis, which I call "the perverse-incentives hypothesis," is attractive in part because it embodies a familiar explanatory schema, used by philosophers and economists to explain many characteristics of science as well as, more broadly, the characteristics of many other social entities. The perverse-incentives hypothesis has given rise to intriguing and sometimes influential models in philosophy (in particular, Heesen, 2018, in press) and in metascience (in particular, Higginson & Munafò, 2016; Smaldino & McElreath, 2016; Grimes et al., 2018; and Tiokhin et al., 2021). In previous work, I have examined the empirical evidence for the perverse-incentives hypothesis and concluded it was weak. In this presentation, my goal is to examine the formal models inspired by the perverse-incentives hypothesis critically. I will argue that they provide little information about the distal causes of the low replicability of psychology and other scientific disciplines, and that they fail to make a compelling case that low replicability is due to scientific incentives and the reward structure of science. Current models suffer from one of three flaws (I will also argue that (1) to (3) are indeed modeling flaws): (1) They are empirically implausible, building on empirically dubious assumptions. (2) They are transparent: the results are transparently baked into the formal set-up. (3) They are ad hoc and lack robustness.
Together with the review of the empirical literature on incentives and replicability, this discussion suggests that incentives play only a partial role in the low replicability of some sciences. We should thus look for complementary, and possibly alternative, factors.

References
Grimes, D. R., Bauch, C. T., & Ioannidis, J. P. (2018). Modelling science trustworthiness under publish or perish pressure. Royal Society Open Science, 5(1), 171511.
Heesen, R. (2018). Why the reward structure of science makes reproducibility problems inevitable. The Journal of Philosophy, 115(12), 661–674.
Heesen, R. (in press). Cumulative advantage and the incentive to commit fraud in science. The British Journal for the Philosophy of Science.
Higginson, A. D., & Munafò, M. R. (2016). Current incentives for scientists lead to underpowered studies with erroneous conclusions. PLoS Biology, 14(11), e2000995.
Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open Science, 3(9), 160384.
Tiokhin, L., Yan, M., & Morgan, T. J. (2021). Competition for priority harms the reliability of science, but reforms can help. Nature Human Behaviour, 1–11.
Presenter: Edouard Machery (University of Pittsburgh)