Day 1, Nov 10, 2022 | |||
08:00AM - 06:00PM Kings Terrace | Nursing Room | ||
08:00AM - 06:00PM Kings Plaza | Childcare Room | ||
08:30AM - 10:00AM Duquesne | ISHPSSB Session: Rethinking the conceptual foundations of lineages in the light of contemporary biology Speakers
Adrian Stencel
Gaëlle Pontarotti, IHPST
Sophie Juliane Veigl, University Of Vienna
Matt Haber, University Of Utah
Robert Kok, University Of Utah
Javier Suárez, Jagiellonian University In Krakow
Moderators
Javier Suárez, Jagiellonian University In Krakow International Society for the History, Philosophy, and Social Studies of Biology Session. This symposium will explore the conceptual foundations of biological lineages in the light of contemporary biology. In particular, we will study the extent to which phenomena like genealogical discordance, incomplete lineage sorting, developmental trait evolution, small RNA inheritance, extended inheritance, horizontal gene transfer, and symbiosis relationships challenge the idea that lineages ought to be conceived monistically (i.e., as a single concept). The different parts of this session will explore various consequences of adopting this stance. That includes analysing epistemological and ontological challenges associated with multiple discordant lineages, questioning the very necessity of the concept of lineages for conceiving of biological reproduction, and suggesting an array of new perspectives for thinking about inheritance, reproduction and biological evolution. | ||
08:30AM - 10:00AM Sterlings 1 | SRPoiSE Session: Engaging science with philosophy: Case studies, empirical data, and reflections on collaborations between philosophers and scientists Speakers
Sara Doody
Luis Favela, Speaker, University Of Central Florida
Carol Cleland
Jackie Sullivan, Associate Professor, The University Of Western Ontario
Katie Plaisance, Associate Professor, University Of Waterloo
Moderators
Carla Fehr The Consortium for Socially Relevant Philosophy of/in Science and Engineering Session. Recent years have seen increasing engagement between philosophers of science and researchers in science, technology, engineering, and mathematics (STEM). Several philosophers have explicitly demonstrated the philosophical, scientific, and social benefits of such engagement. However, most philosophers of science are not trained how to collaborate with scientists or engineers. In other words, we currently lack in-depth, actionable, and transferable knowledge for making these collaborations work. This session helps to address this gap by offering case studies and reflections from philosophers "in the trenches" with STEM researchers, as well as presenting new empirical data on scientists' and engineers' experiences collaborating with philosophers. | ||
08:30AM - 10:00AM Smithfield | HOPOS Session: Constitutive Principles in Scientific Theory and Practice Speakers
Michele Luchetti
David Stump, University Of San Francisco
Flavia Padovani, Presenter, Drexel University
Moderators
David Stump, University Of San Francisco Recent literature in philosophy of science has emphasized the idea that some principles play a foundational, constitutive role within the scientific framework in which they operate, providing its conditions of possibility. Following Reichenbach's original interpretation, constitutive principles are regarded as contingent, cognitive principles that must be assumed as preconditions of empirical statements, allowing for the coordination of formal structures with their empirical correlates. This idea of constitutive principles and the connected notion of "coordination" has ramified in interesting ways into different areas of the philosophy of science, especially, but not only, in discussions related to measurement and representation (e.g., van Fraassen 2008). This session explores the idea of constitutive principles in general, starting from its origins, and also by bridging discussions on cognitive constitution with recent developments in the mechanistic literature on phenomena reconstitution. David J. Stump, University of San Francisco - What are Constitutive Principles and What Should They Be? Michele Luchetti, MPIWG and Flavia Padovani, Drexel University - Constituting Phenomena: Cognitive vs Mechanistic Constitution? | ||
08:30AM - 10:00AM Fort Pitt | SPP Session: The metaphysics of computation Speakers
L. A. Paul
Josh Tenenbaum
Jonathan Schaffer
Brian Scholl
Moderators
Frances Egan, Rutgers University The premise (and promise) of cognitive science is that we will come to understand ourselves better by integrating the insights and contributions from multiple fields of inquiry. This interdisciplinary project has been especially vibrant when it has explored the intersection of philosophy and psychology (for example when work in ethics integrates empirical work from moral psychology, or when work in the philosophy of mind integrates neuroscientific studies of consciousness). But cognitive science has interacted far less with metaphysics - the philosophical exploration of the fundamental nature of reality. This may seem surprising, since there has been a great deal of fascinating empirical research on the mental representations and cognitive processes involved in such topics. Accordingly, this panel will attempt to bridge this gap, with a special focus on the metaphysical reality of higher-level computational explanation, exploring different levels of analysis and ontology involving computation. The interest in computation in particular derives from David Marr's view, deeply influential in contemporary computational research, that any information processing system can be analyzed at three levels: (1) the computational problem the system is solving; (2) the algorithm the system uses to solve that problem; and (3) how that algorithm is implemented in the physical hardware of the system. But what is (1), the computational level of analysis? And how is it related to (2) and (3), the algorithmic level and the physical hardware of the system? We see parallels here to classic debates about levels of explanation and ontology in philosophy of science, as well as to work in the metaphysics and science of consciousness and mind. | ||
08:30AM - 10:00AM Benedum | IPMR Session: Treatment evaluation: new perspectives from the bench to bedside and beyond Speakers
Simon Okholm, University Of Bordeaux
Anne-Marie Gagné-Julien, Postdoctoral Fellow, McGill University
Jacob Stegenga, University Of Cambridge
Hamed Tabatabaei Ghomi, University Of Cambridge
Julian Reiss, Johannes Kepler University Linz
Moderators
Simon Okholm, University Of Bordeaux International Philosophy of Medicine Roundtable Session. Treatment, medical intervention, and therapy all refer to remedies to target health problems, and they are at the center of attention of many biomedical subdisciplines, from chemistry to public health, from pharmacology to psychiatry. Philosophers of medicine have explored many important issues related to treatment effectiveness, its personalization, the reducibility of complex health problems to simple interventions, and the implications (i.e., stigmatizing, pathologizing) of medicalizing, via treatments, conditions that were once thought not to lie within medicine's realm. Yet, many other epistemological, clinical, societal, and political issues remain largely unexplored. To stimulate new directions of research, this symposium brings together a diverse group of philosophers, centered around the evaluation of treatments from the bench to the bedside, and beyond: How does a 'new' treatment emerge from basic research in science? How are we to evaluate a treatment's causal effects from a clinician's point of view and in relationship to other types of treatment that target the same health problem, but from a different disciplinary perspective? In the broader sociopolitical context, should we institutionally regulate pharmaceutical treatments to be tailored around patient autonomy or paternalism? | ||
08:30AM - 10:00AM Birmingham | INEM Session: Towards a methodology of validation in economics Speakers
Maria Jiménez-Buedo, Universidad Nacional De Educación A Distancia
Alessio Moneta, Sant'Anna School Of Advanced Studies, Pisa
Sebastiaan Tieleman, Speaker/Session Organizer, Utrecht University
Donal Khosrowi, Leibniz University Hannover
Moderators
Sebastiaan Tieleman, Speaker/Session Organizer, Utrecht University International Network for Economic Method Session. Validation is defined as the assessment of the ability of a particular epistemic tool to provide a correct answer to a particular question. Epistemic tools in economics include various types of models and experiments. While for the validation of econometric models there exists a substantial literature, a systematic discussion of the validation of simulations and agent-based models is still lacking. Similarly, there exists a rich literature on the internal validity of experiments, but discussions on a systematic approach towards external validity are sparse. In practice, a multitude of strategies, techniques and methods are used to establish validity. A discussion on how these accounts overlap and differ could contribute to a more general validation methodology. A more general validation methodology leads to a more fundamental understanding of science in practice. The purpose of this session is to bring methodological studies of different epistemic tools together to discuss these differences and similarities among the various validation strategies. The papers to be presented are in line with this purpose. The first three focus, respectively, on validating macroeconomic models, economic experiments, and agent-based models. Building on this, the fourth paper will present a more general framework of validation in economics. | ||
08:30AM - 10:00AM Sterlings 3 | ISPC Session: New trends in the philosophy of chemistry Speakers
Hernán Accorinti
Karoliina Pulkkinen
Eric Scerri, University Of California Los Angeles
Juan Camilo Martínez González, National Council Of Scientific And Technological Research
Moderators
Eric Scerri, University Of California Los Angeles International Society for the Philosophy of Chemistry Session: It is generally accepted that the philosophy of science of the twentieth century was modeled in the image of theoretical physics, a perspective that influenced the way classic topics such as realism, reduction, explanation, and modeling have been approached. More recently the philosophy of biology began to challenge this situation. Even more recently, namely in the 1990s, the emergence of the philosophy of chemistry has enhanced this challenge and has widened the approach to those topics. The consideration of chemical theories and practices reveals that the traditional conception of science was partial and limited. An appeal to chemistry provides valuable new perspectives to our attempts to understand the nature of science. | ||
09:00AM - 11:45AM Sterlings 2 | Pursuing Local Public Engagement as a Philosopher of Science Speakers
Alison Wylie, Presidential Address, Symposium Chair, University Of British Columbia
Kristen Intemann, Presenter, Montana State University
Heather Douglas, Presenter, Michigan State University
Evelyn Brister, Rochester Institute Of Technology
Moderators
Melissa Jacquart, University Of Cincinnati
Angela Potochnik, University Of Cincinnati Presented by Angela Potochnik and Melissa Jacquart from the University of Cincinnati Center for Public Engagement with Science (PEWS). 9:00am Panel discussion: Case studies of local public engagement. Panelists: Evelyn Brister, Heather Douglas, Kristen Intemann, and Alison Wylie. 10:15am Guided exercise in developing an outreach project | ||
10:00AM - 10:15AM | Coffee Break | ||
10:15AM - 11:45AM Sterlings 1 | MAP: Indigenous and Non-Western approaches to Philosophy of Science Speakers
Alessandro Ramón Moscarítolo Palacio, Hamilton College
Federica Bocchi, Presenter, Boston University MAP International proposes a session on Indigenous and non-Western philosophy of science. This session invited philosophers to share research on this topic broadly and suggested paper topics such as Indigenous land ethics, Traditional Ecological Knowledge (TEK), non-Western philosophy of science fiction, and rights attribution to environmental entities. The session will include papers that discuss the nature of values within science and will highlight the importance and value of non-Western approaches. The session will include a presentation and Q&A with keynote speaker Shelbi Nahwilet Meissner. | ||
10:15AM - 11:45AM Smithfield | &HPS Session: Integrating HPS through P-Narratives and H-Narratives Speakers
Sharon Crasnow
Robert Meunier, Speaker, Universität Zu Lübeck
John Huss, Presenter, The University Of Akron
Mary Morgan, London School Of Economics
Moderators
Lydia Patton, Virginia Tech Committee for Integrated HPS Session. Philosophers of science and of history have long been engaged with shared matters: debating the differences between historical explanation and scientific explanation, understanding the sources of scientists' claims to knowledge, and investigating the development of valid scientific methods. This session is concerned with integrating our treatments of narratives in science. We tell historical or H-narratives about the science/scientists we study, but we also refer to the narratives that the scientists tell themselves while they are working, including narratives they recount about phenomena in their fields. Philosophers may analyse scientists' narratives in philosophical terms (P-narratives) and develop our own narrative accounts of scientific 'progress' or change. This session aims to open up for analysis the relationship between H- and P- narratives in integrated history and philosophy of science. It does so primarily by focussing on the nature of 'research narratives' (the narratives that scientists tell formally or informally about their own research work) and their 'narratives of nature' (the narratives they tell of what happens in the phenomena they are investigating). This useful distinction drawn and labelled by Robert Meunier (2022) offers multiple possibilities for integrating philosophical and historical accounts in our own narratives about those sciences/scientists. | ||
10:15AM - 11:45AM Duquesne | SPSP Session: Beyond Incompleteness: New Perspectives on Fossil Data Speakers
Douglas Erwin, Speaker, Smithsonian National Museum Of Natural History
Adrian Currie, University Of Exeter
Judyth Sassoon, PGR, University Of Exeter
Caitlin Wylie, Speaker, University Of Virginia
Aja Watkins, PhD Candidate, Boston University
Meghan Page
Moderators
Meghan Page Society for the Philosophy of Science in Practice Session. This symposium brings together an interdisciplinary panel to explore novel approaches to the production of knowledge through fossil-based paleontological practices. In contrast to previous philosophical focus on incompleteness of the fossil record, we emphasize that fossils are material objects, prepared and constructed towards paleontological goals. Towards this, Watkins brings recent work in the philosophy of data to bear on fossils, arguing that they should be understood as data models, while Wylie uses her ethnographic research on fossil preparators to apply the relational view of data to the case of fossils. Erwin considers how the kinds of questions paleontologists ask of fossil data have themselves been revised over time. As Currie argues, this suggests paleontology should be understood as a 'fossil-driven' practice, where the availability of, perspectives on, and analyses of fossils fundamentally shape paleontological knowledge production. Taken together, these perspectives both suggest a new picture of how practical activities and goals shape fossil data in particular, and further elucidate our understanding of scientific data more generally. To conclude, Page will offer commentary on the collected talks before leading a panel-style discussion period. | ||
10:15AM - 11:45AM Fort Pitt | AAPP Session: Psychiatric Practice and Philosophy of Psychiatry Speakers
Peter Zachar, Presenter And Session Chair, Auburn University Montgomery
Sarah Arnaud, The University Of Western Ontario
Jonathan Y. Tsou, Professor Of Philosophy, University Of Texas At Dallas
Anne-Marie Gagné-Julien, Postdoctoral Fellow, McGill University
Serife Tekin, Poster Chair, University Of Texas At San Antonio
Moderators
Miriam Solomon, Temple University Association for the Advancement of Philosophy and Psychiatry Session. Recent research in philosophy of psychiatry explores how psychiatric practices and the contexts surrounding psychiatry bear on philosophy of science issues, such as the role of values in science, the classification of natural kinds, and epistemic injustice. This session brings together four papers by members of the Association for the Advancement of Philosophy and Psychiatry (AAPP) that offer philosophical arguments that engage with psychiatric practices and the (social and practical) contexts surrounding psychiatry. An interdisciplinary group of junior and senior scholars, including both philosophers and psychologists, discusses cutting-edge debates that focus on the role of mental health activism in psychiatric research and treatment, patient participation in psychiatric research, and the classification of mental disorders in the DSM. | ||
10:15AM - 11:45AM Birmingham | PoSSRT Session: Social Scientific Models and Political Philosophy Speakers
Kirun Sankaran, UNC Chapel Hill
Alexander Schaefer
Sahar Heydari Fard, The Ohio State University
Moderators
Nadia Ruiz, Postdoctoral Fellow , Stanford University Philosophy of Social Science Roundtable Session. This panel presents a series of talks centered on how social scientific models shape our thinking about political philosophy. Sahar Heydari Fard deploys network models to explore pathways toward emancipatory social change. She finds that intervening on network topology is a promising way forward. Alexander Schaefer argues against thinking about justice in terms of stable equilibrium models. Real social-political systems are sufficiently complex to undermine arguments in favor of stability. Kirun Sankaran relies on Mark Wilson's theory of concepts to explore what gets lost when political philosophers export standard models of agents in decision problems. He highlights the way this has distorted thinking about power. | ||
10:15AM - 11:45AM Sterlings 3 | SMS Session: Exploring Compositional Levels, Explanation and Reduction in the Sciences Speakers
Erica Onnis, Presenter, University Of Turin/RWTH Aachen University
Carl Gillett, Presenter, Northern Illinois University
Ronald Endicott, Presenter, North Carolina State University
Moderators
Kerry Mckenzie, Presenter , University Of California, San Diego Society for the Metaphysics of Science Session. Pluralist approaches defend kinds of models/explanations beyond the causal and mechanistic ones endorsed by many philosophers of science, including models/explanations backed by constitutive/compositional relations between entities in nature. Exciting new philosophical projects consequently arise focused on understanding compositional models/explanations and connected phenomena such as the compositional "levels", "downward causation" and "constraining relationships", or "reduction" and "emergence" that working scientists routinely discuss alongside such models/explanations. There are presently two radically different approaches to the latter topics. Within philosophy of science, the most popular approach is a "Pessimistic" one that doubles-down on the claim that there are only causal or mechanistic models/explanations and seeks to reconstruct, or replace, these scientific phenomena within that framework. In contrast, a minority, but growing, "Optimistic" approach seeks to understand levels, downward causation/constraint, or reduction and emergence, in the sciences against the background of a plural range of models/explanations, including compositional ones. The papers of the symposium work within the minority approach to broaden discussions in philosophy of science by providing Optimistic accounts of compositional levels, notions of downward causation/constraint, and the nature of reduction in the sciences. | ||
10:15AM - 11:45AM Benedum | AAPT-PSA Teaching Hub Speakers
Alexandra Bradner, Presenter, Kenyon College
Amanda Corris, Session Chair; Presenter, Wake Forest University
Paul Franco, University Of Washington
Moderators
Amanda Corris, Session Chair; Presenter, Wake Forest University The American Association of Philosophy Teachers (AAPT) focuses on the advancement of the art of teaching philosophy. "Teaching Hubs" introduce learner-centered teaching discussions and resources to a wide audience of philosophy instructors. This session seeks to bring AAPT-style pedagogy to the teaching of philosophy of science. The AAPT-PSA Teaching Hub is a series of interactive workshops designed specifically for philosophers of science and created to celebrate teaching within the context of the PSA biennial meetings. Organized by the AAPT, the Teaching Hub aims to offer a range of high-quality and inclusive development opportunities that address the teaching of philosophy of science at pre-college through to the graduate school level. This session addresses the following themes in the design of philosophy of science courses: (1) incorporating philosophy of science topics in introductory philosophy courses that fulfill general education requirements, teaching scientific uncertainty and how scientists communicate with the public, and (2) redesigning bioethics courses to meet best practices in Universal Design for Learning and trauma-informed pedagogy (best practices that are transferable to other courses). | ||
12:00 Noon - 01:15PM Virtual Room | Lunch (Interest Groups) Interest Group Lunch - Please note that these lunches are not subsidized by the PSA and do require prior registration to attend. Philosophy of Medicine. Host: Jonathan Fuller. Location: Vallozzi's Pittsburgh. Capacity: 20. Come and talk philosophy of medicine and philosophy of psychiatry! Organized by the International Philosophy of Medicine Roundtable and the new journal Philosophy of Medicine. | ||
01:30PM - 04:15PM Benedum | Race: Scientific Methodology and Social Impact Speakers
Kareem Khalifa, Co-Author, UCLA
Daniel Malinsky, Columbia University
Alexander Tolbert, Presenter, University Of Pennsylvania
Quayshawn Spencer, University Of Pennsylvania
Naftali Weinberger, Reviewer, Munich
Liam Kofi Bright, Presenter, London School Of Economics
Michael Kearns, University Of Pennsylvania
Aaron Roth, University Of Pennsylvania
Moderators
Kareem Khalifa, Co-Author, UCLA Many normative questions about race have been addressed by social and political philosophers. These philosophers use philosophical approaches quite distinct from those found in the philosophy of science. The merits of these approaches notwithstanding, the complexity of social phenomena involving race raises several methodological issues that philosophers of science are well-positioned to address. For instance, the causal status of race has wide-reaching implications for which interventions can be pursued to mitigate racial injustice. Drawing from methodological discussions about causal modeling, statistical testing, and machine learning, this symposium highlights how philosophers of science can contribute more substantially to these normative issues. Race-based policymaking, police discrimination, algorithmic fairness, and racial disparities in healthcare are the chief issues that are discussed. Against Racial Monism 01:30PM - 04:15PM
Presented by :
Quayshawn Spencer, University Of Pennsylvania Recent work in the metaphysics of race that’s focused on the nature and reality of race as understood in the dominant race talk of current American English speakers—hereafter US race talk—has produced three main categories of race theories. Biological anti-realists—like Appiah (1992), Blum (2002), and Glasgow (2009)—have argued that, in US race talk, race is an unreal biological entity. Biological realists—like Outlaw (1996), Levin (2002), Spencer (2014), and Hardimon (2017)—have argued that, in US race talk, race is a real biological entity. Non-biological realists—like Haslanger (2012), Taylor (2013), and Ásta (2017)—have argued that, in US race talk, race is a real non-biological entity. However, after decades of arguing, metaphysicians of race haven’t yet developed a US race theory that’s close to being empirically adequate (in van Fraassen’s sense). While it’s admittedly very difficult for any theory to obtain empirical adequacy, the extent of the empirical inadequacies among the US race theories so far proposed suggests that there’s a systematic error in our metaphysical theorizing about race. That error, I submit, is the metametaphysical presupposition that there’s a single essence of race to be found in US race talk, which is a presupposition I’ll call essence monism about race. Essence monism about race is one example of racial monism. In contrast, I’ll argue that there’s a plurality of essences for race in US race talk, which is a view I’ll call essence pluralism about race. After defending my argument and addressing objections, I’ll explore interesting implications of the view, such as a novel perspective on how to address unjust racial disparities in health. Bias Bounty 01:30PM - 04:15PM
Presented by :
Michael Kearns, University Of Pennsylvania
Aaron Roth, University Of Pennsylvania Notions of fair machine learning that seek to control various kinds of error across protected groups generally are cast as constrained optimization problems over a fixed model class. For all such problems, tradeoffs arise: asking for various kinds of technical fairness requires compromising on overall error, and adding more protected groups increases error rates across all groups. Our goal is to “break through” such accuracy-fairness tradeoffs, also known as Pareto frontiers. We develop a simple algorithmic framework that allows us to deploy models and then revise them dynamically when groups are discovered on which the error rate is suboptimal. Protected groups do not need to be specified ahead of time: At any point, if it is discovered that there is some group on which our current model is performing substantially worse than optimally, then there is a simple update operation that improves the error on that group without increasing either overall error, or the error on any previously identified group. We do not restrict the complexity of the groups that can be identified, and they can intersect in arbitrary ways. The key insight that allows us to break through the tradeoff barrier is to dynamically expand the model class as new high error groups are identified. The result is provably fast convergence to a model that cannot be distinguished from the Bayes optimal predictor — at least by the party tasked with finding high error groups. We explore two instantiations of this framework: as a “bias bug bounty” design in which external auditors are invited (and monetarily incentivized) to discover groups on which our current model’s error is suboptimal, and as an algorithmic paradigm in which the discovery of groups on which the error is suboptimal is posed as an optimization problem. In the bias bounty case, when we say that a model cannot be distinguished from Bayes optimal, we mean by any participant in the bounty program. We provide both theoretical analysis and experimental validation. (A schematic sketch of this kind of group-conditional update appears after this session listing.) The Causal Basis for Testing Police Discrimination with Statistics 01:30PM - 04:15PM
Presented by :
Naftali Weinberger, Reviewer, Munich Consider a study indicating that police performing traffic stops in Pittsburgh search minority drivers at a higher rate than non-minority drivers. This result would be insufficient for establishing discrimination against minorities. This is because it is compatible, e.g., with the hypothesis that the police make stops based on observing suspicious activities, and that minority drivers disproportionately engage in such activities. For this reason, legal and empirical studies of discrimination often employ benchmark tests. Such tests involve statistically conditioning on covariates that differentiate the relevant groups in order to determine what the disparity between the group stop-rates would be in the absence of discrimination. As Neil and Winship (2018) note, benchmark tests are fatally undermined by Simpson’s Paradox (Sprenger and Weinberger, 2021). An example of the paradox would be a case in which police stopped minorities and non-minorities at the same rate in Pittsburgh as a whole, but stopped minorities at a higher rate within every single district. Accordingly, statistical claims involving comparisons of relative rates across populations – including the rates invoked in benchmark tests – will not be robust to conditioning on additional covariates. Unfortunately, Neil and Winship’s non-causal discussion of the paradox is woefully inadequate. Presenting a better understanding of its proper interpretation is important not only because the paradox is widely discussed in the empirical discrimination literature, but also because it illuminates the role of causal assumptions in interpreting statistics relevant to discrimination. The first general lesson I will draw from my discussion of the paradox concerns the sense in which discrimination statistics provide evidence for claims about police discrimination. One might be tempted by the position that discovering that police stop non-minorities and minorities at the same rates would count as evidence against discrimination, and that subsequently learning that minorities are stopped at a higher rate within every district would count as countervailing evidence. In contrast, I argue that the statistics being cited provide no evidence for or against discrimination, absent additional substantive assumptions about the variables being modeled. Since Simpson’s paradox reveals that comparisons of relative rates across populations are not robust to conditioning on additional variables, non-statistical assumptions are required to draw any conclusions about discrimination, even tentative ones. The second lesson I draw concerns an underappreciated role of causal assumptions in empirical modeling. Causal models are often advertised as licensing inferences concerning experimental interventions. Additionally, such models can provide a framework for differentiating meaningful from non-meaningful statistical relationships. Given that statistics alone cannot provide evidence for discrimination absent additional substantive assumptions, a further framework is required for representing such assumptions in a general way. I will argue that causal models provide precisely such a framework. On Interpreting Causal Effects of Race 01:30PM - 04:15PM
Presented by :
Daniel Malinsky, Columbia University
Liam Kofi Bright, Presenter, London School Of Economics We approach the debate over “causal effects of race” from a social constructionist perspective. Our first main thesis is that on a broad range of social constructionist views about race, an individual’s race is manipulable, i.e., it is conceptually coherent to posit counterfactuals about a person’s race without risking essentialism or debunked biological thinking. Our second main thesis is that causal effects of race are indirectly relevant to policy. Estimating the causal effects of race may be a starting point for inquiry into the plural mechanisms of racism, i.e., the various pathways by which racial disparities arise. This may inform policies which intervene on the mechanisms themselves. Race, Causation, and Underdetermination 01:30PM - 04:15PM
Presented by :
Kareem Khalifa, Co-Author, UCLA
Alexander Tolbert, Presenter, University Of Pennsylvania Race is frequently treated as an explanatory variable in causal models throughout the social sciences. Yet, there is lively disagreement about the causal status of race. This disagreement arises from three claims that jointly form a paradox: (1) all causes are manipulable; (2) race is a cause; and (3) race is not manipulable. Non-manipulationists resolve this paradox by rejecting (1). On this view, “manipulationism” is too narrow a conception of causation, so we should expand our repertoire of causal concepts such that race, despite being non-manipulable, is nevertheless causal. Causal skeptics about race resolve this paradox by rejecting (2). On this view, race is not a causal variable, in no small part because of its non-manipulability. Finally, manipulationists reject (3), holding that race is causal precisely because it is manipulable. In this paper, we offer a novel position called causal agnosticism about race. Like racial causal skeptics and manipulationists, we hold fast to the claim that all causes are manipulable (1). However, whereas skeptics insist that (2) is false and manipulationists insist that (3) is true, we claim that the social sciences underdetermine the extent to which races are causes or manipulable. We argue for our agnostic position by appeal to the literature on the modeling of causal macrovariables. A causal macro-variable summarizes an underlying finer-structure of a set of microvariables. (For example, a gas’ temperature is a macrovariable with respect to its constituent particles.) If the social sciences provide adequate evidence to accept that race is either a cause or manipulable, then race is either a well-defined macro-variable or there is a “strong signal” for race, where a “strong signal” is a variable distinct from race that nevertheless tracks closely with race. However, no such macro-variables or signals exist in the social scientific models that appeal to race. Furthermore, even if there were a strong signal for race, it does not follow that race is a cause. Consequently, the social sciences fail to provide adequate evidence for the claims that race is a cause and that race is manipulable, i.e., the social sciences underdetermine both (2) and (3). Throughout our discussion, we compare it to the alternatives canvassed above. We conclude by tracing out causal agnosticism’s policy implications. In particular, we argue that any policy intervention suggested by a non-agnostic position about race’s causal status can be reinterpreted in a manner compatible with agnosticism. We conclude from this that the ontological status of race is of marginal policy relevance. | ||
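A schematic illustration of the group-conditional update described in Kearns and Roth's "Bias Bounty" abstract above: a deployed model is patched whenever a group with suboptimal error is found, and a patch is accepted only if it improves error on that group without raising overall error or the error on any previously accepted group. This is an editorial sketch under simplifying assumptions, not the authors' implementation; the class name BiasBountyModel, the method propose_patch, and the use of a held-out validation set are hypothetical, and the incentive design and convergence analysis of the framework are omitted.

import numpy as np

class BiasBountyModel:
    """Start from a deployed base model and patch it group by group.

    A patch is a pair (group_fn, group_model): group_fn maps inputs to a
    boolean mask, group_model supplies alternative predictions on that group.
    """

    def __init__(self, base_model):
        self.base_model = base_model      # callable: X -> predicted labels
        self.patches = []                 # accepted (group_fn, group_model) pairs

    def predict(self, X):
        preds = self.base_model(X)
        for group_fn, group_model in self.patches:
            mask = group_fn(X)            # later patches override earlier ones
            preds = np.where(mask, group_model(X), preds)
        return preds

    def error(self, X, y, mask=None):
        wrong = (self.predict(X) != y)
        return wrong.mean() if mask is None else wrong[mask].mean()

    def propose_patch(self, group_fn, group_model, X_val, y_val):
        """Accept the patch only if it lowers error on the new group without
        raising overall error or the error on any previously accepted group."""
        candidate = BiasBountyModel(self.base_model)
        candidate.patches = self.patches + [(group_fn, group_model)]

        g_mask = group_fn(X_val)
        old_groups = [self.error(X_val, y_val, g(X_val)) for g, _ in self.patches]
        new_groups = [candidate.error(X_val, y_val, g(X_val)) for g, _ in self.patches]

        accept = (candidate.error(X_val, y_val, g_mask) < self.error(X_val, y_val, g_mask)
                  and candidate.error(X_val, y_val) <= self.error(X_val, y_val)
                  and all(n <= o for n, o in zip(new_groups, old_groups)))
        if accept:
            self.patches = candidate.patches
        return accept

In the bounty instantiation described in the abstract, group_fn and group_model would be supplied by an external auditor; in the algorithmic instantiation, finding a high-error group is itself posed as an optimization problem.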
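Weinberger's abstract above ("The Causal Basis for Testing Police Discrimination with Statistics") turns on the arithmetic behind Simpson's Paradox: equal citywide stop rates are consistent with a higher minority stop rate in every district. The following numbers are hypothetical and purely illustrative (they are not drawn from any study), chosen only to show that the pattern is arithmetically possible.

% Hypothetical stop counts (stops/drivers), chosen only to exhibit the reversal
\[
\begin{array}{lccc}
 & \text{District 1} & \text{District 2} & \text{Citywide} \\
\text{Minority drivers} & 10/200 = 5\% & 30/100 = 30\% & 40/300 \approx 13.3\% \\
\text{Non-minority drivers} & 4/100 = 4\% & 36/200 = 18\% & 40/300 \approx 13.3\%
\end{array}
\]

Minority drivers are stopped at a higher rate in each district, yet the citywide rates coincide because most minority driving occurs in the low-stop district; which comparison bears on discrimination is exactly the question that, on the abstract's argument, cannot be settled without causal assumptions.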
01:30PM - 04:15PM Duquesne | Du Châtelet as Philosopher of Physics Speakers
Qiu Lin, Duke University
Fatema Amijee, Assistant Professor Of Philosophy, University Of British Columbia
Katherine Brading, Professor Of Philosophy, Duke University
Andrew Janiak, Professor Of Philosophy, Duke University
Moderators
Evangelian Collings, University Of Pittsburgh In recent years, Du Châtelet's magnum opus, Foundations of Physics (1740 & 1742) has attracted increased attention among philosophers. In this treatise, Du Châtelet made significant contributions to the central foundational issues in philosophy of physics at the time, ranging from Newtonian gravitation, the appropriate role of the Principle of Sufficient Reason (PSR) in physical theorizing, to the nature of space, time, and motion. In this symposium, we aim to subject Du Châtelet's views on some of these classic issues to closer scrutiny and promote research in her philosophy of physics in the Foundations and beyond. The Principle of Sufficient Reason as a Principle of Reasoning in Du Châtelet 01:30PM - 04:15PM
Presented by :
Fatema Amijee, Assistant Professor Of Philosophy, University Of British Columbia Most commentators have assumed that while Émilie Du Châtelet’s Foundations of Physics (1740) is an important and original work that demonstrates her commitment to Leibnizian metaphysics and the Principle of Sufficient Reason (PSR), Du Châtelet herself does not have an original argument for either a commitment to the PSR or its truth. I argue against this widespread assumption by showing that implicit in the Foundations is an argument for a commitment to the PSR from the possibility of scientific reasoning. This argument takes as its starting point our commitment to scientific reasoning, and in particular to abductive reasoning in science. It then shows that the PSR is a presupposition of such reasoning. Thus, insofar as we are committed to abductive—and more generally, scientific—reasoning, we are also committed to the PSR. I show that this argument in Du Châtelet is both original and distinct from any argument for the PSR presented by Leibniz. I further argue that the argument provides a significant insight into the way in which Du Châtelet’s views about substance differ from Leibniz’s. Du Châtelet on absolute and relative motion 01:30PM - 04:15PM
Presented by :
Katherine Brading, Professor Of Philosophy, Duke University In this paper, we argue that Du Châtelet’s account of motion is an important contribution to the history of the absolute versus relative motion debate. The arguments we lay out have two main strands. First, we clarify Du Châtelet’s threefold taxonomy of motion, using Musschenbroek as a useful Newtonian foil and showing that the terminological affinity between the two is only apparent. Then, we assess Du Châtelet’s account in light of the conceptual, epistemological, and ontological challenges posed by Newton to any relational theory of motion. What we find is that, although Du Châtelet does not meet all the challenges to their full extent, her account of motion is adequate for the goal of the Principia: determining the true motions in our planetary system. The philosophical aftermath of Newton’s physics: The case of Émilie Du Châtelet’s Foundations of Physics 01:30PM - 04:15PM
Presented by :
Andrew Janiak, Professor Of Philosophy, Duke University Newton’s startling conclusion in Book III of the Principia that all bodies gravitate defied easy interpretation. Whereas the editor of the Principia’s second edition (1713), Roger Cotes, claimed that gravity is a primary quality, Newton himself was more cautious. He claimed only that all bodies gravitate without saying that gravity is a property of bodies, adding the caveat that he was not ipso facto contending that gravity is essential to matter. In this same period, when Samuel Clarke defended Newton’s view against Leibniz’s famous criticisms, he presented a third, deflationary approach by treating gravity instrumentally. But an instrumentalist approach to gravity seemed to conflict with Newton’s own proclamation in the Principia that “it is enough that gravity really exists.” The tensions amongst these disparate ideas were never resolved. To address these tensions, Émilie Du Châtelet published a work entitled Foundations of Physics (Paris, 1740–42). She argued that neither Cotes’s approach, nor Newton’s, were apt methods of expressing the conclusion of the new theory of gravity. The theory indicates empirically that gravity is a universal force, she argued, but did not support Cotes’s interpretation because there was insufficient evidence to show that gravity is a property of all bodies. Among other things, it was not yet clear whether gravity depends in some way upon a subtle medium like an ether, in which case it might not be a property of bodies. (Similarly, Locke’s famous claim that God may have superadded gravity to bodies is suspect because it presupposes that the theory indicates that gravity is a property.) However, Newton’s alternative approach, which avoids contending that gravity is a property of all bodies or essential to matter, even while proclaiming that it is a universal force, was equally unsatisfactory. For starters, Du Châtelet argues that one must first clarify what one means by the essence of matter, which Newton avoided, and then one must also track the various conceptions of essences in play at the time. The overarching goal was to show that Newton’s abstemious approach toward classic metaphysical issues, often trumpeted as a feature of his physics, conflicted with attempts to interpret his theory of gravity. This episode reflects Du Châtelet’s overarching methodological approach to the “foundations” of physics. Although we do not require metaphysical foundations for physics in the way (e.g.) that Descartes presented in his Principles, whereby the first two laws of nature are purportedly deduced from God’s property of immutability, we also cannot eschew metaphysical questions as Newton attempted to do. Instead, the profound conclusions of the new physics require a foray into classic metaphysical topics like the essence of matter if those conclusions are to be widely understood. Du Châtelet’s approach to these problems is distinctive and deserves scholarly attention today. Du Châtelet on Mechanical Explanation vs. Physical Explanation 01:30PM - 04:15PM
Presented by :
Qiu Lin, Duke University In her second edition of the Foundations of Physics, Du Châtelet advocates a three-fold distinction of explanation: the metaphysical, the mechanical, and the physical. While her use of metaphysical explanation (i.e., explaining via the Principle of Sufficient Reason) has received some attention in the literature, little has been written about the distinction she draws between mechanical and physical explanations, including their demand, scope, and use in physical theorizing. This paper aims to fill this void, arguing that making this distinction is a crucial piece of Du Châtelet’s scientific method. According to Du Châtelet, a mechanical explanation is one that ‘explains a phenomenon by the shape, size, situation, and so on, of parts’, whereas a physical explanation is one that ‘uses physical qualities to explain (such as elasticity) … without searching whether the mechanical cause of these qualities is known or not’ (Du Châtelet 1742, 181). I will analyze Du Châtelet’s views regarding (1) What counts as a good physical explanation, (2) Why a mechanical explanation is not necessary for answering most research questions in physics, and (3) Why a good physical explanation, instead, is sufficient for answering those questions. In so doing, I argue that Du Châtelet is proposing an independent criterion of what counts as a good explanation in physics: on the one hand, it frees physicists from the methodological constraint imposed by mechanical philosophy, which was still an influential school of thought at her time; on the other, it replaces this constraint with the requirements of attention to empirical evidence, for that alone determines which physical qualities are apt to serve as good explanans. | ||
01:30PM - 04:15PM Forbes | Reconceiving Realism Speakers
Hasok Chang, University Of Cambridge
Mazviita Chirimuuta, Reviewer, Edinburgh
Michela Massimi, The University Of Edinburgh
Peter Vickers, Speaker, Durham University, UK
Moderators
Dana Tulodziecki, Purdue University The debate on realism concerning science is one of the oldest and perennial topics in philosophy of science. Yet the debate has increasingly reached a stand-off with often diminished returns. In recent years there has been renewed attention to realism with an eye to re-assessing the nature of the commitment involved and associated assumptions. This symposium brings together the state of the art in this recent trend with an array of philosophical views that have been recently elaborated to address some of the shortcomings of traditional scientific realism: activist realism, perspectival realism, haptic realism and future-proof facts. Our aim is threefold: (1) to offer motivations for reconceiving realism in particular directions; (2) to highlight four different brands of reconceived realism in dialogue: what they share and where they part ways; and (3), most importantly, to spell out the rich rewards that this exercise of reconceiving realism brings along with it, in terms of how to think about truth, reality, pluralism and the history of science. Identifying Future-Proof Science 01:30PM - 04:15PM
Presented by :
Peter Vickers, Speaker, Durham University, UK In Identifying Future-Proof Science (OUP 2022) I argue that we can confidently identify many scientific claims that are future-proof: they will last forever (so long as science continues). Examples include the evolution of human beings from fish, the fact that the Milky Way is a spiral galaxy, and “oxygen atoms are heavier than hydrogen atoms”. Whilst claims about truth in science are usually associated with scientific realism, it is crucial to note that most anti-realists will also agree with such examples, whether on the grounds that they concern in-principle observables, on the grounds that we are rightly confident that there are no plausible unconceived alternatives, or on other grounds. But how should we go about identifying future-proof science? This appears to be a new question for philosophers of science, and not an unimportant one. It unites traditional ‘realists’ and ‘anti-realists’, usefully demonstrating a point of consensus amongst philosophers of science: we all agree that there are many established scientific facts, including facts about things that have never been observed. Even philosophers who stress that “history shows that scientific truths are perishable” (Oreskes 2019, Why Trust Science?) think that there are many scientific truths that are here to stay, such as ‘smoking causes cancer’ and human-caused climate change. Kyle Stanford, for example, believes in many ‘established scientific facts’, including our knowledge of fossil origins (Stanford 2011). Thus I argue that philosophers should never have presented themselves as polarised on two sides of a ‘science and truth’ debate. The labels ‘realism’ and ‘antirealism’ are mostly unhelpful, and should be left behind. The interesting question concerns how we identify the scientific facts. It is argued that the best way to identify future-proof science is to avoid any attempt to analyse the relevant first-order scientific evidence (novel predictive success, unifying explanations, etc.), instead focusing purely on second-order evidence. Specifically, a scientific claim is future-proof when the relevant scientific community is large, international, and diverse, and at least 95% of that community would describe the claim as a ‘scientific fact’. In the entire history of science, no claim meeting these criteria has ever been overturned, despite enormous opportunity for that to happen (were it ever going to happen). There are important consequences for school education: If this is indeed the way to identify future-proof science, then the vast majority of school-leavers will have hardly any of the requisite skills, since school systems around the world completely neglect to teach children how to judge the second-order evidence for scientific claims. Perspectival realism: historical naturalism and situated knowledge 01:30PM - 04:15PM
Presented by :
Michela Massimi, The University Of Edinburgh In this talk, I attend to three main tasks. First, I locate the main rationale for my perspectival realism in what I call historical naturalism (drawing on Massimi 2022, Ch 8). I argue that our realist commitments originate from a thoroughgoing naturalistic stance. However, by contrast with classical ways in which naturalism has been portrayed in the literature (starting from Quine 1968), I point out the need to enlarge naturalism to encompass our scientific history as a way of better understanding how we came to carve the world with the kinds we know and love. Our natural kinds, I argue, are the product of our scientific history that is redefining the very idea of what ‘naturalness’ means. My second task is to give one major highlight of perspectival realism: how to rethink the ontology of natural kinds in light of historical naturalism. Here I shall bring my Neurathian approach to natural kinds in dialogue with the approaches of my co-symposiasts by highlighting relevant affinities with Chang’s view of natural kinds born out of epistemic iterations, Chirimuuta’s haptic realism with the notion of ‘ideal patterns’ and Vickers’ future-proof facts and associated commitment to predicting novel phenomena. I defend an ontology of phenomena and explain how I see natural kinds as groupings of phenomena. This way of rethinking natural kinds has the advantage of avoiding the classic problems about reference discontinuity / conceptual change on the one hand, and ‘eternal natural kinds’ on the other. My third and last task is to articulate the reasons why I see such a shift in realist commitments as crucial for delivering a pluralist and inclusive view of scientific knowledge production, where past theories and past achievements are not just either celebrated in the hagiography of the winners or thrown in the dustbin of history. Instead, they are an intrinsic part of how we reliably came to know the world as being this way. A focus on historically and culturally situated scientific perspectives, combined with an inclusive notion of ‘epistemic communities’, allows one to reassess scientific knowledge production not as the repository of an elite community of scientists. Perspectival realism celebrates the social and collaborative nature of scientific knowledge and embeds a plurality of situated epistemic communities in the very fabric of scientific knowledge production. The Question of Realism is a Matter of Interpretation 01:30PM - 04:15PM
Presented by :
Mazviita Chirimuuta, Reviewer, Edinburgh The way that I seek to redirect the realism debate is away from the question of the reality of unobservable posits of scientific theories and models, and towards the question of whether those theories and models should be interpreted realistically. This makes it easier to include within the realism debate sciences of relatively large and observable items, as are many branches of biology. But it is not a simple trade of the ontological question of realism for a semantic one. My contribution will focus on computational neuroscience. In this discipline, models are normally interpreted as representing computations actually performed by parts of the brain. Semantically, this interpretation is literal and realistic. Ontologically, it supposes that the structure represented mathematically as a computation (i.e. a series of state transitions) is there in the brain processes. I call this supposition of a structural similarity (homomorphism) between model and target, formal realism. This stands in contrast to an alternative way to interpret the model which I call haptic realism (Chirimuuta 2016). The view here is that whatever processes exist in the brain are vastly more complicated than the structures represented in the computational models, and that the aim of modelling is to achieve an acceptable simplification of those processes. Thus, the success of the research is more a matter of structuring than of discovering pre-existing structures. Ultimately, the realism debate is motivated by curiosity about what it is that the best scientific representations have to tell us about the world: is this thing really as presented in the model? Thus, I argue that the contrast between formal realism vs. haptic realism is a good template for framing the realism debate when discussing the implications of sciences of extremely complex macro and mesoscopic systems, such as the nervous system, and generalising to elsewhere in biology, including ecology, as well as the physical sciences of large complex systems such as climate and geological formations. Haptic realism does not suppose that the structures given in scientific models are fully constructed or mind-dependent, but that there is an ineliminable human component in all scientific representations, due to the fact that they can never depict the full complexity of their target systems and as such are the result of human decisions about how to simplify. The acceptability of certain simplifications (abstractions and idealisations) over others is due to a number of factors, including predictive accuracy, mathematical/computational tractability, and the envisaged technological applications of the model. Formal realism supposes that scientific representations are, at their best, a clear-view window onto mind-independent nature, whereas haptic realism maintains that this is an unrealistic way to describe the practices and achievements of science. Realism for Realistic People 01:30PM - 04:15PM
Presented by :
Hasok Chang, University Of Cambridge My re-conception of realism is based on new pragmatist notions of knowledge, truth and reality, which are elaborated in the forthcoming book Realism for Realistic People. These notions are designed for better understanding and facilitation of scientific and quotidian practices. I focus on “active knowledge,” which consists in knowing how to do things. Active knowledge both enables and utilizes propositional knowledge. The quality of active knowledge consists in the “operational coherence” of epistemic activities. Operational coherence is about designing our activities so that they make sense as plans for achieving our aims, and it is a notion deeply connected with the interpretive dimensions of Chirimuuta’s haptic realism. I re-conceive the very notions of reality and truth in terms of operational coherence, thereby rendering them as concepts operative in actual practices: roughly speaking, true propositions facilitate operationally coherent activities, which deal in real entities. Empirical truth is not a matter of correspondence to an inaccessible sort of mind-independent reality; the correspondence achieved in real practices is among accessible realities that are “mind-framed” yet not “mind-controlled.” My main interest in reconceiving realism is to turn it into an operational doctrine that we can actually put into practice, and in keeping with the best scientific practices. I take realism in and about science as “activist realism”: a commitment to do whatever we can in order to improve knowledge. And I take this in a realistic spirit, focusing on the search for what we can actually do in a process of continual learning. There are some implications of activist realism that would be contrary to the instincts of standard scientific realists, and I will highlight three of them in this presentation. (1) Following the imperative of progress inherent in activist realism naturally results in a plurality of systems of practice, each with its real entities and its true propositions. The link with Massimi’s perspectivism is evident, including the notion of natural kinds that she develops in this symposium. (2) Activist realism eliminates the unproductive opposition between realism and empiricism. The kind of naturalism advanced by Massimi also connects naturally with both realism (in my sense) and empiricism (in the usual sense). (3) The activist stance allows us to condemn those who work against empirical learning, while not claiming for ourselves supernatural access to “external reality.” The drive toward continual empirical learning may place activist realism into an interesting tension with Vickers’ preference for “future-proof” facts: should realists be eager to engage in new learning that may overturn the most secure-seeming facts of today? | ||
01:30PM - 04:15PM Sterlings 2 | The Philosophy of Science Journalism Speakers
Mikkel Gerken, University Of Southern Denmark
Vanessa Schipani, University Of Pennsylvania
Chris Haufe, Case Western Reserve University
Matthew Slater, Bucknell University
Moderators
Joanna Huxster, Moderator, Eckerd College Science journalism is an under-examined topic in our field. This is surprising, given the many points of connection between the presumptive goals of science journalism and topics of perennial interest in philosophy of science (discussed in detail in the long description below). The primary goal of the proposed symposium is to open discussion on these connections in order to promote philosophical research in this area. To this end, Mikkel Gerken, Vanessa Schipani, Chris Haufe and Matthew Slater will present on topics including conflict between the norms of accuracy and harm in science journalism and the practice of reporting science in a manner that appeals to the social values of the public. To ensure our philosophy relates honestly to journalistic practice, we've invited practicing journalists to participate in the symposium. Having journalists present also serves the symposium's secondary goal: To form a stronger partnership with journalists in their communication of science to the public. In addition to three presentations by philosophers, our program will include a panel and Q&A with the journalists on meaningful avenues of connection between philosophers and journalists. Can Reporting Accurate Scientific Information Harm the Public? 01:30PM - 04:15PM
Presented by :
Vanessa Schipani, University Of Pennsylvania Journalistic practice is guided by norms that receive scant attention from philosophers, especially in the context of science reporting. This presentation examines how a conflict between two norms manifests in science journalism due to the phenomenon of science denialism. As outlined by the Society of Professional Journalists’ (SPJ) Code of Ethics, one norm tells reporters to maximize the accuracy of their reporting. Another norm tells them to minimize the harm of their reporting. In important cases, I argue, science journalists can’t satisfy both norms simultaneously. I then investigate an option to resolve this conflict, which I argue ultimately fails. Inspired by the early days of the coronavirus pandemic, I illustrate this norm conflict using the example of reporting scientific disagreement on the efficacy of masks in preventing the spread of a deadly virus. As empirical research by Gustafson and Rice (2019) suggests, communicating scientific disagreement can lead to the public’s rejection of scientific findings and, consequently, the maintenance of status quo behaviors. This can cause harm when behavioral change is needed to prevent it. I argue that if journalists report the science on masks in a maximally accurate way, then they would report that, while most evidence suggests masks work, some evidence suggests they don’t work and meager evidence suggests they could promote infection. However, reporting this scientific disagreement may cause the public harm because it could lead some to deny the science, not wear masks and increase their chances of catching and spreading the virus to others. Alternatively, journalists could merely report the leading hypothesis that masks work, thereby avoiding communicating scientific disagreement and causing the harm outlined above. But then they wouldn’t be reporting the science in a maximally accurate manner because they would be implying the science entails more consensus than it really does. One might argue that the accuracy norm should take precedence over the harm norm in such cases of conflict because “there is nothing more important” than this norm to journalism, as Fred Brown of the SPJ’s Ethics Committee notes. However, this resolution misses the point of the harm norm, I argue: As the SPJ’s Code notes, the harm norm guides journalists to “balance the public’s need for information against potential harm.” Thus, there are situations in which journalists – non-science journalists in particular – decide to sacrifice some accuracy to prevent harm. An example of this is reporting on suicide: Journalists intentionally leave the details of suicides vague (and, thus, don’t maximize accuracy), because research shows that providing details can lead to copycats. However, there are no cases, to my knowledge, in which science journalists have sacrificed some accuracy in their reporting specifically to prevent harm to the public. Ultimately, the goal of this presentation is to raise and begin to address the following questions: Why shouldn’t the harm norm apply in the pandemic case I outline above if it does apply in the suicide case? Or more fundamentally, why should (or shouldn’t) communicating scientific information be exempt from the harm norm? Scientific Values and Value-Based Science Reporting 01:30PM - 04:15PM
Presented by :
Mikkel Gerken, University Of Southern Denmark I will critically evaluate a science communication strategy – ‘Value-Based Reporting’ – which researchers in science communication are increasingly recommending to science journalists. According to Value-Based Reporting, science reporters should, whenever feasible, report a scientific hypothesis in a manner that appeals to the social values of the intended recipients (Dixon et al. 2017; Kahan et al. 2011). The strategy is motivated by empirical research which suggests that identity-protective reasoning is a central reason for laypersons’ selective skepticism of science communication regarding politically polarizing issues such as climate, vaccines, gun control, etc. (Kahan 2013; Nisbet et al. 2015; Frimer et al. 2017). Science journalists may implement the generic Value-Based Reporting strategy in different ways. One strand of the strategy is labeled identity affirmation and consists in showing the target recipient group “that the information in fact supports or is consistent with a conclusion that affirms their cultural values” (Kahan et al. 2011: 169). A different strand is labeled narrative framing and consists in “crafting messages to evoke narrative templates that are culturally congenial to target audiences” (Kahan et al. 2011: 170). I argue that while the empirical reasons for adopting Value-Based Reporting are strong ones, this strategy faces serious challenges in delivering on a number of desiderata for science communication. Given that science communication is a part of the scientific enterprise, broadly construed, these desiderata reflect core scientific values. In consequence, Value-Based Reporting is in tension with core scientific values. On the basis of the negative sub-conclusion, I consider an alternative positive science communication strategy – Justification Reporting – according to which science reporters should, whenever feasible, report appropriate aspects of the nature and strength of scientific justification, or lack thereof, for a reported scientific hypothesis (Gerken 2020). I conclude by arguing that although Value-Based Reporting and Justification Reporting may initially appear to be incompatible competitors, there are interesting ways of integrating them. In particular, I argue that such an integration may preserve the key advantages of Value-Based Reporting in a manner that addresses some of the noted challenges. In this manner, the paper exemplifies how resources from philosophy of science may be brought to bear on concrete challenges for contemporary science journalism. Dixon, G., Hmielowski, J., & Ma, Y. (2017). Improving climate change acceptance among US conservatives through value-based message targeting. Science Communication, 39 (4): 520-534. Frimer, J. A., Skitka, L. J., & Motyl, M. (2017). Liberals and conservatives are similarly motivated to avoid exposure to one another’s opinions. Journal of Experimental Social Psychology, 72: 1-12. Gerken, M. (2020). Public scientific testimony in the scientific image. Studies in History and Philosophy of Science Part A, 80, 90-101. Kahan D., Jenkins-Smith H, Braman D. (2011). Cultural cognition of scientific consensus. Journal of Risk Research 14: 147–174. doi:10.1080/13669877.2010.511246 Kahan, D. M. (2013). Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making, 8 (4): 407–424. Nisbet, E. C., Cooper, K. E., & Garrett, R. K. (2015). 
The partisan brain: How dissonant science messages lead conservatives and liberals to (dis) trust science. The ANNALS of the American Academy of Political and Social Science, 658 (1): 36-66. Philosophical Challenges Facing Science Journalists 01:30PM - 04:15PM
Presented by :
Chris Haufe, Case Western Reserve University
Matthew Slater, Bucknell University A central concern of good reporting is to try to convey a sense of the range of opinions on an issue. This is part of the way in which a free press is understood to fulfill its essential role in functioning democracies. By presenting the electorate with an explanatory survey of plausible positions on matters of social concern, the press (in principle) provides voters with the information they need to make informed decisions regarding which of the competing positions seems, upon tutored reflection, to be worthy of their support. The problem is that this principle is not a great fit for reporting on science. One of the reasons is that disagreement among scientists has very different effects on lay consumers of scientific journalism than it does on members of the scientific community. The latter have a cultivated sensibility which allows them to distinguish between relevant and irrelevant disagreement. But outside the community, diversity of opinion is routinely perceived as a signal of ignorance. And who can blame them, given how much we have emphasized the central role that consensus plays in making scientific knowledge reliable? On the other hand, it is not clear what informational demands on democratic participation are satisfied by reporting on settled scientific ground. Now, granted, this is not a test that every inch of newsprint needs to pass. The box scores from last night’s game have no bearing on my ability to make decisions about what side of a social issue to support. But scientific knowledge and the communities that generate it permeate many facets of modern life. Moreover, the production of scientific knowledge is largely financed by public funding. We, through our elected representatives and the administrative functionaries they appoint, are significantly impacting decisions about which lines of inquiry receive funding and which will not be pursued. There is no superconducting supercollider in Texas. That is a consequence of public outrage over the cost. We are in demonstrable need, then, of accessible information that will enable us to perform our civic duties with respect to science. The challenge facing science journalists is that no one knows what sort of information that is. Lastly, we look at the unique pressures to which journalists are subject qua producers of news stories that necessarily have a distorting effect on the public conception of the scientific process. In particular, the narrative form that stories inevitably take when they are not reporting on standing controversies gives the mistaken impression that science involves a tidy linear march from fascinating observation to widely accepted explanation. This becomes a problem when, as with COVID-19, the scientific process is on full display and observed to be in a perpetual state of flux and confusion. Science is in a perpetual state of flux and confusion. That’s part of what makes it interesting to practitioners. But it is not clear how to convey that in the form of a journalistic report. Indeed, it is not even clear to what extent journalists are aware of this. | ||
01:30PM - 04:15PM Board Room | Aetiology of the Replication Crisis Speakers
Edouard Machery, University Of Pittsburgh
Sophia Crüwell, University Of Cambridge
Uljana Feest, Leibniz Universität Hannover
Felipe Romero, University Of Groningen
Moderators
Samuel Fletcher, University Of Minnesota The replication crisis is an ongoing phenomenon, particularly in the social and medical sciences, in which a high frequency of unsuccessful replications has caused deep concern in the fields in question. But how did we get here? What exactly is at issue in this "crisis"? Our symposium is broadly concerned with providing insights into the aetiology of the replication crisis, particularly in psychology. We look at this topic from historical, philosophical and metascientific perspectives: three talks focus on specific candidate explanations of the replication crisis or low replicability, and one talk examines the emerging field of metascience. We hope that our symposium provides interesting insights into the replication crisis and its aetiology as a topic at the cutting edge of contemporary science and philosophy of science, as well as providing a platform for further discussion. The Psychologist’s Green Thumb 01:30PM - 04:15PM
Presented by :
Sophia Crüwell, University Of Cambridge The ‘psychologist’s green thumb’ stands for the assertion that an experimenter needs an indeterminate set of subtle skills or “intuitive flair” (Baumeister, 2016) in order to be able to successfully show or replicate an effect. This argument is sometimes brought forward by authors whose work has failed to replicate in independent replication attempts, to explain a lack of replicability. On the one hand, this argument, which presents replication failure as a failure on the part of the replicator, seems ad hoc. The ‘failed’ replications are typically more highly powered, more transparently carried out, and better described than the corresponding original studies. And yet, the original authors argue that the problem lies not at all with the study or the effect but with the replicator’s skill. References to flair and a lack of experimenter skill as explanations of replication failures have consistently been quickly rejected by meta-researchers and others connected to the reform movement. On the other hand, there are conditions under which the psychologist’s green thumb argument will be potentially compelling, as the generation of some scientific evidence does require something like a ‘green thumb’ (e.g., Kuhn, 1962). Furthermore, it is not clear how we can distinguish between a replication failure that is due to the absence of the effect and one due to lack of skill without knowing whether the replicator is skilled or whether there is an effect (Collins, 1992). The original author, having previously ‘found’ the effect, may claim to have skills the replicator lacks and thus be able to make this distinction. Moreover, failed replications may result in the explication of hidden auxiliary hypotheses representing tacit, ‘green thumb’ knowledge or skill, leading to productive advances through “operational analysis” (Feest, 2016). Therefore, the idea that one needs a certain skill set to be a ‘successful’ experimenter may be convincing and less ad hoc. In this talk, I will argue that initial biased reasoning towards a desired result is often a more likely cause of low replicability, even in contexts where appeals to 'green thumb' tacit knowledge arguments are conceptually persuasive. I will begin by investigating the conditions under which the psychologist’s green thumb is a persuasive concept. I will come to the preliminary conclusion that if experimenter skill takes the form of tacit knowledge that is not or seemingly cannot be shared, then a replicator may appear to lack the psychologist’s green thumb. However, it is unclear whether alleged ‘green thumb’ tacit knowledge amounts to A) experimenter skill to find evidence of a true effect, or B) biased reasoning towards a desired result. Given metascientific evidence regarding publication bias and the widespread use of questionable research practices, B) is likely a better explanation for many replication failures than the psychologist’s green thumb. In the context of field-wide replication failures, ‘green thumb’ tacit knowledge is a red herring at best – what is really at stake here is the articulation of background assumptions. We should strive towards experimental processes that can be and are sufficiently described for reproducibility and in-principle replicability. References Baumeister, R. F. (2016). Charting the future of social psychology on stormy seas: Winners, losers, and recommendations. Journal of Experimental Social Psychology, 66, 153-158. Collins, H. M. (1992). 
Changing order: Replication and induction in scientific practice. University of Chicago Press. Feest, U. (2016). The experimenters' regress reconsidered: Replication, tacit knowledge, and the dynamics of knowledge generation. Studies in History and Philosophy of Science Part A, 58, 34-45. Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press. The Conceptual Origins of Metascience: Fashion, Revolution, or Spin-off? 01:30PM - 04:15PM
Presented by :
Felipe Romero, University Of Groningen Ten years into the replication crisis, many scientists are experiencing a deep sense of worry and scepticism. In reaction to this problem, an optimistic wave of researchers has taken the lead, turning their scientific eyes onto science itself, with the aim of making science better. These metascientists have made progress studying causes of the crisis and proposing solutions. They have identified questionable research practices and bad statistics as potential culprits (Simmons et al., 2011; John et al., 2012). They have defended statistical (Cumming, 2012; Lee and Wagenmakers, 2013) and publication reforms (Chambers, 2013; Vazire, 2015) as solutions. Lastly, they are designing technological tools (benefiting from developments in related fields such as data science, machine learning, and complexity science) to support such reforms. The term metascience precedes the replication crisis. However, only now is metascience becoming institutionalised: there is an increasing community of practitioners, societies, conferences, and research centres. This institutionalisation and its perils require philosophical attention. It is worth stepping back and asking foundational questions about it. How did metascience emerge? Where does the novelty of metascience lie? How does metascience relate to other fields that take science as their subject matter? This talk focuses on the conceptual origins of metascience. I explore three different models of discipline creation and change, and seek to understand whether they can make sense of the emergence of metascience. (1) First, on the sociological model, the emergence of metascience does not obey merely epistemic needs, and can also be explained as a fashion (e.g., Crane, 1969). (2) By contrast, on the Kuhnian model (Kuhn, 1970), metascience can be viewed as a scientific revolution (a term that metascientists sometimes use) that is necessary to move beyond a period of crisis. (3) Finally, on the spin-off model, similarly to how physics branched out from natural philosophy, metascience could become the natural successor of disciplines such as history and philosophy of science. After examining these models, I suggest that we should challenge the increasingly popular perception of metascience as a fully authoritative field, in particular, when it comes to understanding the causes of the replication crisis and finding its solutions. References Chambers, C. D. (2013). Registered Reports: A new publishing initiative at Cortex. Cortex, 49, 609–610. Crane, D. (1969). Fashion in Science: Does It Exist? Social Problems, 16(4), 433–441. Cumming, G. (2012). Understanding the New Statistics: Effect Sizes, Confidence Intervals, and Meta-analysis. Multivariate applications book series. Routledge. John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23, 524–532. Kuhn, Thomas S. (1970). The Structure of Scientific Revolutions. Chicago: University of Chicago Press. Lee, M. D. & Wagenmakers, E-J. (2013). Bayesian Cognitive Modeling: A Practical Course. Cambridge: Cambridge University Press. Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366. doi:10.1177/0956797611417632 Vazire, S. (2015). Editorial. Social Psychological & Personality Science, 7, 3–7. 
What is the Replication Crisis a Crisis Of? 01:30PM - 04:15PM
Presented by :
Uljana Feest, Leibniz Universität Hannover While many by now acknowledge that widespread replication failures are indicative of a crisis in psychology, there is less agreement about questions such as (a) what this “replication crisis” is a crisis of, precisely (i.e., whether it is really, at heart, a crisis of replication) and (b) what socio-historical factors have contributed (and continue to contribute) to its existence. One standard answer in the literature is that replication failures are often due to questionable research practices in the original studies (p-hacking, retroactive hypothesis-fitting, etc.) (Simmons et al. 2011), in turn giving rise to hypotheses about the institutional structures (e.g., incentive structures) that may be responsible for such practices. More recently, others have argued that the narrow focus on (the replicability of) experimental effects is itself part of a larger problem, namely a relative sparsity of sustained theoretical work in psychology. In turn, this has given rise to some efforts to develop methodologies of theory-construction (e.g., Fried, 2020; van Rooij & Baggio 2021). Both of these discussions make valuable contributions to a fuller understanding of the crisis. However, in my talk I will argue that there is a missing link here, having to do with questions about the very subject matter of psychology. What is missing in both types of analyses (i.e., those that focus on flaws in statistical and theoretical procedures) is a discussion of what (kinds of things) can be objects of psychological research, such that (1) we can generate (and perhaps even replicate) experimental effects pertaining to them, and (2) we can try to construct theories about them. In making psychological objects the focal point of my analysis, I follow a recent suggestion by Jill Morawski (2021), who notes that different responses to the replication crisis reveal different underlying notions of the objects under investigation. Thus, she argues that “some researchers assume objects to be stable and singular while others posit them to be dynamic and complex” (Morawski 2021, 1). After clarifying my understanding of the psychological subject matter, I will come down in favor of an understanding of psychological objects as complex and dynamic, i.e., as multi-track capacities of individuals, which can be moderated by a large number of factors, both person-specific and environmental. With this in mind, we should expect experimental effects to be sensitive to small changes in experimental settings and, thus, be hard to replicate. My point is not that we should throw up our hands in the face of the inevitability of replication failures but rather that we need to recognize that the context-sensitivity of psychological objects is itself worthy of experimental study and that replication failures can provide valuable insights in this regard (see also Feest in press). In making this point, I am pushing for a revival of more “ecological” approaches to psychology (as were present, for example, in early 20th-century functionalism). In this vein, I will trace the current crisis, in part, to (i) a lack of attention to psychological objects in general and (ii) to a failure to appreciate the complexity and embeddedness of psychological objects. 
With regard to etiology, this analysis suggests the following two questions, i.e., first, why did parts of psychology get so fixated on effects as their objects, and second, why did parts of psychology get so fixated on cognitive systems in isolation from their environments? I will provide sketches of some historical answers to these questions. References Feest, Uljana (2022), Data Quality, Experimental Artifacts, and the Reactivity of the Psychological Subject Matter. European Journal for the Philosophy of Science (in press). Fried, Eiko I. (2020, February 7), Lack of theory building and testing impedes progress in the factor and network literature. https://doi.org/10.31234/osf.io/zg84s Morawski, Jill (2021), How to True Psychology’s Objects. Review of General Psychology. https://doi.org/10.1177/10892680211046518 Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological science, 22(11), 1359-1366. van Rooij, Iris & Baggio, G. (2021). Theory before the test: How to build high-verisimilitude explanatory theories in psychological science. Perspectives on Psychological Science. https://journals.sagepub.com/doi/full/10.1177/1745691620970604 What Do We Learn From Formal Models Of Bad Science? 01:30PM - 04:15PM
Presented by :
Edouard Machery, University Of Pittsburgh The poor replicability of scientific results in psychology, the biomedical sciences, and other sciences is often explained by appealing to scientists’ incentives for productivity and impact: Scientific practices such as publication bias and p-hacking (which are often called “questionable research practices”) enable scientists to increase their productivity and impact at the cost of the replicability of scientific results. This influential and widely accepted explanatory hypothesis, which I call “the perverse-incentives hypothesis,” is attractive, in part because it embodies a familiar explanatory schema, used by philosophers and economists to explain many characteristics of science as well as, more broadly, the characteristics of many other social entities. The perverse-incentives hypothesis has given rise to intriguing and sometimes influential models in philosophy (in particular, Heesen, 2018, in press) and in metascience (in particular, Higginson & Munafò, 2016; Smaldino & McElreath, 2016; Grimes et al., 2018; and Tiokhin et al., 2021). In previous work, I have examined the empirical evidence for the perverse-incentives hypothesis, and concluded it was weak. In this presentation, my goal is to examine the formal models inspired by the perverse-incentives hypothesis critically. I will argue that they provide little information about the distal causes of the low replicability of psychology and other scientific disciplines, and that they fail to make a compelling case that low replicability is due to scientific incentives and the reward structure of science. Current models suffer from one of the three flaws (I will also argue that (1) to (3) are indeed modeling flaws): (1) They are empirically implausible, building on empirically dubious assumptions. (2) They are transparent: The results are transparently baked into the formal set-up. (3) They are ad hoc and lack robustness. Together with the review of the empirical literature on incentives and replicability, this discussion suggests that incentives only play a partial role in the low replicability of some sciences. We should thus look for complementary, and possibly alternative, factors. References Grimes, D. R., Bauch, C. T., & Ioannidis, J. P. (2018). Modelling science trustworthiness under publish or perish pressure. Royal Society Open Science, 5(1), 171511. Heesen, R. (2018). Why the reward structure of science makes reproducibility problems inevitable. The Journal of Philosophy, 115(12), 661-674. Heesen, R. (in press). Cumulative advantage and the incentive to commit fraud in science. The British Journal for the Philosophy of Science. Higginson, A. D., and Munafò, M. R. (2016). Current incentives for scientists lead to underpowered studies with erroneous conclusions. PLoS Biology, 14(11), e2000995. Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society open science, 3(9), 160384. Tiokhin, L., Yan, M., & Morgan, T. J. (2021). Competition for priority harms the reliability of science, but reforms can help. Nature human behaviour, 1-11. | ||
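The perverse-incentives models discussed in the final abstract above typically combine a base rate of true hypotheses, statistical power, a significance threshold, and publication bias toward positive results. The sketch below illustrates that modeling style only; it is not a reconstruction of any of the cited models, and all parameter values are hypothetical.

def expected_replication_rate(prior, power, alpha):
    # Probability that a published positive result reflects a true effect,
    # assuming only significant results get published (extreme publication bias).
    ppv = (power * prior) / (power * prior + alpha * (1 - prior))
    # A replication "succeeds" with probability power if the effect is real,
    # and with probability alpha (another false positive) if it is not.
    return ppv * power + (1 - ppv) * alpha

if __name__ == "__main__":
    for prior, power in [(0.1, 0.35), (0.1, 0.8), (0.5, 0.35), (0.5, 0.8)]:
        rate = expected_replication_rate(prior, power, alpha=0.05)
        print(f"prior={prior:.2f}, power={power:.2f} -> expected replication rate {rate:.2f}")

Even in this toy calculation the expected replication rate is largely fixed by the chosen parameter values, which is one way to appreciate the "transparency" worry about such models raised in the final talk.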
01:30PM - 04:15PM Fort Pitt | Constraints and Scientific Explanation Speakers
Marc Lange, Theda Perdue Distinguished Professor, University Of North Carolina At Chapel Hill
Dani Bassett, J. Peter Skirkanich Professor , University Of Pennsylvania
Daniel Kostic, Radboud Excellence Initiative Fellow, Radboud University
Lauren Ross, Reviewer, UC Irvine
Moderators
Benjamin Genta, University Of California, Irvine This symposium concerns constraints and their role in scientific explanation in an interdisciplinary setting. Constraints are often viewed as a unique explanatory factor that provides a distinct type of explanation (Lange 2013; Green and Jones 2016). As their name suggests, they are often understood as factors that constrain, limit, or guide the behavior of some system, often explaining why various outcomes are impossible (or off-limits) (Lange 2017; Hooker 2012). Growing interest in this topic has raised a number of central questions. These include: What exactly are constraints and how are they studied? How do they figure in explanations in physics and the life sciences? How do they differ from standard explanatory factors and what heuristic or pragmatic roles do they play? These questions are examined in four separate talks provided by one scientist (Dani Bassett) and three philosophers of science (Marc Lange, Daniel Kostic, and Lauren Ross). By exploring the nature of constraints, these talks contribute to existing literatures on diverse types of explanation, accounts of non-causal explanation, and the distinctive features of constraint-based reasoning. This symposium aims to break new ground in these areas by exploring constraints in a broader set of scientific fields than those examined in current work. Constraints and Explanations by Constraint in the Human Sciences 01:30PM - 04:15PM
Presented by :
Marc Lange, Theda Perdue Distinguished Professor, University Of North Carolina At Chapel Hill Several philosophers have argued that “constraints” constrain (and thereby explain) by virtue of being modally stronger than ordinary laws of nature. In this way, a constraint applies to all possible systems in a broader (i.e., more inclusive) sense of “possible” than the sense in play when we say that the ordinary laws of nature apply to all physically possible systems. Explanations by constraint are thus more general, more broadly unifying, than ordinary causal explanations. Putative examples of constraints are often drawn from physics. The great conservation laws (of energy, mass, momentum, etc.) are posited as being constraints because they are modally stronger than the various particular force laws they govern. This greater modal strength is reflected in the truth of various counterfactual conditionals according to which the conservation laws would still have held even if there had been different (or additional) forces. The conservation laws thereby explain why there are no perpetual motion machines, for instance. When we look at what explains the conservation laws, we find further constraints, such as the symmetry principles that (in a Hamiltonian dynamical framework) entail and are entailed by the conservation laws. As constraints (i.e., as meta-laws), the symmetry principles are modally stronger than the first-order laws, and their greater modal strength is again manifested in the truth of various counterfactual conditionals. For instance, the first-order laws would still have been symmetric under temporal translation even if there had been additional kinds of forces. All of these examples are drawn from physics. This raises the question of whether constraints, meta-laws, non-causal explanations by constraint, and so forth are plausibly present in the social sciences as well. I will argue that they are. I will look at some potential examples from linguistics and other human sciences and see whether they are analogous to the examples that I have just mentioned from physics. On this view, there are no languages of certain sorts because no such language is possible–in a broader sense of “possible” than a causal explanation could underwrite. Structural Network Constraints Upon Neural Dynamics in the Human Brain 01:30PM - 04:15PM
Presented by :
Dani Bassett, J. Peter Skirkanich Professor , University Of Pennsylvania The function of many biological systems is made possible by a network along which items of interest whether nutrients, goods, or information–can be routed. The human brain is a notable example. It is comprised of regions that perform specific functions and engage in particular computations. Those regions are interconnected by large white matter tracts. Each tract is a bundle of neuronal axons along which information-bearing electrical signals can propagate. Collectively, the tracts evince a pattern of connectivity–or network–that constrains the passage of information. In turn, that pattern of information flow determines the sorts of functions that the brain can support. Understanding structural network constraints is hence key to understanding healthy human brain function and its alteration in disease. Recent efforts have expanded the investigation of structural constraints in several ways. First, non-invasive measurements of white matter tracts using diffusion-weighted magnetic resonance imaging techniques have become increasingly sensitive to microstructural integrity and provided estimates of tract locations at finer spatial resolutions. These gains are made possible by an increase in the scan time (from 10 minutes to 1 hour), and in the number of diffusion directions acquired (from 30 to 720). Second, data-informed computational models have been developed to quantitatively assess how the particular network architecture these tracts comprise affects the brain’s dynamical repertoire. One such model that has proven particularly promising is the network control model, which draws upon and extends theoretical work in systems engineering. Third, a conceptual shift has expanded the types of explanations we use for cognitive processes from activity-based to structurally-based. For example, the hallmark of adult mental function–cognitive control–is now being studied not only as a regional activation state or computation but also as a dynamical process constrained by the structural network connecting the regions involved. Collectively, these measurement, modeling, and conceptual expansions are providing a richer understanding of structural constraints on human brain function. To better highlight the importance of structural network constraints upon neural dynamics, I will focus on a simple example. Cognitive effort has long been an important explanatory factor in the study of human behavior in health and disease. Yet, the biophysical nature of cognitive effort remains far from understood. Here, I will cast cognitive effort in the framework of network control theory, which describes how much energy is required to move the brain from one activity state to another when that activity is constrained to pass along physical pathways in a network. I will then turn to empirical studies that link this theoretical notion of energy with cognitive effort in a behaviorally demanding task. Finally, I will ask how this structurally-constrained activity flow can provide us with insights about the brain’s non-equilibrium nature. Using a general tool for quantifying entropy production in macroscopic systems, I will provide evidence to suggest that states of marked cognitive effort are also states of greater entropy production. Collectively, the work I discuss offers a complementary view of cognitive effort as a dynamical process structurally constrained by an underlying network. 
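The energy notion invoked in this abstract comes from linear network control theory, where the minimum energy needed to steer a networked system from one state to another is computed from the controllability Gramian. The sketch below illustrates only the form of that calculation on a made-up four-node network; the adjacency matrix, normalization, states, and time horizon are hypothetical choices, not the empirical pipeline described in the talk.

import numpy as np
from scipy.linalg import expm

# Toy structural network (hypothetical 4-node adjacency matrix).
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
# Normalize and stabilize the linear dynamics x' = A x + B u (one common convention).
A = A / (1.0 + np.max(np.abs(np.linalg.eigvals(A)))) - np.eye(4)
B = np.eye(4)                       # control inputs at every node
x0 = np.zeros(4)                    # initial activity state
xT = np.array([1., 0., 1., 0.])     # target activity state
T = 1.0                             # time horizon

# Controllability Gramian W(T) = integral_0^T e^{At} B B^T e^{A^T t} dt (midpoint rule).
n_steps = 400
dt = T / n_steps
W = np.zeros((4, 4))
for k in range(n_steps):
    eAt = expm(A * ((k + 0.5) * dt))
    W += eAt @ B @ B.T @ eAt.T * dt

# Minimum control energy required to drive x0 to xT in time T.
d = xT - expm(A * T) @ x0
energy = d @ np.linalg.solve(W, d)
print(f"minimum control energy: {energy:.3f}")

In the empirical work described above, the network would be estimated from diffusion imaging and the states from functional data; the point here is only the shape of the calculation.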
On the Role of Erotetic Constraints in Non-causal Explanations 01:30PM - 04:15PM
Presented by :
Daniel Kostic, Radboud Excellence Initiative Fellow, Radboud University Lange (2017) has done groundbreaking work on the explanatory role of constraints. However, besides having an explanatory role, some constraints, such as perspectival ones, can also have a pragmatic role in explanation. In this talk, I develop an account of perspectival constraints based on erotetic reasoning. Erotetic reasoning relies on the inferential patterns which determine both the questions and the space of possible answers to them. According to this view, questions can be conclusions in arguments that show how a question arises from certain contexts (Hintikka 1981; Wiśniewski 1996). For example, we can start from a set of propositions and derive questions based on the syntax and semantics of those statements: (1) If the city of Königsberg has a layout of landmasses and bridges such that they form a connected graph with a topological property p, then Königsberg cannot be traversed by crossing each bridge exactly once (an Eulerian path is impossible). (2) Königsberg has a layout of landmasses and bridges with a topological property p’. (3) Is an Eulerian path possible in the city of Königsberg? In this example, the erotetic argument starts with a statement about what it is for an arrangement to have a certain topological property p. From this, a relevant counterfactual that grounds an explanation could be derived: Had Königsberg’s layout had a topological property p, an Eulerian path in the city would have been possible. The inferential pattern in this toy example makes it intelligible why appealing to topological properties counts as an explanation of why the Eulerian path is impossible, but also why appealing to actual walking through the city does not (Lange 2018; Kostic and Khalifa 2021). To show how this analysis can be generalized from toy examples to actual explanations in science, I discuss an account of topological explanation (Kostic 2020) which outlines perspectival constraints for using the counterfactual information in two explanatory modes, i.e., a horizontal or a vertical explanatory mode. In the horizontal mode counterfactual dependency holds between properties at the same level, whereas in the vertical mode it holds between properties at different levels. The horizontal and vertical modes emerge from different question-asking contexts; thus, by using erotetic reasoning I show how perspectival constraints enhance intelligibility of explanation, rather than relativizing it. (A minimal computational sketch of the Eulerian-path check behind this toy example follows this session’s abstracts.) When Causes Constrain and Explain 01:30PM - 04:15PM
Presented by :
Lauren Ross, Reviewer, UC Irvine Recent philosophical work on explanation explores the notion of “constraints” and the role they play in scientific explanation. An influential account of “explanation by constraint” is provided by Lange (2017), who considers these topics in the context of the physical sciences. Lange’s account contains two main features. First, he suggests that constraints explain by exhibiting a strong form of necessity that makes the explanatory target inevitable. Second, he claims that constraints provide a type of non-causal explanation, because they necessitate their outcomes in a way that is stronger than standard causal laws. This non-causal claim is supported by other work in the field (Green and Jones 2016) and it has helped advance accounts of non-causal explanation. While Lange’s work focuses on constraints in physics, this talk explores constraints in a broader set of scientific fields, namely, biology, neuroscience, and the social sciences. In these domains, scientists discuss developmental, anatomical, and structural constraints, respectively. I argue that these examples capture a type of causal constraint, which figures in a common type of causal explanation in science. I provide an analysis of (1) what it means for an explanatory factor to qualify as a constraint and (2) how we know whether such factors are causal or not. Although Lange’s account does not include causal constraints, I clarify how this work is motivated by and shares similarities with his account. In particular, this work suggests that causal constraints explain the restriction of an outcome, exhibit a strong form of explanatory influence, and figure in impossibility explanations. Work on explanatory constraints contributes to the philosophical literature in a variety of ways. First, this work helps shed light on the diverse types of explanatory patterns that we find in science. This provides a more realistic picture of scientific explanation and the methods, strategies, and reasoning it involves. Second, appreciating that some explanatory constraints are causal has implications for attributing causal responsibility to parts of a system and for suggesting potential interventions that allow for control. In the context of the social sciences, for example, this has implications for holding social structural factors accountable for outcomes and for suggesting policy-level interventions that bring about desired change. | ||
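The Königsberg example in the erotetic-constraints abstract above rests on Euler’s degree-parity fact: a connected multigraph admits an Eulerian path only if it has zero or two odd-degree vertices. Below is a minimal check of that fact with the seven historical bridges written out; the encoding is ours and purely illustrative.

from collections import Counter

# The four landmasses (A = the island, B and C = the two banks, D = the eastern area)
# and the seven historical bridges, encoded as a multigraph edge list.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

def has_eulerian_path(edges):
    # A connected multigraph has an Eulerian path iff at most two vertices have
    # odd degree (connectivity is simply assumed for this toy case).
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd in (0, 2)

print(has_eulerian_path(bridges))   # False: all four landmasses have odd degree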
01:30PM - 04:15PM Sterlings 3 | Generation and Exploration in Data Science Speakers
Sarita Rosenstock, Postdoctoral Research Fellow, Australian National University
Ignacio Ojea Quintana, Australian National University
Colin Klein, Australian National University
Robert Williamson, University Of Tübingen
Atoosa Kasirzadeh, University Of Edinburgh
Moderators
David Hilbert, University Of Illinois At Chicago Abstract: Many scientific fields now benefit from 'Big Data.' Yet along with large datasets comes an abundance of computational and statistical techniques to analyze them. Many of these techniques have not been subject to sustained philosophical scrutiny. This is in part because the scant literature on philosophy of data science often focuses on hypothesis confirmation as the primary end of data analysis. Yet there are many scientific contexts in which generation–of hypotheses, of categories, of methods–is at least as important an aim. This symposium will contribute to debates about realism, natural kinds, exploratory data analysis, and the value-ladenness of science through the lens of philosophy of data science, opening critical discussion about the nature of data and the emerging methods and practices used to foster scientific knowledge. Data, Capta and Constructa: Exploring, confirming, or manufacturing data 01:30PM - 04:15PM
Presented by :
Atoosa Kasirzadeh, University Of Edinburgh
Robert Williamson, University Of Tübingen The technology of Machine Learning (ML), arguably, is one of the most significant general purpose technologies of our age. The appealing promise of machine learning is that it can take a given large corpus of “raw” data packaged up into a “dataset”, learn and discover various patterns, and derive putatively objective and reliable conclusions about the world according to this data-based learning. This technology, and its implicit mindset, is increasingly becoming the subject of attention in the sciences [Hey 2020], which in turn suggests value in viewing this mindset in terms of philosophy of science. Although there is now work that argues that data is indeed not given (as the etymology of “data” suggests) but should be viewed as capta (i.e. taken by deliberate choice) [Kitchin 2014], we go further and argue that we should view it as constructa: something constructed as part of the entire rhetorical chain of building reliable knowledge. We start by arguing that the ML-centric conception of the raw data-set as a given is a large part of the problem in using ML in scientific endeavors. We build on the philosophical literature on values in science [Douglas 2000, 2016; Longino 1996] to show that two essential and ever-present aspects – context and values – are intrinsic to the manufacturing of data-sets. Ironically, by obscuring and ignoring these aspects, the very ostensible goal of using data-driven inferences for rational, reliable, and sound knowledge and action in the world is thwarted. We argue that rather than conceiving of the goal of data-science reasoning as simply providing warrants for the calculations made upon the given data (construed as an accurate representation of the world), we are better served if we conceive of the entire process, including the acquisition (construction) of the data itself, and seek to legitimate that entire process. Our analysis allows us to identify and attack three pervasive, but flawed, assumptions that underpin the default conception of data in ML: (1) data is a thing, not a process; (2) data is raw and aimlessly given; (3) data is reliable. These assumptions not only result in epistemic harms, but more consequentially can lead to social and moral harms as well. We argue that value-ladenness and theory-ladenness coincide for data, and are readily understood by conceiving of data-based claims as rhetorical claims, where “facts” and “values” are equated in the sense that they are taken as incontrovertible assumptions. Then the justification for data-based inferences amounts to a rhetorical warrant for the whole process. Hence instead of presuming your data was given (or taken) and your job (as a scientist) is to explore it or confirm hypotheses tied to it, it is better to construe data as constructa - something manufactured, just like scientific knowledge, the solidity and reliability of which is within the control of the scientist, rather than being an intrinsic property of the world. Exploratory analysis and the expected value of experimentation 01:30PM - 04:15PM
Presented by :
Colin Klein, Australian National University Astonishingly large datasets are now relatively easy to come by in many scientific fields. The availability of open datasets means that it is possible to acquire data on a problem without formulating any hypothesis whatsoever. The idea of an exploratory data analysis (EDA) predates this situation, but many researchers find themselves appealing to EDA as an explanation of what they are doing with these new resources. Yet there has been relatively little explicit work on what EDA is or why it might be important. I canvass several positions in the literature, find them wanting, and suggest an alternative: exploratory data analysis, when done well, shows the expected value of experimentation for a particular hypothesis. There are three main positions on EDA in the literature. The first identifies EDA with a set of techniques that can be applied to data in order to suggest hypotheses. This view goes back to Tukey (1969, 1977, 1993), who emphasized the “procedure-oriented” nature of exploratory analysis and the extent to which these techniques were “things that can be tried, rather than things that ‘must’ be done” (1993, 7). Hartwig and Dearing (2011, 10) similarly speak of EDA as a “state of mind” or a “certain perspective” that one brings to the data. Yet this does not suggest any sort of success conditions for EDA—either in particular cases or for new techniques in general—and therefore offers little guidance on EDA as such. Second, EDA is sometimes treated as simply confirmatory data analysis done sloppily, with looser parameters and more freedom. Authors who suggest this view do so primarily to denigrate EDA (Wagenmakers et al., 2012). This is too pessimistic: charity demands that we prefer a model where authors who appeal to EDA are not simply covering up their sins as researchers. Third, EDA is sometimes linked to so-called exploratory experiments (Steinle, 1997; Franklin, 2005; Feest and Steinle, 2016). Exploratory experimentation is no doubt important, and the techniques of EDA can shed light on particular kinds of exploratory experimentation. Yet EDA also finds use in mature fields where phenomena have been stabilized and the basic theoretical menu is complete, suggesting EDA is related to but distinct from exploratory experimentation. I suggest instead that EDA is primarily concerned with finding hypotheses that would be easy to confirm or disconfirm if a proper experiment were to be done. The techniques associated with EDA are geared towards showing unexpected or striking effects. Whether these effects actually hold cannot be determined from the dataset: EDA also picks up artifacts of undirected data collection. Nevertheless, proper confirmatory experiments are often costly and time consuming, and a good EDA shows where those costs should best be spent. Importantly, EDA tells us whether a hypothesis is worth testing without telling us whether it is likely to be true: rather, it tells us that we are likely to get an answer for a suitably low cost. I link this idea to related work on tradeoffs between information costs in political economics (Stigler, 1961) and Bayesian search theory (Stone, 1976). The resulting position shows why previous positions have the plausibility they do, while providing a principled framework for developing and evaluating EDA techniques. Exploratory analysis: Between discovery and justification 01:30PM - 04:15PM
Presented by :
Ignacio Ojea Quintana, Australian National University With the advent of ‘Big Data’ came an abundance of computational and statistical techniques to analyze it, somewhat vaguely grouped under the label of 'Data Science'. This invites philosophical reflection and systematization. In this paper we will focus on exploratory data analysis (EDA), a widespread practice used by researchers to summarize the data and formulate hypotheses about it after briefly exploring it. Using Reichenbach's (1938) distinction between context of discovery and context of justification, EDA seems to sit in between exploring, in order to discover new hypotheses, and exploiting the data to justify doing confirmatory work. In this paper we will present different conceptualizations of it, shed light on its importance, and suggest success conditions for it to be well functioning. The distinction between context of discovery and context of justification is well known and heavily discussed in the literature, although different authors provide different interpretations. By it we mean a distinction between two aspects or features of scientific practice: the process of arriving at hypotheses, and the defense or validation of those hypotheses, i.e., the assessment of their evidential support in confirmatory work. One playful way of conceptualizing the difference, and the role that EDA plays in between these two contexts, is to model it as a trade-off between exploration and exploitation. Exploration allows for the discovery of new hypotheses; exploitation allows for assessing the evidential support of hypotheses, obtaining a reward in justification. This allows for a test for when EDA was done successfully, by balancing the trade-offs. Yet it might be objected that EDA has no place in confirmatory work, as Wagenmakers et al. (2012) emphasize. In a nutshell, it would amount to using the data both to formulate and test hypotheses. I sympathize with this take, but it assumes a deflationary notion of justification. In the literature on epistemic justification, there are two broad tribes. On the one hand, foundationalist theories, which hold that in the chain of justifications there are propositions that are self-evident (Descartes), axiomatic (Aristotle, Euclid), known by acquaintance with experience (Russell), or (sense) data (empiricism), etc. In a nutshell, that there is a set of propositions that do not require others to be justified. On the other hand, coherentist theories defend the holistic idea that propositions can be justified by how coherent they are with one another (see Bovens and Hartmann (2003) for a Bayesian formulation of this notion). If justification is understood in a foundationalist vein, then Wagenmakers et al. are correct in arguing that EDA is a flawed methodology. But if justification is understood in a coherentist way, or a foundherentist one (a mixture between the two developed by Haack (1995)), then there is some role that EDA can play in the context of justification. On Clustering Algorithms and Natural Kinds 01:30PM - 04:15PM
Presented by :
Sarita Rosenstock, Postdoctoral Research Fellow, Australian National University How and to what end can we “derive” natural kinds and categories from large data sets? This is a question of interest to philosophers of science, natural scientists, and data scientists, who each offer rich but disciplinarily siloed insights. Cluster analysis refers to a variety of algorithmic processes aimed at identifying “clusters” in data sets: subsets of data points that are relevantly more “similar” to one another than to the larger data set. These algorithms are concrete, explicit artefacts that encode and apply a range of theories and intuitions about the purpose and nature of classification. Their computational specifications and theoretical justifications mirror the rich philosophical literature on classification and natural kinds and the roles they play in scientific understanding. Yet the synergies between these two literatures have been largely unexplored (excepting some insightful theoretical work by data scientists, including von Luxburg et al. 2012 and Hennig 2015). This paper aims to bridge this gap (especially on the philosophical side) by providing a comparative bird's-eye view of both disciplinary conceptions of clustering and classification, drawing out areas of particular promise for future interdisciplinary research. I begin with a brief summary of the roles of classification in science and existing philosophical discussions on the nature, promise, and limitations of classificatory practices. I discuss the general role of classification in inductive inference, and mention some specific considerations that arise from the roles of classification in specific disciplines. I then survey the most common types of clustering algorithms employed by data scientists (largely drawing on Xu and Wunsch 2005). I tease out their core theoretical assumptions and connect them to the conception of classification in philosophy of science. I proceed to consider the contexts in which such algorithms are implemented, where scientists’ discretion and contextual peculiarities provide a richer picture of how these clustering algorithms are understood and used by scientists. I pay particular attention to the philosophy of biology, where the role of data analysis has been discussed by philosophers (Leonelli 2016), and where scientists already engage with philosophical work on natural kinds (Boyd 1999). I conclude with a discussion of the ways in which data scientists and philosophers of science can both benefit from the lessons the other has to offer on the nature and purpose of classification. | ||
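As a concrete illustration of the kind of cluster analysis surveyed in the final abstract above, the sketch below runs k-means on synthetic two-dimensional data with scikit-learn. The data, the choice of k, and the reported diagnostics are invented for illustration; in real applications they embody exactly the theoretical choices about similarity and classification that the talk examines.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic two-dimensional data with three latent groups (purely illustrative).
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=1.0, random_state=0)

# k-means partitions the points into k clusters by minimizing within-cluster variance.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(kmeans.labels_))
print("within-cluster sum of squares:", round(kmeans.inertia_, 2))

Whether such clusters track anything like natural kinds, rather than artifacts of the chosen similarity metric and number of clusters, is precisely the philosophical question at issue.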
01:30PM - 04:15PM Birmingham | What good is a no-go theorem? Speakers
Adam Koberinski, Postdoc, Center For Philosophy Of Science, University Of Pittsburgh
Anthony Duncan, University Of Pittsburgh (emeritus)
Doreen Fraser, Reviewer, University Of Waterloo
Chris Mitsch, Nebraska Wesleyan University
David Freeborn, University Of California, Irvine
Marian Gilton, University Of Pittsburgh HPS
Moderators
Laura Ruetsche, University Of Michigan - Ann Arbor No-go theorems attract widespread interest in the philosophy of physics. These results from the foundations of physics are distinctive for their logical force and counter-intuitive implications. A classic example that has attracted much philosophical attention is Haag's theorem, a result in quantum field theory showing that a certain standard method for modeling interacting fields, known as the interaction picture, rests on an inconsistent set of assumptions. This symposium focuses on the case of Haag's theorem in order to explore larger issues concerning the significance and value of no-go type theorems. The speakers will (i) present alternatives to reading it as a pessimistic, no-go result; (ii) explore the deeper implications of its generalizations; (iii) consider the roles of idealization and mathematical rigour in understanding the relationship between 'bottom' and 'top' physics; and (iv) highlight recent results and future prospects for the rigorous construction of interacting quantum fields. Haag’s Theorem—a working physicist’s perspective 01:30PM - 04:15PM
Presented by :
Anthony Duncan, University Of Pittsburgh (emeritus) The perils of excessive idealization in constructing the underlying mathematical framework for fundamental physical theories are illustrated with some examples taken from relativistic quantum field theory: the triviality issue for standard model field theories, the nonexistence of the S-matrix in quantum electrodynamics, and Haag’s no-go theorem for the interaction picture formulation of relativistic field theories. It is argued that in all cases, known physical limitations of the theory, once taken into account, remove the apparent failure of the formalism, allowing phenomenologically relevant calculations to be made. Haag’s theorem and Cautious Optimism for the foundations of quantum field theory 01:30PM - 04:15PM
Presented by :
Adam Koberinski, Postdoc, Center For Philosophy Of Science, University Of Pittsburgh The tension between axiomatic, mathematically rigorous formulations of quantum field theory (AQFT) and Lagrangian quantum field theory (LQFT) as employed in the Standard Model of particle physics has been much discussed by philosophers and physicists alike. While debate was heated in the last decade (Fraser 2011, Wallace 2011), a sort of peaceful coexistence has been achieved. In the case of Haag’s theorem, [Duncan, 2012] and [Miller, 2018] offer similar characterizations of this coexistence: while Haag’s theorem undermines the interaction picture for a fully Poincaré-covariant QFT satisfying the Wightman axioms, LQFTs employ regularization and renormalization techniques that break Poincaré invariance (among other things). Though the AQFT theorems are useful, they do not directly apply to LQFT, and philosophers must use new methods to better understand LQFT as used in practice. Algebraic/axiomatic methods may still lead to important insights into LQFT, but they are only useful insofar as they provide insight into LQFT [Wallace, 2006]. In this talk I will articulate and defend an attitude of Cautious Optimism for the relationship between AQFT and LQFT, and use Haag’s theorem as a test case. The Cautious Optimist thinks that AQFT (or some suitable modification) captures the essence of relativistic QFT. While it may be difficult or even impossible to construct exact models of the realistic interactions described with LQFT, the framework of AQFT provides insight and guidance for understanding QFT more broadly. Haag’s theorem in particular seems to provide a major obstacle to Cautious Optimism, more so than the failed attempts to explicitly relate AQFT and LQFT, as it is an explicit no-go theorem for using the interaction picture. I will show how one can reconcile the Duncan and Miller style solution to the dilemma of Haag’s theorem with a Cautious Optimism regarding the relationship between AQFT and LQFT. This requires a specific interpretation of regularization and renormalization techniques as calculational tools, rather than representing something physical about the Standard Model. This strategy preserves the relevance of AQFT results like the CPT theorem, and justifies the conventional understanding of QFT as the only way to construct a relativistic quantum theory of particle interactions [Malament, 1996; Fraser, 2008]. Finally, I end by discussing the value of Haag’s theorem for understanding the necessity of unitarily inequivalent representations in QFT. Generalizations of Haag’s theorem and their lessons for QFT 01:30PM - 04:15PM
Presented by :
Doreen Fraser, Reviewer, University Of Waterloo Haag’s theorem is a valuable result for foundations of physics because it supplies direct information about relativistic QFT, which is a framework theory. Constructing theories within this framework (e.g., lattice QCD, φ⁴₂, the Standard Model) using the wide variety of strategies found in mainstream and mathematical physics is a way of indirectly learning about the foundations of the framework theory. Philosophers and physicists studying the foundations of QFT need to piece together information from both direct and indirect sources. However, directly informative sources, such as Haag’s theorem, do have some advantages. Like other ‘no go’ theorems in physics, Haag’s theorem takes the logical form of a reductio argument. This means that there is a clear negative lesson about the wrong way to represent relativistic quantum systems, and also a set of possible positive lessons about the right way(s) to represent relativistic quantum systems. As with any reductio, the possible positive lessons are delimited by the choices of either rejecting one (or more) of the premises of the theorem or biting the bullet and accepting the apparently unacceptable conclusion. This clear logical picture is complicated by the fact that there are different versions of Haag’s theorem which appear to rest on different sets of premises. The most well-known version of the theorem is the one proven in [Hall and Wightman, 1957] using the Wightman axiomatization. The essence of this version is that in relativistic QFTs with interactions, vacuum polarization necessarily occurs. However, there are other versions of Haag’s theorem that are presented as generalizing the theorem beyond relativistic QFT. [Emch, 1972] proves a version of Haag’s theorem within the algebraic framework that is based on [Streit, 1969]. Streit remarks that this generalization “essentially consists in dropping not only locality [i.e., microcausality] but relativistic covariance altogether” (674). Another example is [Schrader, 1974]’s Euclidean version of Haag’s theorem within Euclidean field theory, which in some circumstances can be interpreted as classical statistical mechanics [Guerra et al., 1975]. What should we make of these results? As I will explain, there is a respect in which Haag’s theorem can be regarded as a deep general result about the representation of symmetries in framework theories that goes beyond relativistic QFT. I will also clarify the relativistic premises that are needed to prove Haag’s theorem within relativistic QFT. One reason that this clarification is important is that relativistic principles pose more than one obstacle to constructing QFTs with interactions. Distinguishing the obstacles can help to inform theory construction strategies. Haag as a how-to theorem 01:30PM - 04:15PM
Presented by :
David Freeborn, University Of California, Irvine
Marian Gilton, University Of Pittsburgh HPS
Chris Mitsch, Nebraska Wesleyan University Haag’s theorem is traditionally viewed as a no-go theorem for the mainstream physicists’ approach to interacting quantum field theory, i.e. the interaction picture and its attendant methods of perturbation theory. Mainstream quantum field theory employs the interaction picture to model interactions. In this interaction picture, interacting fields are modeled as perturbations of free fields. Once the fundamental assumptions of this approach are made mathematically precise, it follows from these assumptions that the putatively interacting field must in fact be unitarily equivalent to the free field. This result, demonstrating that the interacting field is unitarily equivalent to the free field, is called Haag’s theorem. Thus, much of the philosophical literature interprets Haag’s theorem as a classic no-go result: mainstream physicists’ methods for modeling interactions are a no-go because of the fundamental assumptions of the interaction picture. And yet, mainstream physicists’ methods (making use of the interaction picture, perturbation theory, and regularization and renormalization techniques) have proved to be highly successful at modeling interactions by empirical standards. In recent work, [Duncan, 2012] and [Miller, 2018] explain this success by appealing to the calculational detail of regularization and renormalization techniques, arguing that these techniques invariably violate one or another of the assumptions that go into Haag’s theorem. Thus, regularization and renormalization seem to provide an evasion strategy for Haag’s theorem, as well as an explanation for the empirical success of mainstream methods. In light of these developments, this paper presents an alternative to the no-go interpretation of Haag’s theorem: Haag’s theorem is rather a how-to theorem. The two readings are distinguished by the status taken by the fundamental assumptions for the theorem. While on a no-go reading these assumptions are strictly immutable, on a how-to reading they are subject to revision. The central consequence of the assumptions’ change in status reveals itself when we consider the empirical success of mainstream models of interaction. On the no-go reading, one is tempted to dismiss this success as a mirage precisely because the methods that produce it controvert the theorem’s assumptions. In contrast, on the how-to reading, the success is taken as evidence that the assumptions require revision. In short: no-go entails no success because the assumptions are true; how-to entails success but only if some assumption is false. Thus, the latter reading, but not the former, leads naturally to questions of how precisely the assumptions must be modified in order for the theorem to be evaded. It is in this sense that it is a how-to reading. Thus, a how-to reading relies upon the attitude of opportunism at work in what [Rédei and Stöltzner, 2006] call “soft axiomatisation.” By way of conclusion, we offer some reflections as to the general methodological and philosophical implications of adjudicating between no-go and how-to interpretations of theorems such as this. | ||
01:30PM - 04:15PM Smithfield | Representation, Understanding, and Machine Learning: Large Language Models and the Imitation Game Speakers
Mike Tamir, Chief ML Scientist, Head Of ML/AI, UC Berkeley
Elay Shech, Secondary Author, Auburn University
Will Fleisher, Assistant Professor Of Philosophy, Georgetown University
Suzanne Kawamleh, Indiana University
Emily Sullivan, Eindhoven University Of Technology
Moderators
Conny Knieling, University Of Pittsburgh Over the past decade, scientific researchers and applied data scientists have steadily adopted machine learning (ML) techniques, particularly Deep Learning (DL) using highly parameterized deep neural networks. Trained estimators resulting from such ML processes, referred to as models, are now commonly used to either better estimate unknown features given a particular context or to improve understanding of said features given their respective contexts. Recently, philosophical work has investigated the nature of such understanding from ML models. For example, Sullivan (2022) holds that the complexity of DL trained models means that they can be contrasted with the traditional use of idealization models, which ostensibly enable explanation or understanding by reducing complexity. Sullivan argues that appropriate analysis networks allow for "higher level" insight into these complex models even when the multitude of individual parameters leads to opacity at "lower levels." Large Language Models (LLMs), currently implemented in the form of large transformer-based neural network architectures involving sometimes hundreds of billions to over a trillion parameters, have had remarkable success in significantly advancing if not in some cases "solving" traditional natural language processing challenges. Such challenges include abstractive summarization, text translation, traditional information extraction tasks, and text-based question answering. Such LLMs have also contributed to leaps in the fluency, intelligibility, and continuity of machine-generated dialogue, which has notably renewed both interest in and challenges to Turing's (1950) Imitation Game. In this work, we begin by considering whether and in what respects modern state-of-the-art LLMs succeed and fall short at playing the Imitation Game. We show that impediments to winning this game can be categorized into technical and philosophical problems, and we argue that technical solutions can be provided to said philosophical problems. A substantive question of interest is how much understanding the "higher level" learned representations found in machine learning models provide. We note that one can operationalize various key factors of understanding found in the philosophical literature to help answer said question. Specifically, similar to Sullivan, we argue that leveraging various technical analysis methods, commonly used by ML researchers to investigate hidden layer representations in neural networks generally and LLMs in particular, can lead to formulating testable hypotheses for (and against) the presence of various ostensible indicators of improved understanding in machines. Idealization, Machine Learning, and Understanding with Models 01:30PM - 04:15PM
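As a concrete illustration of the kind of technical analysis of hidden-layer representations gestured at above, here is a minimal "linear probe" sketch in Python; the activations, labels, and dimensions are synthetic stand-ins (illustrative assumptions only, not anything reported in the talks).

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for hidden-layer activations extracted from a trained network,
# one 64-dimensional vector per input; purely synthetic for illustration.
activations = rng.normal(size=(1000, 64))
# Stand-in for a property we hypothesize the representation encodes.
labels = (activations[:, :8].sum(axis=1) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.25, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.2f}")
# High held-out probe accuracy is defeasible evidence that the property is
# linearly recoverable from the representation; it does not by itself show
# that the model "understands" the property.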
Presented by :
Mike Tamir, Chief ML Scientist, Head Of ML/AI, UC Berkeley
Elay Shech, Secondary Author, Auburn University Over the past decade, scientific researchers and applied data scientists have steadily adopted machine learning (ML) techniques, particularly Deep Learning (DL) with highly parameterized deep neural networks. Trained estimators resulting from such ML processes, referred to as models, are now commonly used to either better estimate unknown features given a particular context or to improve understanding of said features given their respective contexts. Recently, philosophical work has investigated the nature of such understanding from ML models. Sullivan (2020) argues that the complexity of DL trained models means that they can be contrasted with the traditional use of idealization models, which ostensibly enable explanation or understanding by reducing complexity. In this work we explore the strength of this contrast, arguing that while the explicit functional form of particular highly parameterized DL trained models can be quite complex, such complexities are irrelevant to gains in explanation or understanding generated by DL models. We observe that framing the form of understanding gained from ML models as in Tamir & Shech (2022) enables an account of understanding from ML models that consequently illuminates both the nuances and failures of this contrast. Specifically, we propose that individual parameter instantiations resulting from ML training of particular models are best understood as approximations of the more general target phenomenon to be understood. We demonstrate that a proper analysis, in which the contexts where approximation relationships break down are distinguished from those in which they can be sustained, enables us to identify both the sort of details irrelevant to understanding and the sort of higher-level representations often captured by hidden layers of deep neural networks, which may be leveraged for explanation or improved understanding. We show that hindrances to understanding from ML models due to parametrization complexity are analogous to infinite idealization dilemmas found in the philosophy of physics literature (Batterman 2002, Shech 2013). Drawing on Norton’s (2012) distinction between idealizations and approximations, we argue that our resolution of understanding from ML models despite parameterization complexity has important parallels with resolutions of said infinite idealization dilemmas, viz., Butterfield (2011), Norton (2012). We conclude with a unifying framework under which the success of accounts of understanding from highly parameterized ML models as well as understanding from some idealized models (including problematic infinite models) can be properly assessed. Idealization and Explainable AI 01:30PM - 04:15PM
Presented by :
Will Fleisher, Assistant Professor Of Philosophy, Georgetown University AI systems are being used for a rapidly increasing number of important decisions. Many of these systems are “black boxes”: their functioning is opaque both to the people affected by them and to those developing them. This opacity is often due to the complexity of the model used by the AI system, and to the fact that these models use machine learning techniques (Burrell 2016, Sullivan 2020). Black box AI systems are difficult to evaluate for accuracy and fairness, seem less trustworthy, and make it more difficult for affected individuals to seek recourse for undesirable decisions. Explainable AI (XAI) methods aim to alleviate the opacity of complex AI systems (Lakkaraju et al. 2020). These methods typically involve approximating the original black box system with a distinct “explanation model”. The original opaque model is used for actual recommendations or decision-making. Then, the explanation model provides an explanation for the original model’s output. However, there is debate about whether such methods can provide adequate explanations for the behavior of black box AI systems. This debate is made difficult by a lack of agreement in the literature concerning what it means to give an adequate explanation. I argue that the goal of XAI methods should be to produce explanations that promote understanding for stakeholders. That is, a good explanation of an AI system is one that places relevant stakeholders in a position to understand why the system made a particular decision or recommendation. Moreover, I suggest that XAI methods can achieve this goal because (when things go well) the explanation models they produce serve as idealized representations of the original black box model. An idealization is an aspect of a scientific model that deliberately misrepresents its target to enable better understanding of that target (Elgin 2017). Even though idealizations are false, they can promote understanding by conferring a variety of benefits on a model (Potochnik 2017). An idealized model can be simpler, can leave out unimportant information, and can highlight specific causal patterns that might otherwise be obscured by the complexity of the system being represented. Recognizing that XAI methods produce idealized models can help illuminate how these methods function. This recognition can also guide decisions on when and whether specific methods should be employed. Certain kinds of idealizations will be apt for explaining a particular black box model to a particular audience. This in turn will help determine which XAI methods should be employed for providing those explanations. Whether an idealization is appropriate will depend on what benefits it will confer on an idealized model. For instance, consider feature importance methods that use linear equation models, such as LIME (Ribeiro et al 2016). These XAI methods employ idealizations that confer simplicity and legibility on the resulting explanation model. They eliminate information about causally unimportant features, while highlighting relevant causal patterns that are important for determining the original model’s output. These idealizations serve to promote understanding for non-technical stakeholders affected by an AI system. Artificial Tradeoffs in Artificial Intelligence 01:30PM - 04:15PM
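For illustration, here is a minimal LIME-style local surrogate in Python (a generic sketch of the idea, not the LIME package's actual API; the black-box model and data are synthetic stand-ins).

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# A stand-in "black box": an opaque model trained on synthetic data.
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                              # instance to explain
perturbed = x0 + rng.normal(scale=0.3, size=(200, 5))  # samples near x0
preds = black_box.predict_proba(perturbed)[:, 1]

# Weight samples by proximity to x0, then fit the interpretable surrogate.
weights = np.exp(-np.sum((perturbed - x0) ** 2, axis=1))
surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)

# The surrogate's coefficients serve as a feature-importance explanation
# of the black box's behaviour in the neighbourhood of x0.
print("local feature importances:", np.round(surrogate.coef_, 3))

The surrogate is only trusted locally; its coefficients are the kind of simplified, idealized representation of the black box discussed in the abstract above.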
Presented by :
Suzanne Kawamleh, Indiana University A central problem for machine learning (ML) models is that they are “black boxes” and epistemically opaque. This means the inner workings of these models—how the model internally represents the data to reach a certain decision—are opaque or a “black box” to experts. This is concerning in health-care settings where such models are increasingly being used autonomously for high-stakes decision making. These concerns have led to a growing legal and ethical demand that ML models be explainable if used in safety-critical domains. Explanations often require describing how the model represents the data or what the machine "sees" when it uses data to make a prediction. However, it is widely accepted that ML models are subject to an inherent and general tradeoff between predictive performance and explainability. The argument for the Tradeoff Thesis is based on model complexity. A more complex model is more accurate because of its complexity: it can train on, represent, and learn from a larger body of complex data. A more complex model (like a neural network) is less explainable because it combines that data using nonlinear functions, over multiple layers, and iteratively updates its outputs to optimize predictive skill. In contrast, a simpler model (like a decision tree) is more explainable in virtue of the rules encoded by human scientists but exhibits poorer predictive performance because of its rigidity. This Tradeoff Thesis reflects a long-standing philosophical position that describes prediction and explanation as two distinct, and often competing, theoretical virtues or epistemic goals. I challenge the Tradeoff Thesis using a case study of two deep learning systems that diagnose eye disease using retinal images. I then use my study of how explanation facilitates improved predictions in medical AI to support Heather Douglas’s (2009) argument for the tight practical and functional relation between prediction and explanation. In a case study, I demonstrate that improvements in the explainability of a deep learning system that uses representations of retinal lesions to detect diabetic retinopathy lead to improvements in predictive skill when compared to earlier studies that used simpler and more opaque models. I argue that the improved explainability facilitates improved predictive performance and that increased complexity is compatible with explainability. Furthermore, I compare explanations of DeepDR and its predictions with those of human ophthalmologists. I show how the explainability of DeepDR is on par with medical explanations provided by human doctors. An important consequence of my findings is that the Tradeoff Thesis must be proven to hold within a circumscribed set of models and cannot be presumed to hold rather generically “for all current and most likely future approaches to using ML for medical decision-making” (Heinrichs & Eickhoff 2020, 1437). Furthermore, this case illustrates how, in practice, prediction and explanation are deeply connected. This poses a challenge for philosophical models which construe the relation between prediction and explanation as one of epistemic rivals. Therefore, complex ML algorithms may still hold promise for reliable and ethical deployment in safety-critical fields like medicine. Do Machine Learning Models Represent their Targets? 01:30PM - 04:15PM
Presented by :
Emily Sullivan, Eindhoven University Of Technology One way machine learning (ML) modeling differs from more traditional modeling methods is that it is data-driven, rather than what Knüsel and Baumberger (2020) call process-driven. Moreover, ML models suffer from a higher degree of model opacity compared to more traditional modeling methods. Despite these differences, modelers and philosophers (e.g. Sullivan 2020, Meskhidze 2021) have claimed that ML models can still provide understanding of phenomena. However, before the epistemic consequences of opacity become salient, there is an underexplored prior question of representation. If ML models do not represent their targets in any meaningful sense, how can ML models provide understanding? The problem is that it does in fact seem as though ML models do not represent their targets in any meaningful sense. For example, the similarity view of representation seems to exclude the possibility that ML models can represent phenomena. ML models use methods of finding feature relationships that are highly divorced from their target systems, such as relying on decision-rules and loose correlations instead of causal relationships. Moreover, the data that models are trained on can be manipulated by modelers in a way that reduces similarity. For example, the well-known melanoma detection ML model (Esteva et al. 2017) augments the RGB spectrum of dermatologist images (Tamir and Shech 2022). Thus, if the similarity view is right, then even if model opacity qua opacity does not get in the way of understanding, ML models may still fail to enable understanding of phenomena because they fail to represent phenomena. Contra the similarity view, I argue that ML models are in fact able to represent phenomena, under specific conditions. Drawing on the literature on how highly idealized models represent their targets, and the interpretative view of representation (Nguyen 2020), a strong case can be made that ML models can accurately represent their targets. Even though ML models seem to be the opposite of highly idealized simple models, there are a number of representational similarities between them. Thus, if we accept that highly idealized models can represent phenomena, then so can ML models. References Knüsel, B., and Baumberger, C. (2020): Understanding climate phenomena with data-driven models. Studies in History and Philosophy of Science Part A, 84, 46-56. Meskhidze, H. (2021). Can Machine Learning Provide Understanding? How Cosmologists Use Machine Learning to Understand Observations of the Universe. Erkenntnis, 1-15. Nguyen, J. (2020). It’s not a game: Accurate representation with toy models. The British Journal for the Philosophy of Science, 71(3), 1013-1041. Sullivan, E. (2020): Understanding from Machine Learning Models. In The British Journal for the Philosophy of Science. DOI: 10.1093/bjps/axz035. Esteva, A.; Kuprel, B.; Novoa, R. A.; Ko, J.; Swetter, S. M.; Blau, H. M.; Thrun, Seb. (2017): Dermatologist-level classification of skin cancer with deep neural networks. In Nature 542 (7639), pp. 115–118. DOI: 10.1038/nature21056. | ||
01:30PM - 04:15PM Sterlings 1 | Climate Sensitivity, Paleoclimate Data, & the End of Model Democracy Speakers
Leticia Castillo, Ph.D., Boston University
Gavin Schmidt, NASA GISS
Alisa Bokulich, Boston University
Wendy Parker, Virginia Tech
Joel Katzav, University Of Queensland
Moderators
Carlos Santana, University Of Utah Equilibrium Climate Sensitivity (ECS) characterizes the response of Earth's temperature to a doubling of atmospheric CO2 and is one of the most important and most studied metrics in climate science. For decades, estimates of ECS have been stable around 1.5°C to 4.5°C. In the most recent coupled model intercomparison project (CMIP6), however, many state-of-the-art climate models calculated ECS to be "hotter" than the upper bound of the consensus range; if correct, this would mean even more dire consequences for our planet than previously anticipated. The surprising CMIP6 results quickly became one of the highest-profile issues in climate science and a focus of intensive research, as scientists tried to determine why the models produced these unexpected results and whether they were erroneous. Our symposium explores several key epistemological and methodological issues arising from this high-profile case: the handling of discordant results; the validation of paleoclimate data used in both climate model evaluation and estimating ECS; the interpretation of climate model projections; holism and underdetermination in complex simulation models; and the end of climate science's long-standing practice of "model democracy", in which each state-of-the-art model gets equal weight in assessments of future warming. A possibilistic epistemology of climate modeling and its application to the cases of sea level rise and climate sensitivity 01:30PM - 04:15PM
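As a rough orientation for this session (a textbook-style energy-balance gloss, not drawn from the abstracts below), ECS is often approximated as

\[
\mathrm{ECS} \;\approx\; \frac{F_{2\times\mathrm{CO_2}}}{\lambda},
\]

where $F_{2\times\mathrm{CO_2}}$ is the radiative forcing from doubled CO₂ (on the order of 3.7 W m⁻²) and $\lambda$ is the net climate feedback parameter; differences in simulated feedbacks, cloud feedback in particular, drive much of the model-to-model spread discussed in the talks below.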
Presented by :
Joel Katzav, University Of Queensland It has been argued that possibilistic assessment of climate model output is preferable to probabilistic assessment (Stainforth et al. 2007; Betz 2010, 2015; Katzav 2014; Katzav et al. 2012 and 2021). I aim to articulate a variant of a possibilistic approach to such assessment. On my variant, the output of climate models should typically be assessed in light of two questions: Is it fully epistemically possible? If the output is (fully) epistemically possible, how remote a possibility does it represent? Further, on my variant, if the output is judged to be epistemically possible, it should be taken to represent objective possibilities, specifically potentialities of the actual climate system. Having articulated my possibilistic approach, I apply it to two key issues in climate science, namely the potential contribution of marine ice cliff instability to sea level rise over the rest of the twenty-first century and climate sensitivity. Marine ice cliff instability (MICI) has been posited as a mechanism that might lead to substantially more sea level rise than had previously been projected (DeConto and Pollard 2016). I will suggest that the existing assessment of the contribution of MICI to future sea level rise illustrates the strengths of my possibilistic approach and weaknesses of probabilistic approaches to assessing the output of climate models. I will also argue that the most recent Intergovernmental Panel on Climate Change assessment of climate sensitivity, especially its reliance on a variety of evidence considerations to address the challenges of unexpectedly high climate sensitivity projections by state-of-the-art climate models, illustrates the strength of my possibilistic approach and the weakness of probabilistic approaches. The climate science community’s response to discordant results 01:30PM - 04:15PM
Presented by :
Gavin Schmidt, NASA GISS It is well known that the path to greater precision in physics is not smooth. Because differences in subsequent experiments often fall outside the nominal uncertainties of the prior art, science often has to deal with discordance that stimulates increased focus on what were presumed to be small effects. Examples include the history of measurements of ‘Big G’ (the gravitational constant) and the charge of the electron (Bailey, 2018). In climate science, numerous examples can also be found, ranging from the ‘global cooling’ inferred from new satellite measurements in the 1990s and estimates of the mass balance of Antarctica in the 2000s to the increased spread of climate sensitivity in the latest CMIP6 model intercomparison. Resolutions for these discordant results are not predictable a priori - systematic issues can affect new measurements and old measurements alike, and comparisons may not be fully compatible. While resolutions are still pending, though, the broader community may not have the luxury of simply waiting for the reasons to be discovered. I will discuss how and why the climate science community is dealing with the “climate sensitivity issue” in the meantime. Paleoclimate Proxy Data: Uncertainty, Validation, & Pluralism 01:30PM - 04:15PM
Presented by :
Alisa Bokulich, Boston University Paleoclimate proxy data are playing an increasingly central role in contemporary climate science. First, proxy data about key paleoclimates in Earth’s history can be used to benchmark the performance of state-of-the-art climate models by providing crucial “out of sample” tests. Paleoclimates provide data about the response of the Earth to climate states and forcing scenarios that are very different from those provided by the limited historical (i.e., instrument) record (which has hitherto provided the basis for building, tuning, and testing current climate models). These tests, which have most recently been undertaken by the Paleoclimate Model Intercomparison Project 4 (PMIP4) in coordination with CMIP6, will be increasingly important for developing climate models that can reliably forecast a future where anthropogenic forcing has perturbed the Earth out of the climate state represented by the historical record (Kageyama et al. 2018). Second, paleoclimate proxy data can also be used more directly to provide an estimate for quantities such as equilibrium climate sensitivity (ECS). Although ECS used to be estimated on the basis of the values provided by climate models, since the fourth assessment report (AR4) both paleoclimate proxy data and (instrument) data from historical warming have provided additional observational constraints on ECS values. In the most recent AR6, which was published last year, model-based estimates of ECS from the CMIP6 models were for the first time excluded from the evidential base for estimating climate sensitivity. Instead, the current official estimate for ECS was derived only on the basis of the following three independent lines of evidence: process understanding about feedbacks, the historical climate record, and the paleoclimate record (Sherwood et al. 2020; IPCC AR6, Chapter 7). Given their increasing importance for climate research, paleoclimate proxy data are ripe for philosophical analysis. Despite their role as data for testing climate models and as observational evidence for a value of climate sensitivity, it must be emphasized that paleoclimate data are themselves a complex, model-laden data product, involving many layers of data processing, data conversion, and data correction (Bokulich 2020). Hence, there are many sources of uncertainty in paleoclimate data that arise along the path from local proxy measurements of traces left in the geologic record to global paleoclimate reconstructions of Earth’s deep past. To realize their potential, questions about how to validate paleoclimate data must be confronted. In this talk I develop a multi-procedure framework for validating (or evaluating) proxy data, analogous to the frameworks used for model evaluation. I further argue that paleoclimate data must be evaluated as adequate or inadequate for particular purposes (Bokulich and Parker 2021). Finally, I highlight the importance of data pluralism in the form of multiple data ensembles derived from different possible ways of processing the data. Although developed in the context of paleoclimate proxy data, the data-validation framework I provide here can be generalized to apply to data evaluation in other scientific contexts. Fixing High-ECS Models: The Problem of Holism Revisited 01:30PM - 04:15PM
Presented by :
Leticia Castillo, Ph.D., Boston University Equilibrium Climate Sensitivity (ECS) is a key metric when trying to understand the past, present and future behavior of Earth’s climate. Several models used in the latest IPCC report’s Coupled Model Intercomparison Project 6 (CMIP6) have failed to yield an ECS value within the consensus range estimated by several previous climate models (IPCC AR6, Chapter 7). Trying to understand why these state-of-the-art models failed to give an appropriate ECS value is no easy task. Johannes Lenhard and Eric Winsberg (2010, 2011) have argued that complex simulation models such as climate models exhibit a kind of epistemological holism that makes it extremely difficult—if not impossible—to tease apart the sources of error in a simulation and attribute them to particular modeling assumptions or components. As a result, they argue that modern, state-of-the-art climate models are “analytically impenetrable” (Lenhard & Winsberg, 2011, p. 115). They identify as a source of this impenetrability what they call “fuzzy modularity,” which arises due to the complex interactions between the modules that make up a climate model. The question remains whether a model’s analytical impenetrability undermines scientists' efforts to identify the cause of the high ECS values and fix these models through a piecemeal approach. Despite these worries about analytical impenetrability and holism, scientists use sensitivity tests, which involve replacing individual parameterizations, schemes, or process representations one-by-one in a piecemeal fashion to assess their impact on a model output quantity, such as ECS. Through sensitivity tests, scientists concluded that the high ECS values in many climate models were likely due to more realistic parameterizations of cloud feedback (Gettelman et al., 2019; Zelinka et al., 2020). This is surprising because the models used in CMIP6 have a better representation of the current climate, but the increased realism in cloud parameterization yields an unrealistic result in ECS. How is it that more realistic models can get worse results? Also, if the modules of a model are inextricably linked, how can scientists use sensitivity tests to find what is wrong and fix the model? It might be that fixing cloud parameterization only works because of compensating factors elsewhere. For example, radiative forcing may be compensating for the model’s climate sensitivity (Kiehl, 2007). The recent failure of models to yield an appropriate ECS value presents us with an opportunity to revisit concepts such as holism, realism, and underdetermination (also called equifinality) in current climate models. In this talk, I focus on attempts to diagnose the source of the high ECS in some CMIP6 models, Community Earth System Model 2 in particular, using techniques such as sensitivity testing and feedback analysis. While these techniques can go a long way towards addressing holism, there are limits to their applicability, which I discuss. I conclude by drawing some broader lessons about the more subtle relations between holism, fuzzy modularity, and underdetermination in complex simulation models. When is a model inadequate for a purpose? 01:30PM - 04:15PM
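To make the "one-by-one" sensitivity-testing procedure described above concrete, here is a purely illustrative Python sketch; run_model, the scheme names, and the numbers are hypothetical placeholders, not any actual climate-model interface or result.

# Hypothetical stand-in for running a climate model configuration and
# returning an output metric such as ECS; the numbers are made up.
def run_model(cloud_scheme, aerosol_scheme):
    base = 3.0
    bump = 0.9 if cloud_scheme == "new_cloud" else 0.0
    bump += 0.2 if aerosol_scheme == "new_aerosol" else 0.0
    return base + bump

baseline = run_model("old_cloud", "old_aerosol")
experiments = {
    "swap cloud parameterization only": ("new_cloud", "old_aerosol"),
    "swap aerosol parameterization only": ("old_cloud", "new_aerosol"),
}
# One-at-a-time sensitivity test: change a single component relative to
# the baseline and record how the output metric shifts.
for label, (cloud, aerosol) in experiments.items():
    value = run_model(cloud, aerosol)
    print(f"{label}: metric = {value:.2f} (delta {value - baseline:+.2f})")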
Presented by :
Wendy Parker, Virginia Tech Equilibrium climate sensitivity is a measure of the sensitivity of Earth’s near-surface temperature to increasing greenhouse gas concentrations. When numerous state-of-the-art climate models recently indicated values for climate sensitivity outside of a range that had been stable for decades, climate scientists faced a dilemma. On the one hand, these high-sensitivity models had excellent pedigrees, incorporated sophisticated representations of physical processes, and had been demonstrated to perform more than acceptably well across a range of performance metrics; their developers considered them at least as good as, or even a significant improvement upon, previous generations of models. The common practice of “model democracy” would suggest giving their results equal weight alongside those of other state-of-the-art models. On the other hand, doing so would generate estimates of climate sensitivity and future warming substantially different from – and more alarming than – estimates developed over decades of previous investigation. Faced with this situation, climate scientists sought to further evaluate the quality of the CMIP6 models. I will show how their efforts, and their subsequent decisions to downweight or exclude some models when estimating future warming, but not when estimating some other variables, illustrate an adequacy-for-purpose approach to model evaluation. I will also critically examine some of the particular evaluation strategies and tests employed, with the aim of extracting some general insights regarding the evaluation of model inadequacy. | ||
03:00PM - 03:15PM Virtual Room | Coffee Break | ||
04:30PM - 06:30PM Kings 5 | President's Plenary - “Science without Scientists: Could Science be Automated?” Advances in artificial intelligence have raised the possibility that important areas of scientific research, including experiment design and theory creation, might be automated. One prominent example with potentially widespread societal implications is efforts to automate drug discovery. The possibility of automated science seems to hold the promise of rapid acceleration of scientific progress, especially since the past few decades have revealed complexities in many scientific subjects that are difficult for human scientists to grasp. It also raises many concerns. Is automated science capable of thinking outside the box created by its learning process? Does automation limit scientific possibilities compared to human efforts? What will be the future role(s) of human scientists? What are the possible consequences of incomplete human understanding of scientific results, with significant potential consequences for the distribution of responsibility? In this forum, a distinguished panel of experts from philosophy and science will discuss these and other issues.
Panelists:
Professor Robert F. Murphy, Ray and Stephanie Lane Emeritus Professor of Computational Biology, Carnegie Mellon University
Professor Alex John London, Clara L. West Professor of Ethics & Philosophy and Director of the Center for Ethics and Policy, Carnegie Mellon University
Dr. Atoosa Kasirzadeh, Chancellor's Fellow in Philosophy and the Futures Institute, University of Edinburgh
Moderators:
Professor David Danks, Professor of Data Science & Philosophy, University of California San Diego
Professor John Dupré, Professor of Philosophy and Director of Egenis, The Centre for the Study of Life Sciences, University of Exeter | ||
05:00PM - 08:30PM Kings Garden 1, 2 | Book Exhibit | ||
06:30PM - 07:30PM Kings 3, 4 | PSA Opening Reception | ||
07:30PM - 09:00PM Kings 5 | PSA2022 Public Forum - Community-Led Public Health Environmental pollutants affect all of us, but not all of us equally. How can local community knowledge and values help proactively shape research and policy to improve our public health? The PSA Public Forum invites interested people to join a conversation with scientists and philosophers of science to consider these issues.
Speakers:
Maureen Lichtveld, MD, MPH - Dean, School of Public Health; Professor, Environmental and Occupational Health; Jonas Salk Chair in Population Health, University of Pittsburgh
Anya Plutynski - Professor of Philosophy, Washington University in St. Louis
Kevin Elliott - Professor, Lyman Briggs College, Department of Fisheries and Wildlife, and Department of Philosophy; Michigan State University
Moderator:
Sandra D. Mitchell - Distinguished Professor, Department of History and Philosophy of Science, University of Pittsburgh | ||
09:15PM - 10:30PM SkyLounge | PSA2022 Welcome Reception (Grads, Early Career, New Attendees) |
Day 2, Nov 11, 2022 | |||
08:00AM - 09:00AM Kings 3, 4 | Editorial Board Breakfast | ||
08:30AM - 06:00PM Kings Terrace | Nursing Room | ||
08:30AM - 06:00PM Kings Plaza | Childcare Room | ||
08:30AM - 06:00PM Kings Garden 1, 2 | Book Exhibit | ||
09:00AM - 11:45AM Duquesne | Philosophy of Machine Learning in light of the History of Philosophy Speakers
Cameron Buckner, University Of Houston
Hayley Clatterbuck, Reviewer, University Of Wisconsin-Madison
Kathleen Creel, Assistant Professor, Northeastern University
Jan-Willem Romeijn, University Of Groningen
Cameron Clarke, New York University
Moderators
Anna-Mari Rusanen, University Of Helsinki In this symposium, we use the history of philosophy to illuminate specific, grounded aspects of contemporary practice in machine learning, raising new problems and proposing new frameworks for the philosophy of machine learning. Our symposium draws on 200 years of empiricism and its critics, ranging from the eighteenth century's Hume, Smith, de Grouchy, Locke, and Leibniz to the twentieth century's Carnap, Putnam, and Goodman. Our methodology is shared with contemporaneous work in the philosophy of machine learning such as Buckner (2018, 2020), Chirimuuta (2020), Haas (2022), Nefdt (2020), Sterkenburg & Grunwald (2020). Our five topics bear a family resemblance to one another. Three (Buckner, Creel, and Clatterbuck) draw on historical themes of empiricism and rationalism. Three (Buckner, Creel, and Romeijn) address the conditions necessary for machine systems to succeed at learning from data produced by humans. Two (Buckner and Clarke) provide concrete proposals for creating systems or agents that can model, understand, and intervene in the social world. Two (Creel and Romeijn) address the preconditions and limitations of automated science and inference. And two (Clatterbuck and Clarke) characterize existing debates in machine learning within a broader historical frame, allowing the central questions to be productively re-oriented. Hume’s Externalizing Gambit 09:00AM - 11:45AM
Presented by :
Hayley Clatterbuck, Reviewer, University Of Wisconsin-Madison From empiricist sentimentalism to moral machines: How empiricist moral psychology can inform artificial intelligence 09:00AM - 11:45AM
Presented by :
Cameron Buckner, University Of Houston Counterpossibles and social (scientific) counterfactuals 09:00AM - 11:45AM
Presented by :
Cameron Clarke, New York University Machine learning, or: the return of instrumentalism 09:00AM - 11:45AM
Presented by :
Jan-Willem Romeijn, University Of Groningen Machine Molyneux Problems 09:00AM - 11:45AM
Presented by :
Kathleen Creel, Assistant Professor, Northeastern University | ||
09:00AM - 11:45AM Sterlings 1 | The Representational Theory of Measurement and Physical Quantities Speakers
Marissa Bennett, Graduate Student, University Of Toronto
Zee Perry, NYU Shanghai
Eran Tal, McGill University
Jo Wolff, University Of Edinburgh
Moderators
Michael Miller, University Of Toronto Quantities are central to a number of important facets of scientific practice. They are the properties over which our theories generalize, and which many of our experiments provide measurements of. Our contemporary understanding of quantities stems in large part from the Representational Theory of Measurement (RTM). Perhaps the central achievement of this theory is that it provides a compelling account of the conditions under which an attribute can be represented numerically. While this is a critical component of an analysis of quantities, RTM adopts a number of substantive assumptions and it leaves open a number of critical issues. This symposium brings together several of the leading figures in recent discussions of physical quantities with the aim of interrogating these assumptions and facing up to these open issues. Does RTM offer a reductionist approach to quantitativeness? 09:00AM - 11:45AM
Presented by :
Jo Wolff, University Of Edinburgh The Representational Theory of Measurement (RTM) offers a formal theory of measurement, with measurement understood as a homomorphic mapping between two types of structure: an empirical relational structure on the one hand, and a numerical structure on the other. These two types of structure are characterised axiomatically, as sets with certain relations defined on them. For a quantitative attribute like mass, for example, we find an empirical relational structure of weights with ordering and concatenation relations defined over them, and a numerical structure provided by the real numbers, less-than, and addition to represent the empirical relational structure. The numerical structure serves merely as a representational tool to capture the relationships between the weights; and the mathematical relations of ordering and addition are interpreted concretely as physical orderings and concatenations in the context of particular measurement operations. RTM has sometimes been interpreted as offering a kind of reductionist approach to quantitativeness, for two reasons: (1) RTM takes numbers to play a purely representational role in measurement; and (2) RTM takes a permissivist view of numerical representations: many kinds of attributes can be numerically represented, not just traditional quantities like length or mass. Insofar as we equate quantitativeness with being numerical, it would seem that RTM takes a reductionist view of quantitativeness, because it takes a deflationary view of numerical representation: the only thing you lose if you omit numerical representations is convenience. I argue here that, on the contrary, RTM not only does not commit us to a reductionist view of quantitativeness, but in fact provides us with a novel criterion for quantitativeness, which shows why reductionism about quantitativeness is so difficult. The first part of my argument rejects the view that quantitativeness is best understood as being numerical. RTM demonstrates quite clearly that numerical representation is neither necessary nor sufficient for an attribute's being quantitative. It is not sufficient, because many intuitively non-quantitative properties can be represented numerically using the tools of RTM; in general, numerical representability is pretty easy within the RTM framework. It is not necessary, because RTM itself shows how empirical relational structures can be represented non-numerically (for example geometrically). Having rejected the claim that quantitativeness means being numerical, I then show in part 2 of my argument that RTM in fact provides a novel criterion for quantitativeness. This proceeds in two steps: first I show how uniqueness theorems provide a reason for thinking that only some numerical representations are representations of quantities, and second, how we can characterise the structures amenable to such representations using the resources of RTM. This yields a criterion for quantitativeness as a feature of certain kinds of structures. Since RTM's own conception of measurement is that of a homomorphic relationship between two structures, we shouldn’t expect one of these structures to count as quantitative by this new criterion, while the other one is not. Reducing quantitativeness is harder, not easier, from the perspective of RTM. Who Needs Magnitudes? 09:00AM - 11:45AM
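In symbols, the homomorphism just described (a standard RTM-style gloss, using mass as the running example) is a map $\varphi$ from the set $A$ of weights to the reals such that

\[
a \succsim b \iff \varphi(a) \ge \varphi(b), \qquad \varphi(a \circ b) = \varphi(a) + \varphi(b),
\]

where $\succsim$ is the empirical ordering of the weights and $\circ$ is concatenation (e.g., placing two weights in the same pan); RTM's representation theorems say when such a $\varphi$ exists, and its uniqueness theorems say how far it is fixed (for mass, up to a positive scale factor, i.e., a choice of unit).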
Presented by :
Eran Tal, McGill University This paper examines the importance of the concept of magnitude to the philosophy of measurement. Until the mid-twentieth century, magnitude was a central concept in theories of measurement, including those of Kant (1781, A162/B203), Helmholtz (1887), Hölder (1901), Russell (1903, Chapter XIX), Campbell (1920) and Nagel (1931). In the 1950s, the concept of magnitude began to fade from discussions on the foundations of measurement. The standard presentation of the Representational Theory of Measurement (Krantz et al., 1971) does not mention magnitudes. Similarly, the International Vocabulary of Metrology analyzes measurement by using the concepts of quantity and quantity value, with scarce reference to magnitudes (JCGM, 2012). This paper argues that the concept of magnitude is an important component of any satisfactory theory of measurement, and that it is not reducible to the concepts of quantity, number, and quantity value. I begin by showing that numbers cannot be assigned directly to objects or events, but only to magnitudes, which are aspects of objects or events that admit of ordering from lesser to greater. Building on Wolff (2020), I use the determinable-determinate distinction to analyze the relation between quantities and their magnitudes. I then show how the concept of magnitude can be used to resolve two ongoing debates concerning the foundations of measurement: (1) the debate concerning the nature of measurement units; (2) the debate concerning the scope and limits of the Representational Theory of Measurement. Discussions concerning the nature of units date back at least to the early nineteenth century, when they were central to the development of the analytical theory of heat (Roche, 1998, Chapter 8; de Courtenay, 2015). Recently, these debates re-emerged as part of the drafting of the ninth edition of the Brochure of the International System of Units (Mari & Giordani, 2012; Mari et al., 2018; BIPM, 2019). The debating parties disagreed on whether units are best understood as quantities or as quantity values. I argue instead that units are best viewed as magnitudes. My proposal generalizes across different modes of unit definition (e.g. by reference to specific objects, kinds of objects, and theoretical constants), and leads to a straightforward understanding of quantity values as mathematical relations among magnitudes. The concept of magnitude similarly sheds light on debates concerning the scope and limits of the Representational Theory of Measurement (Baccelli, 2018; Heilmann, 2015). RTM axioms can be interpreted in at least two ways: as models of data gathered by empirically investigating concrete objects, or as conceptual relationships among magnitudes. The first interpretation, advanced by Patrick Suppes and Duncan Luce, runs into difficulties when applied to real data structures, which are often less well-ordered than RTM allows. I argue that once RTM axioms are reinterpreted as expressing relations among magnitudes, these problems are successfully avoided, and the most important accomplishments of RTM are preserved. These examples show the centrality of the concept of magnitude to the study of the foundations of measurement. Against Quantitative Primitivism 09:00AM - 11:45AM
Presented by :
Zee Perry, NYU Shanghai In this paper, I introduce a novel approach to a problem that is, in the dominant literature, often thought to admit of only a partial solution. The problem of quantity is the problem of explaining why it is that certain properties and relations that we encounter in science and in everyday life can be best represented using mathematical entities like numbers, functions, and vectors. We use a real number and a unit to refer to determinate magnitudes of mass or length (like 2kg, 7.5m etc.), and then appeal to the arithmetical relations between those numbers to explain certain physical facts. I cannot reach the coffee on the table because the shortest path between it and me is 3ft long, while my arm is only 2.2ft long, and 2.2 < 3. The pan balance scale does not tilt because one pan holds a 90g tomato while the other holds two strawberries, of 38g and 52g respectively, and 38 + 52 = 90. While they provide a convenient way to express these explanations, the arithmetical less-than relation, or the ‘+’ and ‘×’ operations on the real numbers, are not really part of the physical explanations of these events. They just represent explanatorily relevant features inherent in the physical systems described. To solve the problem of quantity is to provide an account of this “quantitative structure”, those physical properties and relations really doing the explaining. The vast majority of approaches in the literature have limited themselves to a much less ambitious project: Rather than explain quantitativeness in its entirety, they strive to leave “only” a small amount of quantitativeness unexplained. Primitivism about quantitativeness, or quantitative primitivism, is the position that some quantitative structure cannot be explained. I will argue that the problem of quantity, by its very nature, does not admit of any partial solutions. A reductive-explanatory account of quantitativeness is specifically one that provides an adequate explanation of quantitative structure without leaving any quantitative structure as an unexplained, primitive posit. This is done by reducing the quantitative structure to a more fundamental, non-quantitative base. Non-primitivist accounts also allow for a novel dissolution of a problem which has dominated contemporary debates about the metaphysics of quantity, the debate between “absolutists” who think that the fundamental quantitative notions are properties (like “weighs 5g” or “is 2m long”), and “comparativists” according to whom the fundamental notions are comparative relations (like “is twice as massive as” or “is 2m shorter than”). This dispute, I argue, only makes sense from a primitivist perspective. For the non-primitivist, there is no debate to be had. There is no fundamental quantitative structure, and so there is no room for a dispute about what kind of fundamental quantitative structure we accept. The underlying intuitions which guide much of these debates (for example about whether things would be different if everything’s mass was doubled) can still be understood by the non-primitivist. Indeed, non-primitivist accounts can give a clearer and more explanatory judgement on these cases than any primitivist theory could. The Conventionality of Real-Valued Quantities 09:00AM - 11:45AM
Presented by :
Marissa Bennett, Graduate Student, University Of Toronto Non-discrete quantities such as mass and length are often assumed to be real-valued. Rational-valued measurement outcomes are typically thought of as approximations of the ‘real’ values of their target quantity-instances. For example, the representational theory of measurement (RTM) models measurement as the construction of a function that sends a set of objects obeying certain qualitative axioms into the real numbers, such that the structure of the relations holding among the objects is preserved by the order and addition relations on the real numbers. The original architects of the modern version of RTM (Krantz et al.) clearly acknowledge that this choice of representing mathematical structure is conventional, being influenced by pragmatic considerations related to computational simplicity, and they consider alternative representing structures that illustrate this conventionality. But whereas operations alternative to ordinary addition for additive measurement are considered, sets alternative to the real numbers are not. The formal results of RTM have recently been applied in formulating realist views of quantity, but the assumption that the real numbers are best suited for representing the structure of non-discrete quantities has not yet been examined. At the core of the standard RTM representation and uniqueness theorems is Hölder's theorem, which Hölder originally proved from a set of axioms that he regarded as “the facts upon which the theory of measurable (absolute) quantities is based”. These axioms include Dedekind's axiom of continuity, reflecting the close conceptual connections between the real numbers and ‘continuous’ quantities. Hölder regarded quantities as having magnitudes as axiomatized by Euclid, and understood Euclid's definition of proportion in terms of Dedekind cuts. Krantz et al. adapt Hölder's theorem for their operationalization of quantitative concepts and construct measurement scales that are real-valued, but replace Dedekind's axiom with the Archimedean axiom, which they see as better-suited to their empiricist interpretation. But even on a realist understanding of quantity, we argue, there are good reasons to doubt the assumption that classical physical quantities are genuinely continuous. Our paper first reproduces the results of Krantz et al. in a realist context, where a quantity and its magnitudes are understood in terms of determinables and their determinates. Understanding a physical quantity (such as mass) as a determinable property emphasizes the metaphysical significance of representing it as having the structure of the reals. We then prove analogous representation and uniqueness theorems, thus establishing that a determinable quantity constrained by the same qualitative axioms can be represented by the rational numbers. This shows that RTM methods do not inherently provide justification for representing a quantity as having the structure of the reals, and that the appearance of such justification can be attributed to stipulations of either continuity or uncountability of the non-numerical target of representation. We argue that, if the formal results of RTM are to inform a metaphysical view of quantity, then the conventionality of the choice of the real numbers as the representing structure needs to be explicitly justified. | ||
09:00AM - 11:45AM Sterlings 3 | The Science of Diversity and Diversity in Science Speakers
Liam Kofi Bright, Presenter, London School Of Economics
Cailin O'Connor, Presenter, UC Irvine
Hannah Rubin, Presenter, University Of Notre Dame
Jingyi Wu, University Of California, Irvine
Erin Hengel, Presenter, London School Of Economics
Moderators
Rebecca Korf, University Of California, Irvine Is social and cognitive diversity beneficial for scientific knowledge production? How do we promote diversity in science? Are there gender or racial gaps in productivity, quality, or citation in academic publications? What are the potential causes for such gaps, and how do we close them? In the past few years, a new subfield in philosophy of science has emerged, where researchers use scientific methods to study knowledge production in scientific communities, paying special attention to the roles of social and cognitive diversity. This work is often methodologically continuous with other disciplines such as economics, sociology, and biology. It is also topically continuous with traditional work in social and feminist epistemology as well as other disciplines that contribute to "the science of science." More recently, several philosophers of science have also begun to apply similar methods to study diversity and inequity beyond academia. In this symposium, we bring together some of the newest work from multiple disciplines that employs a variety of scientific methods to study diversity in and outside of science. Social Dynamics and the Evolution of Disciplines 09:00AM - 11:45AM
Presented by :
Hannah Rubin, Presenter, University Of Notre Dame Why do scientific disciplines appear, disappear, merge together, or split apart? We might point to major events: the creation of new journals and departments, significant innovations, or new technologies. However, at the heart of things is a social process involving interactions among individual scientists, deciding who to collaborate with and on what topic. The nature of these interactions and their short-term consequences on scientific inquiry have been studied in some detail, as has the longer-term evolution of scientific disciplines throughout history. Yet this leaves unanswered questions about how the interactions among those scientists give rise to broad, long-term trends in the evolution of science. To bring together these two areas of research, we provide a new model in which the dynamics of scientific collaboration affects the evolution of scientific fields. We build on Sun et al.’s (2013) model, in which scientists choose collaborators based on whether they have collaborated in the past, while papers and scientists accumulate discipline associations based on these collaborations. Ultimately, new scientific fields emerge as sets of scientists cluster together, or merge as previously distinct disciplines start to blend together. While their model captures many features of how scientific fields have evolved, key aspects of the short-term interactions among scientists are not incorporated, leading to unexplored aspects of the evolution of disciplines. In particular, publications have different potential impacts depending on various factors, e.g. the reputation of the scientists. Incorporating this aspect of scientific work allows us to explore two broad historical trends in terms of the social interactions among scientists which underpin them. First, new scientific fields are often spearheaded by a few prominent scientists. While we may explain this with reference to works of genius or larger-than-life personalities, it can also be explained with reference to the dynamics of collaboration and credit accumulation. If the impact of previous work (i.e. the credit accumulated for it) affects future productivity and number of collaborations, as well as the impact of future work (Petersen et al. 2014), new disciplines may emerge around key figures regardless of quality of work or personality. Further, social positioning may be a better predictor of ability to found new disciplines than a scientist’s personal characteristics. Second, there seems to be a ‘contagion of disrespect’, whereby research in subfields associated with marginalized groups is increasingly dismissed as unimportant to the production of scientific knowledge (Schneider et al. forthcoming). While biased evaluation of work surely plays a role in this, collaboration dynamics are also likely part of the story. There is often unequal division of credit within collaborations, where members of marginalized groups receive less recognition for their contributions compared to members of a dominant group. This can affect both future credit accumulation and the likelihood of collaborating across social identity lines (Rubin and O’Connor 2018). If collaborations become increasingly clustered according to social identity, while results from particular social identity groups generate less credit, this gives rise to a contagion of disrespect. Petersen, A.M., Fortunato, S., Pan, R.K., Kaski, K., Penner, O., Rungi, A., Riccaboni, M., Stanley, H.E. and Pammolli, F. (2014). 
Reputation and Impact in Academic Careers. Proceedings of the National Academy of Sciences, 111(43), 15316-15321. Rubin, H., & O’Connor, C. (2018). Discrimination and Collaboration in Science. Philosophy of Science, 85(3), 380-402. Schneider, M.D., Rubin, H., & O’Connor, C. (forthcoming). Promoting Diverse Collaborations. The Dynamics of Science: Computational Frontiers in History and Philosophy of Science, eds. G. Ramsey & A. De Block. Sun, X., Kaur, J., Milojević, S., Flammini, A., & Menczer, F. (2013). Social Dynamics of Science. Scientific Reports, 3(1), 1-6. On the Stability of Racial Capitalism 09:00AM - 11:45AM
Presented by :
Liam Kofi Bright, Presenter, London School Of Economics
Cailin O'Connor, Presenter, UC Irvine What is the connection between capitalism and racial hierarchy? In line with the theoretical tradition known as “the theory of racial capitalism,” we show that the latter can functionally support the former. As a social construction, race has just those features which allow it to facilitate stable, inequitable distributions of resources. We support this claim using techniques from evolutionary game theory and the theory of cultural evolution. The theory of racial capitalism proposes an origin story for how the global economy came to be racially stratified and (in the main) organized along capitalist lines. The proposal is that the very same events led to both: Europe was already organizing its workforces along proto-racial lines at about the time it was spreading its economic form through colonialism. As such, European expansion ended up simultaneously bringing capitalism and racial organization in its wake. However, many scholars make a stronger claim than noting the mere historical contingency that racism and capitalism co-occurred. Many argue that this coincidence is functional: the development of racial hierarchy helped the capitalist social form survive and perpetuate itself. This is because capitalism will inevitably generate an unequal distribution of control over factors of production and division of the resulting social surplus. Some means of explaining, justifying, and continuing this rampant and easily observed inequality is required, and, in particular, one that allows elites to retain their place. Race and racialism, by being easily observable, hard to change, and passed down across generations, worked nicely. But why do these features of race work to stabilize capitalist systems? Using modelling techniques from evolutionary game theory, and drawing on some previous results, we show how oppressive schemes employing race are especially well-suited for underpinning stable and highly unequal systems of dividing labor and reward. We argue that these models provide a functional explanation for the co-occurrence of race and capitalism that vindicates arguments from racial capitalist theory. We describe in detail several different models intended to illuminate the functional role that various aspects of race play in capitalist systems. We start with the fact that race is hard to change or imitate; that is, it is fairly inflexible. We then discuss the fact that race is often fairly easy to identify compared to alternative tags or markers. And last we discuss the heritability of race. In each case we show how these features underpin systems of inequality. In models without these features, inequitable systems are unlikely to emerge. In models with them, inequality is stabilized. We also show that if powerful groups were to select some categorical system to ground inequality, they would benefit themselves by picking race for just these reasons. We conclude by discussing the normative political consequences of this relationship. Gender and the time cost of peer review 09:00AM - 11:45AM
Presented by :
Erin Hengel, Presenter, London School Of Economics In this paper, we investigate one factor that can directly contribute to—as well as indirectly shed light on the other causes of—the gender gap in academic publications: time spent in peer review. To study our problem, we link administrative data from an economics field journal with bibliographic and demographic information on the articles and authors it publishes. Our results suggest that in each round of review, referees spend 4.4 more days reviewing female-authored papers and female authors spend 12.3 more days revising their manuscripts. However, both gender gaps decline—and eventually disappear—as the same referee reviews more papers. This pattern suggests novice referees initially statistically discriminate against female authors; as their information about and confidence in the refereeing process improves, however, the gender gaps fall. Better than Best: Epistemic Landscapes and Diversity of Practice in Science 09:00AM - 11:45AM
Presented by :
Jingyi Wu, University Of California, Irvine When solving a complex problem in a group, should we always choose the best available solution? In this paper, I build simulation models to show that, surprisingly, a group of agents who randomly follow a better available solution than their own can end up outperforming a group of agents who follow the best available solution. The reason for this relates to the concept of transient diversity in science (Zollman 2010). In my models, the “better” strategy preserves a diversity of practice for some time, so agents can sufficiently try out a range of solutions before settling down. The “best” strategy, in contrast, may lock the group in a suboptimal position that prevents further exploration. In a slogan, “better” beats “best” (a toy sketch of the two update rules follows the references for this session). My models are adapted from Lazer and Friedman’s (2007) model, where a network of agents is tasked to solve an NK landscape problem. Here, agents search in a solution space with multiple “peaks.” They only have knowledge of their neighbors’ solutions, as well as (sometimes) the results of limited local exploration, so they may fail to ever discover the globally optimal solution(s). The NK landscape model can be fruitfully applied to cultural innovation and problem solving, especially to complex problems where optimal solutions are not readily accessible from all starting points. Moreover, NK landscape models are more general and realistic than other epistemic landscape models (e.g. Weisberg and Muldoon (2009)), due to their ability to represent multi-dimensional and interconnected solutions (Alexander et al. 2015). My result of “better” beating “best” has several implications in social epistemology. First, this is another instance of the Independence Thesis, which states that individual and group decision-making can come apart (Mayo-Wilson et al. 2011). In my models, every round, an agent’s epistemic gain when they follow the “better” strategy is no greater than when they follow the “best” strategy, yet they have greater long-term gain in a social setting. Second, Zollman (2007, 2010) and Lazer and Friedman (2007) previously showed that a less connected community is more likely to arrive at superior beliefs or solutions, due to the transient diversity present. But limiting connectivity for the gain of diversity of practice may be too costly or impractical (Rosenstock et al. 2017). My result suggests that we can achieve comparable benefits if instead people choose “better.” Indeed, a completely connected group that follows the “better” strategy can outperform a very sparsely connected group that follows the “best” strategy. Finally, insofar as some approaches to a problem are associated with particular social groups (Longino 1990; Fehr 2011), the “better” strategy also makes it more likely to preserve solutions arising from marginalized perspectives. These solutions may not be optimal at a given time, perhaps due to a historical lack of resources, but may nevertheless become promising after further exploration. Alexander, J. M., Himmelreich, J., and Thompson, C. (2015). Epistemic Landscapes, Optimal Search, and The Division of Cognitive Labor. Philosophy of Science, 82(3):424–453. Fehr, C. (2011). What Is in It for Me? The Benefits of Diversity in Scientific Communities. In Feminist Epistemology And Philosophy Of Science, pages 133–155. Springer. Lazer, D. and Friedman, A. (2007). The Network Structure of Exploration and Exploitation. Administrative Science Quarterly, 52(4):667–694. Longino, H. E. (1990). 
Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton University Press. Mayo-Wilson, C., Zollman, K. J., & Danks, D. (2011). The Independence Thesis: When Individual and Social Epistemology Diverge. Philosophy of Science, 78(4), 653-677. Rosenstock, S., Bruner, J., & O’Connor, C. (2017). In Epistemic Networks, Is Less Really More? Philosophy of Science, 84(2), 234-252. Weisberg, M. and Muldoon, R. (2009). Epistemic Landscapes and the Division of Cognitive Labor. Philosophy of Science, 76(2):225–252. Zollman, K. J. (2007). The Communication Structure of Epistemic Communities. Philosophy of Science, 74(5):574–587. Zollman, K. J. (2010). The Epistemic Benefit of Transient Diversity. Erkenntnis, 72(1):17. | ||
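To make the two update rules concrete, here is a minimal, self-contained toy simulation in the spirit of the Lazer and Friedman (2007)-style NK landscape models described in the abstract above; every detail (parameter values, the fully connected group, the local-exploration rule) is an illustrative assumption rather than the author's actual model, and which strategy ends up ahead will vary with the seed and parameters.

# Toy comparison of the "best" and "better" update rules on an NK landscape.
# Illustrative assumptions only; not the author's actual model or parameters.
import itertools
import random

random.seed(0)
N, K = 12, 3                 # bits per solution; epistatic interactions per bit
N_AGENTS, ROUNDS = 30, 60    # fully connected group; number of update rounds

# Each bit's fitness contribution depends on itself and K other bits, via a random table.
neighbors_of_bit = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]
tables = [{bits: random.random() for bits in itertools.product((0, 1), repeat=K + 1)}
          for _ in range(N)]

def fitness(sol):
    return sum(tables[i][(sol[i],) + tuple(sol[j] for j in neighbors_of_bit[i])]
               for i in range(N)) / N

def run(strategy):
    agents = [tuple(random.randint(0, 1) for _ in range(N)) for _ in range(N_AGENTS)]
    for _ in range(ROUNDS):
        new_agents = []
        for sol in agents:
            my_fit = fitness(sol)
            better = [s for s in agents if fitness(s) > my_fit]
            if better:
                # "best": copy the best available solution; "better": copy a random better one.
                new_sol = max(better, key=fitness) if strategy == "best" else random.choice(better)
            else:
                # Limited local exploration: flip one bit and keep it only if it helps.
                i = random.randrange(N)
                cand = sol[:i] + (1 - sol[i],) + sol[i + 1:]
                new_sol = cand if fitness(cand) > my_fit else sol
            new_agents.append(new_sol)
        agents = new_agents
    return sum(fitness(s) for s in agents) / N_AGENTS

# The point is the mechanism, not these particular numbers.
print("mean final fitness, 'best' rule:  ", round(run("best"), 3))
print("mean final fitness, 'better' rule:", round(run("better"), 3))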
09:00AM - 11:45AM Smithfield | Symposium on Climate Risks: Impact, Adaptation, and Vulnerability Speakers
Michael Mann, Penn State University
Marina Baldissera Pacchetti, Research Fellow, University Of Leeds
Karen Kovaka, University Of California, San Diego
Kate Nicole Hoffman, University Of Pennsylvania
Michael Weisberg, University Of Pennsylvania
Moderators
Casey Helgeson, Penn State University This symposium focuses on a constellation of issues concerning climate risks and responses to these risks. In the climate policy literature, this is often referred to as adaptation or climate change impacts, vulnerability, and adaptation. The last decades have seen not only substantial improvements in our ability to measure past climatic change and the anthropogenic forcing that caused it, but also to predict future changes. This has increased scientists' ability to attribute current and future extreme weather events to climate change. In light of this ability, and the growing recognition that even optimistic mitigation scenarios will still result in substantial, hazardous changes to the environment, scientific and political attention has focused on climate risk reduction and adaptation alongside mitigation. The papers in this session, a first to focus on adaptation at PSA, will cover key issues including how best to attribute extreme events to climate change, how best to characterize uncertainty in the context of climate adaptation, what norms or constraints should guide judgments about good adaptation actions, and how adaptation progress could be measured at a global level. What is the Global Goal on Adaptation and How Should We Measure It? 09:00AM - 11:45AM
Presented by :
Michael Weisberg, University Of Pennsylvania Carbon emissions must be halved over the next decade to hold the global average temperature increase to the range in the Paris Agreement, “recognizing that this would significantly reduce the risks and impacts of climate change.” But even this bold mitigation will not eliminate the risks humans face from climate change. In the coming years we expect to see increased climate change disruptions related to rising sea levels, extreme droughts, intensified tropical cyclones, and terrestrial and marine heat waves. These climate change hazards interact with increasing vulnerability and exposure of human societies, compounding risks in complex ways over time. Because of this, the Paris Agreement also established “the global goal on adaptation of enhancing adaptive capacity, strengthening resilience and reducing vulnerability to climate change, with a view to contributing to sustainable development and ensuring an adequate adaptation response in the context of the temperature goal.” But what exactly does this goal consist of? It isn’t specified and is an active topic of debate and negotiation. While there have been substantial advances in attributing extreme weather to climate change and in understanding risk dynamics, it remains difficult to specify concrete goals for reducing these climate risks. In contrast, the Intergovernmental Panel on Climate Change (IPCC) Special Report on Global Warming of 1.5°C and Working Group III’s contribution to the Sixth Assessment Report provided four global mitigation pathways detailing how many gigatons of CO2 emissions needed to be eliminated and the timelines for doing so to avert dangerous anthropogenic climate change. But nothing like this exists in relation to climate change adaptation, neither in terms of precise adaptation goals nor in terms of prospective scenarios to minimize and address adverse climate impacts at the global scale, leaving many questions unresolved. This paper explores the state of adaptation science, understood from the global perspective, and develops a new formulation of the global goal on adaptation, tied to the burning embers and sustainable development frameworks. This formulation of the goal is one that, if followed, will help ensure that sustainable development remains attainable; it is also measurable, providing guidance to countries trying to make policies to reduce climate risk. Climate Adaptation and Privileging the “Natural” 09:00AM - 11:45AM
Presented by :
Kate Nicole Hoffman, University Of Pennsylvania
Karen Kovaka, University Of California, San Diego Two ideas that run through much of Western environmentalist thought are (1) nature is that which is untouched by humans, and (2) intervening in nature is generally bad, morally and epistemically. These ideas continue to be quite influential in environmental conservation. They define what successful outcomes look like, and what strategies are allowable for promoting these outcomes. An important contribution of environmental philosophy and philosophy of science has been to question these ideas about nature and intervening in nature. There is, for example, a rich tradition of challenging the human/nature conceptual dichotomy (e.g. Callicott 2000) and a growing literature that critiques excessive caution about conservation interventions (e.g. Brister et al. 2021). Our paper accepts these critiques and takes them as a starting point. But where the existing literature focuses almost exclusively on environmental conservation, we examine policy and discourse around climate change adaptation. We show that suspicion about “hi-tech” interventions that are perceived as unnatural, such as marine cloud brightening and genetic engineering, is common in adaptation discussions and decision-making (e.g. Van Haperen et al. 2012). At the same time, there is often uncritical acceptance of nature-based solutions (the use of natural features and processes to address environmental challenges) (Holl and Brancalion 2020). This bias against the technological and in favor of what is perceived as natural is familiar, and it has unfortunate consequences. It prejudices planners, managers, and the public against promising adaptation policies and unnecessarily pits so-called “green” versus “gray” solutions against one another, when in fact combining the two is likely crucial for effective adaptation (Seddon et al. 2020). What alternative heuristics might make better guides for decision-making around climate change adaptation? We explore this question in the rest of the paper and introduce a framework for communicating about and evaluating adaptation strategies that emphasizes three key insights from the adaptation literature (e.g. Browder et al. 2019). First, particular adaptation solutions only make sense as elements of larger adaptation packages, which need to deliver benefits over multiple time-scales. Second, the novelty or technical complexity of any given adaptation solution is much less important than its testability. Third, ongoing monitoring of adaptation solutions is critical, both for winning public support and addressing knowledge gaps. Beyond uncertainty quantification: new challenges in the epistemology of climate change adaptation 09:00AM - 11:45AM
Presented by :
Marina Baldissera Pacchetti, Research Fellow, University Of Leeds Anthropogenic climate change (CC) poses a serious global threat, and human responses to this problem are usually framed in terms of mitigation (the reduction of human actions that contribute to climate change) and adaptation (the response to actual or expected impacts of changes in the climate with the aim of reducing vulnerabilities and enhancing opportunities). So far, philosophers interested in climate change science have mostly focused on the epistemic and value theoretic issues in climate modelling (Katzav and Parker 2018), independently of how information flows from experts (climate scientists) to lay people (the public, policy makers, etc.), and of the decision-types that take climate change information into account. One promising way in which philosophers have analysed the dependency between the epistemology of climate science and the purpose it serves is the so-called “adequacy-for-purpose” view (Parker 2009, 2020; Baumberger et al. 2017), a pragmatic approach to model evaluation. However, this approach only focuses on physical climate modelling, possibly leaving out some important issues that arise in generating decision-relevant information in the context of climate change adaptation. Here, we take a different approach. We argue that while mitigation and adaptation are both important responses to CC, supporting mitigation and adaptation actions requires different types of information, which incur different epistemic and value-theoretic problems. Mitigation action requires information from climate change science at the global scale, but adaptation requires climatic information at local spatial scales and long temporal scales. If this information need is taken to involve only considerations about climate models (as is mostly done for deriving information to support mitigation), the high degrees of uncertainty tied to information for adaptation can hinder effective adaptation decision making (Dessai et al. 2009). These considerations have prompted physical climate scientists and environmental social scientists to develop new approaches to developing information for adaptation. We first highlight some epistemic issues that have been raised in developing information for CC adaptation in the same way as for CC mitigation (e.g. Dessai and Hulme 2004, Wilby and Dessai, 2010). We then show that producing CC information requires taking elements that go beyond the physics of climate change into account, such as local vulnerabilities and decision makers' attitudes towards risk (Adger et al. 2009, Shepherd and Lloyd 2021). Finally, we illustrate a new methodology that addresses this need (Goulart et al. 2021, Ciullo et al. 2021) and identify what novel epistemic and value theoretic issues this methodology entails. On the Attribution of Extreme Weather Events to Climate Change 09:00AM - 11:45AM
Presented by :
Michael Mann, Penn State University Climate change policy—including matters involving both mitigation and adaptation—is informed by assessments of the risks and damages caused by climate change. Such assessments, in turn, depend on our ability to attribute specific impacts to climate change. Of particular interest is the exacerbating effect climate change is having on damaging, costly and potentially deadly extreme weather events, including heat waves, droughts, floods, storms and tornado outbreaks. A vigorous debate has arisen among researchers regarding best practices for attributing such events to climate change. This debate hinges on issues that are both scientific and philosophical in nature. Among the complicating considerations are (a) the differing levels of confidence in “thermodynamic” (direct effects of warming) and “dynamical” (indirect effects related to changes in atmospheric circulation and stability) influences, (b) the relative merit of “storyline” vs. probability-based approaches, and in the latter case (c) alternative preferences for frequentist vs. Bayesian approaches to statistical inference. Last, but not least, are (d) the limitations of climate model-based attribution approaches in capturing subtle, real-world linkages between climate change and extreme weather events that are not well resolved in current-generation climate models. I will review the current state of play in this debate, discussing some of my own research contributions in these areas. | ||
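As generic background to the probability-based approaches mentioned in (b), and not as a summary of the speaker's own framework, one widely used metric in probabilistic event attribution is the fraction of attributable risk:
\[
\mathrm{FAR} = 1 - \frac{P_0}{P_1},
\]
where $P_1$ is the probability of exceeding a given event threshold in the factual (anthropogenically forced) climate and $P_0$ is the corresponding probability in a counterfactual climate without that forcing. For example, if $P_0 = 0.01$ and $P_1 = 0.04$ per year, then $\mathrm{FAR} = 1 - 0.01/0.04 = 0.75$, i.e., three quarters of the risk of such an event is attributed to the forcing. Frequentist and Bayesian approaches differ in how $P_0$, $P_1$, and the uncertainty in $\mathrm{FAR}$ are estimated from model ensembles and observations.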
09:00AM - 11:45AM Board Room | Revisiting Morgan’s Canon: Gradualism, Anthropomorphism, and Non-Human Cognition Speakers
Bendik Hellem Aaby, KU Leuven
Grant Ramsey, KU Leuven
Charles Beasley, London School Of Economics
Rachael Brown, Senior Lecturer, Australian National University
Marta Halina, University Of Cambridge
Moderators
Adrian Currie, University Of Exeter Morgan's canon is one of the most influential methodological principles in comparative psychology. It states that one should always abstain from explaining animal behavior with reference to any "higher" psychological faculties than absolutely necessary. In the contemporary literature, this principle is interpreted in terms of simplicity or parsimony. However, its original intent was not to advocate for simple explanations, but to avoid them. It was meant as an antidote to the anthropomorphic explanations of anecdotal cognitivism. The simplest explanation of why a dog avoids eye contact after you caught it chewing your shoe would be, according to Morgan, that it "knew that he shouldn't have done so and when caught felt ashamed." Morgan insisted that we should instead seek more complicated explanations that do not invoke human psychological characteristics. In this symposium, we revisit Morgan's Canon and the related debates over anthropomorphism and anthropodenial. The talks grapple with the mental continuity thesis and the question of gradualism in accounts of the evolution of mind. They consider how best to understand the notion of parsimony at play in Morgan's Canon, what it is to explain or understand minds in non-humans, and what role folk psychology should play in such explanations. Gradualism as a Constraint on Theorising in Comparative and Evolutionary Psychology 09:00AM - 11:45AM
Presented by :
Rachael Brown, Senior Lecturer, Australian National University In From Signal to Symbol (2021, p. x) Ron Planer and Kim Sterelny argue that any “adequate” theory of language evolution “must identify a plausible trajectory from great-apelike communicative abilities to those of modern humans where each step along the way is small, cumulative and adaptive (or at least not maladaptive: there might be some role for drift)”. They are not alone in invoking such a constraint. Gradualism is cited as an important assumption amongst those concerned with the evolution of cognition and the nature of animal minds going back to Darwin’s mental continuity thesis (Darwin 1871 [2013]). Gradualism is often invoked by scholars in pushing back against the anthropocentric allure of human uniqueness, the idea being that the postulation of the evolution of entirely novel cognitive capacities in our lineage alone is evolutionarily implausible. Indeed, Planer and Sterelny call the capacities such theories postulate “miracles” (p. 213). In this paper, I explore the evolutionary justification for such claims in comparative and evolutionary psychology in light of work on gradualism in evolutionary developmental biology (evo-devo). I ask: are evolutionary trajectories made up of “small, cumulative and adaptive” steps indeed more evolutionarily plausible than those that postulate entirely novel cognitive capacities within lineages? If so, why? One reason one might question the gradualist assumption (or at least suggest it needs to be applied with more care) comes from evidence that, although change at the genetic level is typically gradual, gradual genetic evolution is not always associated with gradual phenotypic evolution (Moczek 2008). As understanding of the relationship between genes and phenotypes in development has grown, so too has an appreciation of the important role played by neutral evolution and other processes in evolution. At least from the perspective of evo-devo, these developmental processes undermine any bald gradualist assumption based on the gradualism of micro-evolution — even if genetic evolution is gradual, one cannot assume that phenotypic evolution will be. Another justification for gradualism lies in the randomness of variation. It is much more likely for large random phenotypic changes to be deleterious than small ones. Given this, we expect that most large phenotypic shifts will fail to persist and propagate in populations (Calcott 2011). Again, here, work in evo-devo on plasticity and other mechanisms of adaptation suggests that there are ways that developmental systems have evolved to make large adaptive shifts in phenotype possible (Moczek 2008) and undermines any bald gradualist assumption. This article explores these and other justifications for a gradualist assumption in comparative and evolutionary psychology. Ultimately, I offer a novel account of gradualism as a constraint on theorising in comparative and evolutionary psychology which better reflects contemporary evolutionary developmental biology. References Calcott, B. 2011. Wimsatt and the Robustness Family: Review of Wimsatt’s Re-engineering Philosophy for Limited Beings. Biology and Philosophy. 26:281-293. https://doi.org/10.1007/s10539-010-9202-x Darwin, C. 1871. The Descent of Man and Selection in Relation to Sex. Wordsworth Editions Limited (2013), Hertfordshire, UK. Moczek, A. P. 2008. On the Origins of Novelty in Development and Evolution. BioEssays. 30:432-447. https://doi.org/10.1002/bies.20754 Planer, R. J. 
and Sterelny, K. 2021. From Signal to Symbol: The Evolution of Language. The MIT Press. The Role of Folk Psychology in Scientific Understanding 09:00AM - 11:45AM
Presented by :
Marta Halina, University Of Cambridge The scientific status of folk psychology (FP) is a topic of ongoing debate (Hochstein 2017). One common criticism of the use of FP in the sciences is that FP accounts produce feelings of understanding when in fact they are poor guides to truth. For example, in the context of comparative psychology, Papineau and Heyes (2006) observe that it is “easy, perhaps irresistible” to interpret some experimental results in terms of FP (p. 188). Penn (2011) agrees, writing “there is no doubt, of course, that folk psychological explanations are ‘simpler for us’ to understand” (p. 259); however, “the job of comparative cognitive psychology was supposed to be to open up the black box of animal minds to functional and algorithmic specification—not simply reiterate the kinds of explanations the ‘folk’ use” (p. 259). Following Carl Hempel, many philosophers of science would agree that subjective feelings of understanding are poor guides to good explanation. According to this view, such feelings are at best epistemically irrelevant and at worst misleading (Trout 2002). Combined with the idea that FP is no more than a collection of platitudes about mental states and behaviour, the situation appears deeply problematic: platitudes lead to feelings of understanding, but such feelings are doing little more than tracking our common-sense knowledge, rather than providing insights into the workings of the mind. However, there are alternative ways of characterising both FP and the role of understanding in the sciences. First, FP has been described as a model, rather than a collection of platitudes (Godfrey-Smith 2005). Second, some argue that understanding is necessary for explanation (De Regt 2017). Under this latter view, understanding is not a mere ‘aha’ feeling but rather concerns the skills and judgments scientists employ when constructing explanations (what De Regt calls “pragmatic understanding”). Moreover, to explain a phenomenon is to fit it into a theoretical framework, and models are crucial mediators in this process. Applying these ideas, we can construct an alternative account of FP: it facilitates pragmatic understanding in the construction of explanations in psychology. In this paper, I advance and defend this account of the role of folk psychology in scientific practice. References De Regt, H.W. 2017. Understanding Scientific Understanding. Oxford University Press. Godfrey-Smith, P. 2005. Folk Psychology as a Model. Philosophers’ Imprint. 5(6):1-16. Hochstein, E. 2017. When does ‘Folk Psychology’ Count as Folk Psychological? The British Journal for the Philosophy of Science. 68(4):1125-1147. Papineau, D., and Heyes, C. 2006. Rational or Associative? Imitation in Japanese Quail. In S. Hurley & M. Nudds (eds.), Rational Animals? Oxford University Press, pp. 187-195. Penn, D. 2011. How Folk Psychology Ruined Comparative Psychology: And How Scrub Jays Can Save It. In R. Menzel and J. Fischer (eds.), Animal Thinking: Contemporary Issues in Comparative Cognition. MIT Press, pp. 253-266. Trout, J. D. 2002. Scientific Explanation and The Sense of Understanding. Philosophy of Science. 69(2):212-233. Adjudicating and Interpreting Morgan’s Canons 09:00AM - 11:45AM
Presented by :
Charles Beasley, London School Of Economics Morgan’s original canon (1894) was intended as a prophylactic against an anthropomorphic bias that he thought stemmed from the double inductive method, which explains seemingly identical behavior in animals and humans in terms of the same underlying causes. However, his defense of the method of variation, which introduces a bias of its own towards type-2 errors (Sober 2005), as well as the original canon’s reliance on vague terms like ‘higher’ and ‘lower’ psychical faculties, has been repeatedly reinterpreted and contested throughout the history of comparative psychology (e.g., Karin-D’Arcy 2005; Allen-Hermanson 2005; Meketa 2014; Heyes 2012; Fitzpatrick 2008). Despite these perennial disputes, Morgan’s canon has played a preeminent role in the methodology of comparative psychology, even if it is unclear what it precisely means or how exactly it should be used to direct research. This has frequently led to uncritical invocations of the canon to settle the task of theory selection when it is unsuited to do so. This talk is directed at ameliorating some systematic problems that pervade comparative psychology's use of the canon. I do this by developing a quantitative parsimony interpretation of Morgan’s canon, while simultaneously laying out the conditions for interpreting the evidential strength of individual invocations and showing how they can be evaluated against one another. On this interpretation, a theory or model is more parsimonious than another if the fact that there is less of a local relevant feature gives us reason to prefer it. A theory or model establishes its relevant simplicity against a context to which it is bound, where being bound to a context implies being committed to counting in a certain way (Sober 1994). This implies that there are multiple ways in which being simple in a certain way is relevant both within and across contexts (Okasha 2011, Kuhn 1962). For example, as Dacey (2016) highlights, the targets of parsimony claims and invocations of Morgan’s canon include processes, energetic demands, structures, and inputs. Given that such claims are local and multiple, I offer a way of evaluating incompatible claims based on their justificatory strength and their degree of underdetermination. Doing so offers a principled way to direct further research that can account for both empirical as well as extra-empirical concerns. On this interpretation, the original spirit of Morgan’s canon can be preserved insofar as invocations of it employ a more fine-grained interpretation of both the purported simplicity claim underlying it and the demands of the mental continuity thesis that gave rise to it. Moreover, this interpretation foregoes the pitfalls of so-called ‘default’ reasoning that are pervasive in the discipline, while addressing the problems of anthropomorphism and anthropodenialism, as well as uncritical invocations of the mental continuity thesis. References Allen-Hermanson, S. 2005. Morgan’s Canon Revisited. Philosophy of Science. 72(4):608-631. Dacey, M. 2016. The Varieties of Parsimony in Psychology. Mind & Language. 31(4):414-437. Fitzpatrick, S. 2008. Doing Away with Morgan’s Canon. Mind & Language. 23(2):224-246. Karin-D’Arcy, M. 2005. The Modern Role of Morgan’s Canon in Comparative Psychology. International Journal of Comparative Psychology. 18(3). Kuhn, T. S. [1962] 2012. The Structure of Scientific Revolutions: 50th Anniversary Edition. University of Chicago Press, Chicago, Il. Heyes, C. 2012. 
New Thinking: The Evolution of Human Cognition. Philosophical Transactions of the Royal Society B: Biological Sciences. 367(1599):2091-2096. Lloyd Morgan, C. 1894. An Introduction to Comparative Psychology. The Walter Scott Publishing Co., Ltd., London and Newcastle-on-Tyne. Meketa, I. 2014. A Critique of the Principle of Cognitive Simplicity in Comparative Cognition. Biology and Philosophy. 29(5):731-745. Okasha, S. 2011. Theory Choice and Social Choice: Kuhn versus Arrow. Mind. 120(477):83-115. Sober, E. 1994. From a Biological Point of View: Essays in Evolutionary Philosophy. Cambridge University Press, Cambridge, UK. Sober, E. 2005. Comparative Psychology Meets Evolutionary Biology. In L. Daston and G. Mitman (eds.), Thinking with Animals: New Perspectives on Anthropomorphism. Columbia University Press, New York, NY, pp. 85-99. Justifying the Mental Continuity Thesis: Morgan’s Canon and Homology 09:00AM - 11:45AM
Presented by :
Bendik Hellem Aaby, KU Leuven The mental continuity thesis is an assumption shared by many philosophers and scientists who study mind in nature. It states that the difference in mind between different creatures is one of degree, not kind. It is an attractive thesis, as it can serve as a basis for the application of evolutionary reasoning in studying other minds. The mental continuity thesis, however, is by no means self-evident. If it is to figure as a premise that warrants evolutionary approaches in the study of other minds, it needs to be substantiated. What reasons do we have for accepting that minds exist on a continuum? What are the relevant entities that secure the difference in degree and not kind? An uncritical admission of the mental continuity thesis has historically led to unfounded anthropomorphism (e.g., anecdotal cognitivism). Morgan’s canon, a methodological principle intended to ward off such anthropomorphism, states that explanations of animal behaviors should never invoke complex cognitive processes or mechanisms unless there is compelling independent evidence for doing so. Morgan’s canon thus seems to have an uneasy relationship with the mental continuity thesis. As Morgan’s canon urges us to avoid invoking human-based complex cognitive processes in our explanations of behavior in non-human organisms, how are we to reconcile this with the view that the only difference between the human mind and that of other creatures is one of degree, not kind? In other words, what counts as compelling evidence for invoking “human-like” cognitive processes that highlight a difference in degree and not kind in our explanations of non-human behavior? We suggest that the concept of homology may be a way to justify the use of seemingly anthropomorphic language and explanation in conceptualizing non-human minds and behavior. However, the scope of this justification is limited. Homology can only play a role in justifying the mental continuity thesis within a restricted taxonomic scope. Specifically, we argue that there is a “goldilocks” zone in which homologies can be optimally used to justify claims concerning mental continuity across species. | ||
09:00AM - 11:45AM Benedum | Epistemic Iteration, Validation and Model Evaluation in Astrochemistry Speakers
Marie Gueguen, Organizer, Institute Of Physics, University Of Rennes 1
Amélie Godard, PhD Student, University Of Rennes 1
François Lique, Co-author, University Of Rennes 1
Nora Boyd, Assistant Professor, Siena College
George Smith, Professor, Tufts University
Moderators
Hasok Chang, University Of Cambridge Iterative testing is essential to exploring complex phenomena, especially in computationally intensive fields, where no analytical solutions or reliable observations can guide the models' development. Hence, its success cannot be justified by reference to a correct solution, but only by its capacity to self-correct despite imperfect initial ingredients. How, then, can scientists safeguard this autonomy while ensuring its productive interplay with observation, guaranteeing that models remain informed by and evaluated against new empirical findings? We analyse how insights from astrochemistry, an emerging field with a steady influx of new observations and iteratively developed models, apply to other fields. Incremental Evidence and Least-Squares Curve-Fitting 09:00AM - 11:45AM
Presented by :
George Smith, Professor, Tufts University Two factors give rise to the need for an iterative approach to evidence: inexactness of measurement and the complexity of the phenomena being represented. As Duhem emphasized, the first of these alone entails any number of alternatives at a comparable level of agreement with any one representation. Duhem, and van Fraassen after him, contend further that any representation of a regularity can never amount to much more than a curve-fit insofar as the evidence for it cannot exclude any number of alternative disparate representations at a comparable level of agreement. The reliance on least-squares curve-fitting in many iterative approaches to evidence – especially in representations of orbital motions – seems on the surface to support this view. The question addressed here will be the extent to which iterative approaches that rely on least-squares methods can nevertheless achieve evidence that markedly restricts the range of alternative representations. Duds and Discrepancies: Suggestive Astrophysical Anomalies as Sites of Epistemic Progress 09:00AM - 11:45AM
Presented by :
Nora Boyd, Assistant Professor, Siena College An important insight of scholarship on epistemic iteration is that scientists have to start from somewhere, but without sufficient justification to determine which ‘somewhere’. The engine of epistemic iteration is supposed to help us see how the epistemology of science gets along anyway. By adopting some working assumptions to get the whole process up and going, scientists afford themselves material to work with and to improve upon. The applicability of the adopted view can be tested until it breaks down and invites refinement or replacement. Through many iterations, the content of scientific theory thus earns its epistemic credentials. Indeed, thinking about epistemic iteration reveals the important roles of anomalies for epistemic progress. It is by probing the places where a theory seems to crack that scientists can often make headway. I will compare two cases in which scientists adopt a framework to work within, and then probe the failures of that framework in the hope of making epistemic progress. The first case involves astrophysical modeling of core collapse supernovae, and serves as a relatively clean example of how probing failures can lead to epistemic advances. Astrophysicists initially adopted one-dimensional (that is, spherically symmetric) computational models of supernovae, largely due to the limitations of computing power available at the time. These model supernovae would not evolve explosions—they were consistent duds. Thanks to some key empirical evidence demonstrating the asymmetry of debris (the isotope titanium-44) ejected from a supernova (1987A), modelers realized that asymmetry (and thus, models in more than one dimension) is needed for successful explosions. This insight led to better and more accurate models of core-collapse supernovae. The second case concerns the abundance of primordial lithium in our universe—the so-called ‘lithium problem’—and is less clean. It is less clean in part because it is ongoing, but for that very reason it is a fertile testing ground for philosophy of science. The ‘lithium problem’ is the discrepancy between the abundance of lithium predicted by modeling big bang nucleosynthesis and that implied by empirical evidence. It arises from adopting the standard model of cosmology together with nuclear physics, and has been noted for decades. The discrepancy is not subtle—it amounts to something like a 4-5 sigma difference. Interestingly, the approach that many scientists have taken in an attempt to resolve this discrepancy has been to connect it up to other outstanding problems in cosmology and physics including the Hubble parameter discrepancy, the nature of dark matter, and open questions about the lifetime of the neutron. Drawing on these two cases, I discuss the connections between epistemic iteration and scientific research strategies such as gathering new empirical evidence and attempting to unify anomalies. The source of uncertainties in astrochemical modelling 09:00AM - 11:45AM
Presented by :
François Lique, Co-author, University Of Rennes 1 Astrochemistry is “the study of the formation, destruction and excitation of molecules in astronomical environments and their influence on the structure, dynamics and evolution of astronomical objects”, as stated by Alexander Dalgarno, the pioneer of this field. Astrochemistry comprises observations, theory and experiments aimed at interpreting molecular emission patterns in space. It is, by definition, a multidisciplinary field of research where the sources of uncertainty are numerous. In this talk, I will present the scientific procedure in astrochemistry that goes from the calculation and/or the measurement of molecular data to the determination of the physical conditions prevailing in astrophysical media. For each step, I will detail the possible sources of uncertainty and discuss the global impact of these uncertainties on our knowledge of the chemical complexity in space and of the formation of stars and planets. I will illustrate my presentation with recent examples demonstrating that reducing the uncertainties could lead to significant improvements in our knowledge of astrophysical media. Collisional excitation of interstellar molecules: methodology and uncertainties 09:00AM - 11:45AM
Presented by :
Amélie Godard, PhD Student, University Of Rennes 1 The astrophysical media are extremely difficult (if not impossible) to probe. Their chemical characterization can only be done through the analysis of the emission spectra registered by telescopes. Since the lines composing the spectra represent the emission energy between quantized and variously populated energy levels of a specific chemical species, it is often said that the emission spectra captured by telescopes contain the chemical signature of the media. Transitions between these levels can occur by absorption/emission of a photon or through a collisional energy transfer. In order to model the spectra, the efficiency of these processes needs to be evaluated. In astrochemistry, one of the challenging goals is thus to compute accurate collisional data such as the excitation rate coefficients induced by the dominant astrophysical species (He, H, H2, e−), from which astrophysicists can then deduce energy level populations and thus derive the abundance of species and model the spectra. The problem, however, is that an exact quantum calculation of these rates is not feasible in terms of computational time and memory. Therefore, obtaining realistic rate coefficients for unstudied molecular systems requires establishing a correct methodology, which will include multiple approximations from which several uncertainties of different natures will arise. The standard methodology for obtaining rate coefficients proceeds in two steps. First, a quantum chemistry method is used to span the interaction potential between the colliders and obtain the so-called potential energy surface (PES) of the system. Then, the dynamics of the nuclei are studied on the computed surface. Throughout the procedure, some validation can be done in order to ensure the validity of the rates: the PES can be validated through the computation of spectroscopic data based on this surface, which can be compared to experimental measurements. The collisional data from which rate coefficients arise can also be validated through the comparison between computed and measured pressure broadening coefficients. Such validation, however, implies that spectroscopic data are available for the molecular system under consideration, which is not always the case. This talk will start with a presentation of the methodology developed to obtain highly accurate rate coefficients. Then, its application to the CO2-He and C2S-He collisional systems will be presented. The CO2-He system has been well studied over the last decades, from both a theoretical and an experimental point of view, and can as such be used as a benchmark system for the validation of our methodology. Afterwards, I will show how I extrapolated this methodology to the C2S-He system, for which no rate coefficients exist. Indeed, the C2S molecule and its 13CCS and C13CS isotopologues have been detected in interstellar clouds. But the spectral lines of the 13C isotopologues have significant differences in their intensities when they should be similar given the current astrochemical models used by astronomers. Can we resolve this anomaly through the computation of accurate rate coefficients? The intensity difference could be explained away if the rate coefficients for the two isotopologues differ by the same factor as the spectroscopic lines. 
If the rate coefficients do not explain this difference, does this mean that the chemical model used to determine the respective abundances of these isotopologues is somewhat incorrect? Extending the loop: model evaluation and discordances in astrochemistry 09:00AM - 11:45AM
Presented by :
Marie Gueguen, Organizer, Institute Of Physics, University Of Rennes 1 In this talk, I present different strategies for evaluating the adequacy of a given model in contexts of high uncertainty, all based on the idea of iterative coherence testing, i.e., a methodology that anchors the improvement of the model’s accuracy in an iterative process of resolving the discrepancies between the model’s predictions and observations. Based on Le Quéré (2006), we consider three different phases of the development of a model and how iterative coherence testing, or “the closing the loop” methodology in G. Smith’s terms, can guide decisions for improving the model in each of these phases. I argue that iterative testing serves a different purpose and is connected to a different evaluation strategy at each phase of the development of a model, from the adoption of an initial model known to be imperfect to a mature model ready to be confronted with observations. I will illustrate the merits and pitfalls of such an account of iterative testing and model evaluation based on the example of the photochemical models of Titan’s atmosphere developed over the last two decades. Models of Titan’s atmosphere have roughly followed two very different paths: either they have started with a complex chemical model but simplified astrophysical constraints, or with a complex description of the astrophysical conditions but a simplified chemistry. Both, however, failed to reproduce observations, although the uncertainties attached to observations made the discordance not so worrying, until the observations of the Cassini-Huygens mission were released in 2007. The latter showed that the discrepancies were not dampened but, on the contrary, worsened by improved observations: the uncertainties attached to the models were shown to be higher than those related to the observations, rendering the comparison between observations and models impossible to interpret. This case has led to an impressive surge of creativity from the astrochemists, who have exploited this opportunity to develop innovative iterative tools designed to enrich and develop their photochemical networks to the point where a discordance between their chemical models and the Cassini observations could finally be significantly interpreted. | ||
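As a schematic of the comparison logic running through this session (a minimal sketch with made-up numbers, not the photochemical models themselves), a model-observation discordance is only interpretable when it is large relative to the combined model and observational uncertainties:

# Minimal sketch of the "closing the loop" comparison logic; all numbers are invented.
import math

def significant_discordance(model_value, model_sigma, obs_value, obs_sigma, k=2.0):
    """True if model and observation disagree by more than k combined standard deviations."""
    combined = math.sqrt(model_sigma**2 + obs_sigma**2)
    return abs(model_value - obs_value) > k * combined

# Large observational uncertainty: the discrepancy is "not so worrying".
print(significant_discordance(3.0, 1.0, 5.0, 1.5))   # False
# Improved observations but dominant model uncertainty: still hard to interpret.
print(significant_discordance(3.0, 2.5, 5.0, 0.2))   # False
# After iterative refinement shrinks the model uncertainty: the discordance becomes significant.
print(significant_discordance(3.0, 0.3, 5.0, 0.2))   # True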
09:00AM - 11:45AM Sterlings 2 | The “what,” “where,” and “how” of memory: Scientific discoveries and philosophical implications Speakers
David Colaço, Presenter, LMU Munich
Michael Levin, Speaker, Tufts University
Sarah Robins, University Of Kansas
Antonella Tramacere, University Of Bologna
Moderators
Alison Springle, Assistant Professor, University Of Oklahoma Department Of Philosophy The epigenetic engram and a developmental view of memory: Toward an integration of mechanisms with phenomenology 09:00AM - 11:45AM
Presented by :
Antonella Tramacere, University Of Bologna The Mendel of Memory? Richard Semon and the Possibility of Engram Pluralism 09:00AM - 11:45AM
Presented by :
Sarah Robins, University Of Kansas Morphogenesis via pattern memories: from basal cognition to regenerative medicine 09:00AM - 11:45AM
Presented by :
Michael Levin, Speaker, Tufts University Where memory resides: Is there a rivalry between molecular and synaptic models of memory? 09:00AM - 11:45AM
Presented by :
David Colaço, Presenter, LMU Munich | ||
09:00AM - 11:45AM Fort Pitt | Global Dimensions of Epistemic Diversity Speakers
Rob Wilson, Symposiast, University Of Western Australia
David Ludwig, Wageningen University & Research
Daniel Hikuroa, University Of Auckland
Emily Parke, University Of Auckland
Zinhle Mncube, Department Of History And Philosophy Of Science, University Of Cambridge
Moderators
Alison Wylie, Presidential Address, Symposium Chair, University Of British Columbia This symposium brings together global collaborative networks to offer new perspectives on epistemic diversity, decolonization and science. We focus in particular on aspects of the relationships between science and Indigenous and local knowledge. The importance of Indigenous and local knowledge in humankind's understanding of the natural world is increasingly recognized in some academic areas, notably environmental and climate sciences. In other areas there have been recent heated debates about the relative epistemic value of Indigenous expertise and science. These issues have received little attention in philosophy of science. This symposium seeks to change that by bringing a range of perspectives on these debates and related issues to the PSA, as part of a broader discussion of epistemic decolonization and science. We investigate the relationships between Indigenous and local knowledge and science through the lenses of local case studies and cross-cultural conceptual analyses. We offer new philosophical perspectives on epistemic diversity and science and the integration of diverse knowledge systems. In the process we aim to clarify and constructively advance discussions which, in some settings, have become polarized and unproductive. The Biopsychosocial Model and the Integration of African Traditional Medicine and Modern Biomedicine 09:00AM - 11:45AM
Presented by :
Zinhle Mncube, Department Of History And Philosophy Of Science, University Of Cambridge My aim in this talk is to consider the increasingly contested topic of how to integrate Indigenous expertise and science as it applies to the case of integrating African Traditional Medicine (ATM) and modern biomedicine in South Africa’s healthcare system. One prevalent narrative against integration is that there is an unbridgeable gap between ATM and biomedicine. Some biomedical practitioners forcefully argue that ATM lacks the scientific evidence base to substantiate its practice. Nyika (2007) argues that ATM should be rejected on ethical grounds: ATM is not guided by the Nuremberg Code; it is impossible for patients to consent to the ‘unknown’ (i.e., scientifically untested herbal mixtures, or diagnoses and treatment with ‘mystical’ overtones); and ATM is paternalistic. Indeed, the history of the relationship between traditional healers and biomedical practitioners in South Africa is defined by mistrust and conflict. Proponents of a project of integration rely on at least three reasons for the need to find common ground between ATM and biomedicine. The first reason is practical. Studies suggest that up to 80% of Black South Africans consult traditional healers. For many of these people, traditional healers are often their primary source of healthcare before biomedical practitioners. Integration, on this reason, is a response to the demand for ATM. The second reason is political. The claim is that recognizing the value of ATM is a matter of social justice and decolonising medicine. The third reason is epistemological – it is the recognition that ATM can contribute invaluable knowledge on disease and healing that modern biomedicine cannot. Building on this latter reason, in this talk, I show how ATM and modern biomedicine in South Africa can be integrated through an instrumentalist biopsychosocial model of disease. The biomedical model of disease focuses primarily on the biomedical causes of disease prevalence, susceptibility, and presentation. The biopsychosocial model sees disease as a complex interaction of biomedical, social, and psychological factors. This model grounds medical practice in terms of a theory of the patient as a whole person. I argue that this instrumentalist biopsychosocial model accounts for both the prospects and limits of integrating ATM and modern biomedicine. Limits include, for example, reproducing existing hierarchies of knowledge by disregarding the elements of ATM that do not meet Western evidence-based standards of medicine. Asking Better Questions about Indigenous Knowledge and Science 09:00AM - 11:45AM
Presented by :
Emily Parke, University Of Auckland
Daniel Hikuroa, University Of Auckland The relationship between Indigenous knowledge and science is a topic of increasing global discussion, especially regarding climate and environmental sciences. A lot of this discussion has centred around comparing or contrasting the two on a range of counts, such as epistemic merit, methodological overlap, or worthiness of inclusion in science classrooms. These discussions are not always clear about the precise meanings of ‘science’ and ‘Indigenous knowledge’ (used here as shorthand for a family of related terms) at stake. In this talk we offer a framework for greater conceptual clarity and care in these discussions. We develop our framework through the lens of the relationship between mātauranga Māori (Māori knowledge, culture, values and worldview) and science (Hikuroa 2017, Mercier and Jackson 2019). A current high-profile and heated debate in Aotearoa/New Zealand centers on questions such as “Is mātauranga science?” or “(How) are mātauranga and science compatible?”. This local example provides a basis for a broader message about discussions of Indigenous knowledge and science taking place elsewhere in the world. People are talking past each other in debates about mātauranga Māori and science. This is partly due to the topic’s emotional and political entrenchment, but especially, we argue, due to pervasive ambiguities and equivocation on the meaning of ‘science’. Instead of discussing “mātauranga Māori versus science” in the abstract, these discussions should zoom in on more particular questions. A range of questions are presently run together—about the nature and limits of science, epistemic value, how to allocate research funding, proposed school curriculum changes, and other issues—to the detriment of constructively addressing any of them. As a basis for greater conceptual clarity and care in these discussions, we propose thinking of the family of claims at stake in terms of three variables: (1) mātauranga Māori, (2) science, and (3) the specific nature of the relationship being argued for or denied. Participants in these discussions should unambiguously fill in the blanks for each variable. We articulate a range of ways to do so, spanning epistemic, ontological, methodological, and socially or politically normative understandings. We discuss a range of examples from the literature on mātauranga Māori and science, and Indigenous knowledge and science more broadly, illustrating the landscape of ideas at stake in this discussion. Using the resulting framework as a basis, we urge future participants in these discussions to disambiguate any claims about “Indigenous knowledge and (or versus) science”, specify which questions they are asking and addressing, and exercise more conceptual clarity. Hikuroa, D. (2017). Mātauranga Māori: The ūkaipō of knowledge in New Zealand. Journal of the Royal Society of New Zealand, 47(1): 5–10. Mercier, O. & Jackson, A.-M. (eds.) (2019). Mātauranga and Science. New Zealand Science Review, two-part special issue: 75(4) and 76(1–2). From Demarcation to Transdisciplinarity: Why Indigenous Expertise Matters in Philosophy of Science 09:00AM - 11:45AM
Presented by :
David Ludwig, Wageningen University & Research Indigenous expertise has become increasingly recognized in a wide range of academic fields including agricultural sciences, ecology, public health, and sustainability studies (Chilisa 2019, Kimmerer 2013). The recognition of Indigenous expertise in academia interacts with a wider shift in the science system towards transdisciplinary and participatory methods that respond to complex social-environmental crises from climate change to food security to infectious diseases (Ludwig et al. 2022). Epistemologically, successful interventions into complex social-environmental systems require diversity of academic and non-academic expertise. Politically, such interventions raise questions about the structure of the science-policy interface and its impact on local livelihoods. As philosophers are increasingly turning their attention to the role of science in responding to social-environmental crises, Indigenous expertise is emerging as a crucial but often still peripheral topic in philosophy of science (Koskinen and Rolin 2019, Kendig 2020, Ludwig 2017, Mncube 2021). This article outlines a positive and critical perspective on the encounter of Indigenous expertise and philosophy of science. We argue that philosophy of science provides intellectual tools for contributing to critically reflexive transdisciplinarity that recognizes the plurality of expertise without neglecting the various - e.g. methodological, ontological, political - tensions between heterogeneous knowledge systems. At the same time, we caution that philosophy of science has also been shaped by demarcation debates that risk misrepresenting transdisciplinary negotiations by focusing on the division between science and non-science rather than fruitful exchange between academic and non-academic forms of expertise. We therefore argue that Indigenous expertise constitutes a core concern for philosophy of science and for understanding the complexity of interventions into social-environmental systems at the interface of science and policy. Chilisa, B. (2019). Indigenous research methodologies. Sage Publications. Kendig, C. (2020). Ontology and values anchor indigenous and grey nomenclatures. Studies in History and Philosophy of Biological and Biomedical Sciences, 84: 101340. Kimmerer, R. W. (2013). Braiding sweetgrass: Indigenous wisdom, scientific knowledge and the teachings of plants. Milkweed Editions. Koskinen, I., & Rolin, K. (2019). Scientific/intellectual movements remedying epistemic injustice: the case of indigenous studies. Philosophy of Science, 86(5): 1052-1063. Ludwig, D. (2017). Indigenous and scientific kinds. British Journal for the Philosophy of Science, 68(1): 187-212. Ludwig, D., Boogaard, B., Macnaghten, P., & Leeuwis, C. (2022). The politics of knowledge in inclusive development and innovation. In press. Mncube, Z. (2021). On local medical traditions. In Global Epistemologies and Philosophies of Science (pp. 231-242). Routledge. Kinship and Some Anthropological Turns 09:00AM - 11:45AM
Presented by :
Rob Wilson, Symposiast, University Of Western Australia Kinship was a central topic within anthropology during its first 100 years—roughly 1870-1970—before being made more peripheral to the discipline through several internal critiques. Foremost amongst these were critiques with a political edge to them by David Schneider, who articulated the view that the study of kinship had committed the near original anthropological sin of ethnocentric projection, and by feminist anthropologists, who saw that same tradition as reifying gendered categories (Bamford 2019). In this talk, I will re-explore some of this history but will do so with more recent turns or theoretical trends in cultural anthropology and the study of kinship in mind. In particular, I will discuss contemporary kinship studies via a consideration of (a) the “ontological turn” associated with anthropologists as diverse as Eduardo Kohn (2013, 2015), Marilyn Strathern (2020), Eduardo Viveiros de Castro (1998), and Morton Axel Pedersen (Holbraad and Pedersen 2017); (b) the “decolonizing generation” (Allen and Jobson 2016; Jobson 2019) and its challenge to cultural anthropology’s racialized history; and (c) kinship beyond the human realm (Haraway 2008; Kirksey 2015; Clarke and Haraway 2018). I will give special attention here to the relationships between politics, metaphysics, and the human sciences. Allen, J. S. & Jobson, R. C. (2016). The decolonizing generation: (race and) theory in anthropology since the eighties. Current Anthropology 57(2): 129-148. Bamford, S. (2019). The Cambridge handbook of kinship. Cambridge University Press. Clarke, A. & Haraway, D. (eds.) (2018). Making kin, not population. Prickly Paradigm Press. Haraway, D. (2008). When species meet. University of Minnesota Press. Holbraad, M. & Pedersen, M. A. (2017). The ontological turn: an anthropological exposition. Cambridge University Press. Ingold, T. (2013). Anthropology beyond humanity. Suomen Antropologi / Journal of the Finnish Anthropological Society, 38(3): 15-23. Jobson, R. C. (2019). The case for letting anthropology burn: sociocultural anthropology in 2019. American Anthropologist, 122(2): 259-271. Kirksey, E. (2015). Species: a praxiological study. Journal of the Royal Anthropological Institute (N.S.) 21: 758-780. Kohn, E. (2015). Anthropology of ontologies. Annual Review of Anthropology 44: 311-27. Kohn, E. (2013). How forests think: toward an anthropology beyond the human. University of California Press. Strathern, M. (2020). Relations: an anthropological account. Duke University Press. Viveiros de Castro, E. (1998). Cosmological deixis and Amerindian perspectivism. Journal of the Royal Anthropological Institute 4(3): 469-88. | ||
09:00AM - 11:45AM Birmingham | Philosophical Perspectives on Cancer Biology and Medicine Speakers
Thomas Pradeu, Speaker, CNRS - University Of Bordeaux
Mael Lemoine, Co-author, CNRS - University Of Bordeaux
Anya Plutynski, Presenting Talk, Governing Board, Washington University In St. Louis
Benjamin Chin-Yee, Co-presenter, Department Of History And Philosophy Of Science, University Of Cambridge
Samir Okasha, Co-symposiast, Bristol
Lucie Laplane, Researcher, University Paris I Panthéon-Sorbonne
Moderators
Anne-Marie Gagné-Julien, Postdoctoral Fellow, McGill University Descriptive Summary: Participants in this symposium will bring diverse perspectives to philosophical issues arising out of cancer science and medicine. Speakers will discuss conceptual and epistemic issues arising in cancer research, such as how best to define cancer "drivers" and "actionable" mutations, whether and in what senses cancer is a process or product of multilevel selection, the clonal evolution model, and the role of comparative biology in cancer research. Should our approach to cancer not be anthropocentric? Lessons from comparative oncology 09:00AM - 11:45AM
Presented by :
Thomas Pradeu, Speaker, CNRS - University Of Bordeaux
Mael Lemoine, Co-author, CNRS - University Of Bordeaux Is cancer a natural kind? On the one hand, the question is whether we are right to split cancer into the categories we use. According to Plutynski ([2018]), cancer nosology yields “a multimodal and cross-cutting family of classificatory schemes” which seems to warrant “pluralist realism” about cancer. On the other hand, the question is whether our concept of ‘cancer’ simply lumps together facts according to human interests, which does not allow for useful generalizations. Cancer anthropocentrism is challenged by a rising approach in oncology, namely comparative oncology, which investigates cancer in all species (e.g., Aktipis et al. [2015]; Schiffman and Breen [2015]; Albuquerque et al. [2018]). According to its proponents, a major advantage of this approach is that it frees us from human practical biases and yields better generalizations about cancer (Aktipis [2020]), indeed even leading to a “universal theory of cancer biology” (Dujon et al. [2021]). What can we hope to learn from the nonanthropocentric approach of comparative oncology? Although comparative oncology can challenge our category of cancer, it is itself fraught with the problem of which phenomena should count as cancer. Some researchers are very inclusive and take any tumor to be cancer, including in invertebrates and plants; others prefer to limit the category to what can invade other tissues and metastasize. Criteria and extension of cancer are clearly interdependent, which often gives the impression of a certain arbitrariness in comparative oncology (what people find is a direct reflection of the definition of cancer they started with). To break this circle, we argue that it is better to embrace a provisional and heuristic anthropocentrism, which begins with hypothetical generalizations about human cancers. Human cancers may be special, but they are the best-known cancers. Precise hypotheses relative to these generalizations can then be tested in other species. We propose in particular to identify “comparative paradoxes”, i.e., claims about cancer that should hold in various species, but actually do not hold. For instance, Peto’s paradox is the puzzle of why large organisms don’t get cancer more often than small ones. The implicit generalization is that disordered cells should be proportional to the number of cells, given that cancer is caused by the accumulation of random mutations in individual cells, but this is precisely what observation contradicts (Abegglen et al. [2015]). Whenever this generalization does not hold and the explanation does not reside in the fact that some animals have evolved highly specific anticancer mechanisms, it has the potential to challenge our most general conceptions of cancer. In addition to Peto’s paradox, we will discuss two other paradoxes: the connection between cancer and longevity and the immunological control over cancer. Prominent authors in comparative oncology have argued that the main advantage of this approach is to reconceptualize cancer as a much broader phenomenon: cancer is best understood as a form of “cheating” (Aktipis et al. [2015]). Instead, we claim that its main advantage is to assess and revise our most entrenched convictions about how cancer works. Should cancer be viewed through the lens of social evolution theory? 09:00AM - 11:45AM
Presented by :
Samir Okasha, Co-symposiast, Bristol Cancer is often conceptualized in terms of selective conflict between cell and organism (Greaves 2015, Aktipis 2020). On this view, cancer involves a form of multi-level selection in which the cancerous cell phenotype is favored by selection at the cell level but opposed by selection at the organism level. Recently, Gardner (2015) and Shpak and Lu (2016) have argued that cancer is not a true case of multilevel selection, because cancer is an evolutionary dead-end. I argue that this “evolutionary dead-end” argument is powerful but not decisive. The clonal evolution model needs revision 09:00AM - 11:45AM
Presented by :
Lucie Laplane, Researcher, University Paris I Panthéon-Sorbonne
Alessandro Donada, Co-author, Institut Curie
Leila Perie, Co-author, Institut Curie Cancer cells keep accumulating alterations leading to a diversification through space and time. This diversity in the composition of cancer cells represents a major challenge for cancer treatment as it is difficult (if possible at all) to find a treatment that works on all the cancer cells. The clonal evolution model introduces evolutionary tools in oncology to make sense of this evolution of the cancer cells. In this model cancer cells are regrouped in clones—populations of cells that share a common identity (traditionally a common set of driver mutations) inherited from a common ancestor cell. This allows us to reconstruct the evolutionary history of the tumour (and metastases), and to track its evolution through time. This has, for example, allowed researchers to identify mutations involved in resistance to targeted therapies (e.g., EGFR T790M induces resistance to first-generation EGFR inhibitors). But reconstruction of the clonal evolution is far from an easy task, both pragmatically and conceptually. The conceptual issue can be easily grasped by just indicating that the two main characteristics of clones—genealogy and identity—pull in opposite directions. Genealogically speaking, all cancer cells have a common ancestor, the first transformed cell, so each cancer is one big clone. But cancer cells of a given cancer are all unique. Thus, regrouping cancer cells into clones requires making a choice with regard to which criterion to use. Traditionally, the choice is to regroup cells according to their driver mutations, as these are conceived as the only mutations that impact cells’ properties, and they are easily tractable including in clinics. We take issue with this choice. In this talk, we will first deconstruct the notion of clone in oncology, highlighting that it relies on the following dubious assumptions: (1) driver mutations can be distinguished from passengers; (2) driver mutations provide a good proxy of cancer cell phenotype; (3) intraclonal heterogeneity can be ignored. This will lead us to argue that the notion of clone must be revised. Second, we will argue in favour of a change in the understanding of clonal identity. Our suggestion is to regroup cells according to their similarities, distinguishing clonal (lineage-dependent) similarities from non-clonal (lineage-independent) similarities (e.g., similarities that are stochastic or induced by phenotypic plasticity). Both types of similarities can contribute to explaining cancer cells’ properties, such as response to treatment. But only the former can contribute to clonal evolution as whatever causes the similarity is inherited by descendant cells of that lineage. Third, we will explore the benefit of this conceptual turn. It opens new research programs on how to analyse the evolutionary dynamics in cancer cells, experimentally and computationally. We will show the first results of an original experimental set-up we have developed to focus on the inheritance of functional properties, with no prior assumption regarding what exactly causes the observed lineage-dependent similarities. “Driver” Genes, “Actionable” Mutations, and the Scope and Limits of AI in Cancer Medicine 09:00AM - 11:45AM
Presented by :
Anya Plutynski, Presenting Talk, Governing Board, Washington University In St. Louis
Benjamin Chin-Yee, Co-presenter, Department Of History And Philosophy Of Science, University Of Cambridge Cancer researchers and clinicians speak of both cancer “drivers” and “actionable” mutations. In this paper, we explore how these two concepts overlap and how they differ. Cases like the BCR-ABL1 gene fusion found in people with chronic myelogenous leukemia (CML) have served as exemplars in clinical teaching and research about the value of cancer genomics for cancer diagnosis and treatment, but we argue that there are good reasons to think that the CML case is exceptional. With the completion of the cancer genome atlas project (TCGA) there is a growing realization that there are many more “drivers” than anticipated, placing an ever-larger wedge between the notion of “drivers” and “actionable” genes, in ways that have shifted the conversation about the relevance of cancer genomic data to diagnosis, prognosis and treatment. Clinicians now require a more fine-grained, contextual, and hierarchical ranking of significant variants for cancer diagnosis and treatment. We document here the shifts in the presuppositions driving the use of AI and genomic data in cancer diagnosis. We delineate the different ways that variants can be used in clinical activities and explain how this maps on to the distinction between "actionable" vs. "driver" mutations. For instance, the “driver” concept initially emerged in cases where molecular features of particular cancers were well-characterized, such as CML. In this case, a specific mutation provided important clinical information. However, the concept has since expanded to cover a broader set of genes found to be recurrently mutated in specific cancers using “Big Data” and AI approaches. Identification of “driver” mutations in this manner led to the splitting off of “driver” from the concept of “actionable” mutations. The latter refers to a subset of mutations which serve as biomarkers for particular treatments. While these concepts overlap in certain cancers, in others, it is crucial to keep them distinct. For instance, in molecularly heterogenous diseases, such as myelodysplastic syndromes (MDS) and acute myeloid leukemia (AML), it is very important to not conflate them. Although genetic risk stratification often guides treatment decisions, variants in such models are not “actionable” in the sense of being specific treatment targets. This debate over how to demarcate “drivers” versus “actionable” mutations is tied to a larger debate about the proper role of AI in biomedicine. The use of AI to identify “driver” genes does a dual service: on the one hand, it provides at best correlative, predictive information; on the other, it also indicates a potential causal role. The concept of “actionable” mutations attempts to move beyond the correlative. In this way, the trajectory of cancer research aims to move from identifying “drivers” to distinguishing “actionable” mutations. AI approaches may tell us little as yet about the specific causal role they play, or whether we might expect to successfully intervene on their downstream products or associated pathways, raising questions regarding the scope and limits of these methods in translational cancer research. | ||
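The implicit generalization behind Peto's paradox, as described in the comparative oncology talk above, can be put in back-of-envelope form. The following is a toy sketch with illustrative notation and assumptions, not the speakers' own formalism: suppose each of an organism's N susceptible cells independently becomes malignant over a lifetime with some small probability u.
% Toy per-cell model (illustrative assumptions; u itself grows with lifespan,
% division number, and the per-division driver-mutation rate):
P(\text{at least one cancer}) \;=\; 1 - (1 - u)^{N} \;\approx\; N u \qquad (N u \ll 1)
% On this naive reading, lifetime incidence should scale with body size (through N)
% and lifespan (through u); the comparative data on large, long-lived species that
% fail to show such scaling are what make Peto's observation a paradox.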
10:00AM - 10:15AM Virtual Room | Coffee Break | ||
10:00AM - 11:45AM Forbes | Meet the Journal Editor - Philosophy of Science | ||
11:45AM - 01:15PM Virtual Room | Lunch (Interest Groups) Interest Group Lunch - Please note that these lunches are not subsidized by the PSA and do require prior registration to attend. Open Access publication in philosophy of science. Hosts: David Teira, Zvi Biener (Phil-Sci Archive), Jon Fuller (Phil of Medicine), Sabina Leonelli (Open Science studies), Bryan Roberts (BSPS Open). Location: Vallozzi's Pittsburgh. Capacity: 12 | ||
11:45AM - 01:15PM Kings 5 | Panel Discussion: Departmental Climate & Community There can be many contributing factors to climate toxicity in a departmental workplace; however, we intend to focus on constructive solutions to some systemic issues concerning how to build inclusiveness and equity to create a sense of belonging for all. Magda Bogacz, Assistant Professor of Leadership and Ethics, Air University; Matt Haber, University of Utah (from the department that reached and maintained an inclusive climate and gender parity); Sarah Roe, Associate Professor of History and Philosophy, Southern Connecticut State University (panelist and moderator); Alison McConwell, Assistant Professor of Philosophy, UMass-Lowell; Char Brecevic, Assistant Professor of Philosophy, Seattle University; Jingyi Wu, Ph.D. Candidate, UC Irvine | ||
01:15PM - 03:15PM Sterlings 1 | Empirical Quantum Gravity? Speakers
Christian Wuthrich, Symposiast, University Of Geneva
Nick Huggett, University Of Illinois At Chicago
Mike Schneider, University Of Missouri
Leïla Haegel, University Of Paris
David Wallace, University Of Pittsburgh
Moderators
Mike Schneider, University Of Missouri Longstanding common lore in fundamental physics insists that research on the problem of developing a high-energy theory of quantum gravity (QG) is almost certainly a topic for the theoretician alone. Discriminating signatures of QG in data are just too difficult to come by, whether by means of experimentation (in the context of high energy physics) or of direct detection (in the context of astrophysics and cosmology). Philosophers of physics engaged with the problem of QG have typically endorsed this lore, focusing primarily on conceptual issues as have arisen in various theoretical approaches to the general research topic. Yet, counseling against the lore are several initiatives in recent decades on the empirical side of fundamental physics research, which have garnered considerable attention and enthusiasm in the wider physics community. And so, there is a lacuna within existing philosophical engagement with the problem of QG. The purpose of this symposium is to help fill the gap: four talks will be given about four different empirical strategies that have been proposed for getting significant empirical traction on the problem of QG, with philosophical reflection on how much one might justly expect to learn about the theoretical problem from each of them. Multimessenger astrophysics as a probe of quantum gravity phenomenology 01:15PM - 03:15PM
Presented by :
Leïla Haegel, University Of Paris While it is known that our current theories of quantum interactions and gravitation cannot describe the Planck scale, the effective scale at which the unification of forces into a quantum theory of gravity occurs is yet unknown. Attempts to build new theories have recently been complemented by an intense program aimed at deriving a phenomenology that could be probed by current or future observations. Several proposals, including brane/extra dimensional theories, alternative theories of gravitation such as Horava-Lifshitz theory and emergent theories, have been found to possibly provide low-energy signatures that could be probed with astrophysical messengers as described in Addazi et al. (2022). The exploration of the universe has recently entered a new era thanks to the multi-messenger paradigm, characterised by a continuous increase in the quantity and quality of experimental data that is obtained by the detection of the various cosmic messengers from numerous origins. Photons, neutrinos, cosmic rays and gravitational waves give us information about their sources in the universe and the properties of the intergalactic medium, but also open up the possibility of searching for phenomenological signatures of quantum gravity. On the one hand, the most energetic events allow us to test our physical theories at energy regimes which are not directly accessible in accelerators; on the other hand, tiny effects in the propagation of very high energy particles could be amplified by cosmological distances. Notably, several models imply that a unified theory could break fundamental symmetries such as CPT or Lorentz invariance. Such a break from core assumptions of our theories leads to specific predictions that can be observed with gravitational radiation, electromagnetic signals and neutrino oscillations. This talk will cover the current experimental bounds on those phenomena as well as the progress that can be expected in the next few years. The largest quantum gravity phenomenon 01:15PM - 03:15PM
Presented by :
Mike Schneider, University Of Missouri For as long as astrophysicists have considered the large-scale structure of the cosmos, discoveries on the subject have been taken to provide critical empirical insights relevant to theorizing in fundamental physics. This close connection between the two subjects is most familiar in the context of the relativistic hot 'Big Bang' model of the expanding universe that first rose to prominence in the 1930s, and which later developed into the standard Lambda-CDM model familiar today. In that hot 'Big Bang' model, the developmental origins of large-scale cosmic structure present today in our observable universe are understood in terms of high-energy fundamental physical processes far beyond our reach. And in Lambda-CDM, this role for fundamental physics beyond our reach is supplemented by additional roles, via theorizing about a dark sector to be incorporated into future physics. But through history, competitor accounts of the large-scale structure of the cosmos have likewise included empirical upshots that were to be taken as clues about new high-energy fundamental physics: witness the particle basis for the 'C-field' in Hoyle’s approach to Steady State cosmology in the 1950s, and even the earlier call in the 1930s to revise our fundamental understanding of classical gravitational dynamics within the context of Milne’s 'kinematic' model of the receding nebulae. In light of this consistent close connection between the empirical claims of large-scale cosmology and ongoing theorizing in fundamental physics, it is unsurprising that researchers interested in quantum gravity have often tried to mine the Lambda-CDM model for empirical clues that pertain specifically to their own domain of theorizing: looking for explicit cosmic imprints of quantum gravity. In this talk, after arguing in brief for the historical point above, I will then carefully pull apart (and critically assess) what I take to be two distinct recent programs of empirical QG research that fit the historical pattern. The first program treats the dynamics of Lambda-CDM as an autonomous structural theory of our present-day observable universe at the largest accessible scales (in terms of a nearby portion of the large-scale cosmos), to explore how detailed processes in quantum gravity might lead to 'trans-Planckian' physics in that same target system, which almost mimics familiar expectations about the large-scale cosmos derived from known fundamental physics. The second program treats Lambda-CDM, the model, as a low-energy effective description of a future model of quantum cosmology, so as to consider whether a sustained commitment to the descriptive accuracy of the former may ultimately be constraining on the topic of how to quantize gravity (i.e. in order to eventually construct the latter). I will conclude by noting a curious feature of the new 'trans-Planckian censorship' conjecture that has been advertised as a general principle governing possible low-energy physical descriptions in a universe where gravity is quantized: that the two empirical programs just pulled apart would seem, perhaps, to collapse into one --- albeit contingent on very particular speculations about what may come of our future understanding of the quantum nature of gravity within our present-day observable universe. Can Quantum Gravity be witnessed on the table top? 01:15PM - 03:15PM
Presented by :
Nick Huggett, University Of Illinois At Chicago It has long been thought that observing the effects of quantum gravity is effectively impossible, since gravity is so much weaker than other forces: consider, for instance, the utterly insensible gravitational attraction of a magnet, compared with the very sensible magnetic force it exerts. But by drawing on ideas from 'quantum information theory' (QIT), and on recent experimental advances in quantum mechanics and in observing tiny gravitational fields, Bose et al. (2017) and Marletto and Vedral (2017) have shown how, in principle, weak gravitational fields might have detectable quantum effects. This work has attracted a great deal of interest, as such 'BMV experiments' are tantalizingly close to current experimental physics; many experimentalists are working on the real possibility that the predicted effect could be measured in the next few years, even though the experiment would be one of the most delicate ever undertaken. But its significance would be a momentous advance in the study of quantum gravity. There are a number of conceptual issues to unpack regarding this proposal: in what sense would the BMV experiment count as an observation of the 'quantum nature' of gravity? First, there are theoretical issues: different, seemingly equally reasonable, theoretical commitments can lead to rather different understandings of the meaning and significance of the experiment. On the one hand, the BMV argument assumes that gravity should be modeled as a dynamical system. On the other, a gauge theoretic approach to quantizing gravity explains the effects in terms of a non-dynamical gauge constraint. Then, a further issue is the comparison with previous experiments which combined quantum and gravitational effects; what new information would we obtain from the BMV experiment, and in what senses would we gain greater practical control over quantum gravity? The philosophical issues thus concern the interpretation of physical theory, and the nature and role of experiment in science: both important topics within philosophy of physics and philosophy of science. Finally, QIT is in the first place a specific formulation of quantum mechanics, but it can involve further more specific assumptions. In the analysis of the BMV experiment in particular, one has to stipulate what it is for a system to be 'classical' rather than 'quantum'. Clearly this is an important, contentious, but undertheorized question in the philosophy of physics literature. This talk will outline the idea of the BMV experiment, and address the philosophical issues that it raises: (1) arguing that ultimately the meaning of the 'quantum nature' of a system is arbitrary to a certain extent; (2) explaining how the BMV experiment would both give an observation of the quantum nature of gravity in a deeper sense, and require more robust control over it, than previous experiments; (3) clarifying the stakes in the assumptions of the QIT theorem. Beyond the limits of analogue experiments 01:15PM - 03:15PM
Presented by :
Christian Wuthrich, Symposiast, University Of Geneva Analogue experiments have attracted interest for their potential to shed light on inaccessible domains. In 1981, Unruh found a striking mathematical analogy between the propagation of light waves near a black hole and the propagation of sound in fluids. In fact, a number of distinct such 'analogue' systems can be found, from hydrodynamical systems to Bose-Einstein condensates. The remarkable discovery of an analogy between black holes and 'dumb holes' in fluids ('swallowing' sound) or Bose-Einstein condensates has spawned a rich literature exploring the emerging field of 'analogue gravity'. Moreover, it has led to an active experimental field of studying such analogue systems in labs, culminating in the observation of analogue Hawking radiation in Bose-Einstein condensates. Analogue gravity thus naturally leads to the question of whether such analogue models can confirm the existence of (gravitational) Hawking radiation in astrophysical black holes. Can we learn anything about black holes from these analogue models? More generally, can analogue models confirm hypotheses regarding inaccessible target systems? While Dardashti et al. have argued that analogue gravity can indeed confirm gravitational Hawking radiation, Crowther et al. have criticized their argument as circular: in order to ascertain that the analogue model and black holes in fact fall into the same universality class, one must assume that black holes are adequately described by the modelling framework from which Hawking radiation is derived, but this is precisely what was to be confirmed. Analogue experiments whose target systems are inaccessible in some epistemically relevant sense generally suffer from this weakness: they must assume the physical adequacy of the modelling framework in order to underwrite the analogy to the accessible lab system when it is often precisely this adequacy which is at stake. Recently, Evans and Thébault (and to some degree Field) have equated concerns about this circularity with general inductive scepticism. Extrapolating from lab experiments on fluids to inaccessible black holes is, according to them, no different in principle from inferring from today's experimental results to those of tomorrow. In support of this claim, they enlist the example of stellar nucleosynthesis, i.e., of reactions transforming hydrogen to helium in the interior of main sequence stars such as our sun. These reactions are unmanipulable and (photonically) inaccessible to us; nevertheless, astrophysical observations and terrestrial experiments in nuclear physics largely confirm the theory of stellar nucleosynthesis. This argument raises the subtle and rich question of under what circumstances we can think of two systems as being of the same or of a different 'type', i.e., under what conditions experiments on one system can be considered confirmatory of a theory on another system. This talk will show that this question is relevantly different from Hume's general inductive scepticism, particularly if it concerns inferences to inaccessible target systems. When we make inferences from experimental results of analogue systems to inaccessible target systems, we require the substantial assumption that our experimental and target systems are of the same type. It is for this assumption that we need independent support. Quantum gravity at low energies and high 01:15PM - 03:15PM
Presented by :
David Wallace, University Of Pittsburgh Although quantum gravity is often described as empirically inaccessible, in fact astrophysics and cosmology teem with situations in which both gravitational and quantum-mechanical effects are relevant, and so we have abundant observational constraints on quantum gravity at energy levels low compared to the Planck scale. That evidence supports (to a variable degree) the description of quantum gravity as an effective field theory version of general relativity, breaking down at Planckian energies. I briefly present this theory, review its empirical support and its problems (including the cosmological constant problem) and consider the prospects for gaining empirical evidence for quantum gravity beyond the low-energy regime. | ||
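The propagation effect described in the multimessenger talk above is usually parametrized, at leading order and in the linear case, by a modified photon dispersion relation. The schematic form below neglects cosmological expansion and treats E_QG as the unknown quantum-gravity scale; it is a standard illustration of the kind of signature being searched for, not a prediction of any particular theory.
% Phenomenological dispersion relation (linear case; sign and scale are model-dependent):
E^{2} \;\simeq\; p^{2}c^{2}\left(1 \pm \frac{E}{E_{\mathrm{QG}}}\right)
% Leading-order arrival-time difference for two photons of energies E_h > E_l emitted
% together and travelling a distance D (cosmological expansion neglected):
\Delta t \;\approx\; \frac{E_h - E_l}{E_{\mathrm{QG}}}\,\frac{D}{c}
% A Planck-suppressed effect per wavelength accumulates over gigaparsec baselines,
% which is why gamma-ray bursts and other transients can bound E_QG.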
01:15PM - 03:15PM Board Room | The Origins of Belief Polarization Speakers
David Freeborn, University Of California, Irvine
Thomas Kelly, Princeton University
Kevin Dorst, University Of Pittsburgh
Jiin Jung, New York University
Moderators
Haixin Dang, University Of Nebraska Omaha Belief polarization occurs when individuals diverge in their beliefs about some hypothesis when updating on certain kinds of evidence. It is a persistent feature in society, with important ramifications for scientific, political and cultural discourse. Conventionally, belief polarization has often been treated as a consequence of irrationality. However, a spate of recent work in philosophy, psychology, and cognitive science has tried to better understand its causes. A number of authors in fields such as psychology and cognitive science, economics and philosophy have claimed that belief polarization arises even in rational agents, updating on the same evidence. Efforts to study belief polarization, its causes and consequences, have utilized a number of very different assumptions about the types of agents, their boundedness, the logical and probabilistic relationships between their beliefs, and their epistemic relationships with other agents. This symposium will address the origins of belief polarization, and consider whether the phenomenon is compatible with Bayesian rationality. This interdisciplinary symposium serves to connect several active research programs investigating the phenomenon of belief polarization from different perspectives in order to better understand the origins of belief polarization. It will present the state of the art of the literature. Furthermore, it will serve to foster intellectual progress in the field by connecting scholars from a range of diverse and active research programs and backgrounds, including social epistemology, social and political philosophy, and cognitive science. Emergent Patterns of Collective Beliefs: Modeling Individual Belief Dynamics and Social Network Structures 01:15PM - 03:15PM
Presented by :
Jiin Jung, New York University This study investigates how belief dynamics and social network structures generate different patterns of social change and diversity. The two belief dynamics studied here are indirect minority influence and random drift; the former is parameterized by a leniency threshold (λ) and the latter by an error rate (ε). The patterns of social change are examined in terms of magnitude, speed, and frequency. Diversity and polarization are examined in terms of global belief variation (inverse Simpson index) and local neighborhood difference (Hamming distance). Key findings are that indirect minority influence robustly produces gradual, small, yet frequent social change across various network structures. However, random drift produces rapid, punctuated social change, especially in a society with high connectivity such as complete, scale-free, or random networks, but gradual changes in lattice or small world networks. When a society has a modular community structure, indirect minority influence generates a diversity regime whereas random drift generates a polarized regime. Finally, distinct tipping points for social change were identified in different network structures. Polarization is not (standard) Bayesian 01:15PM - 03:15PM
Presented by :
Kevin Dorst, University Of Pittsburgh Belief polarization is the tendency for individuals with opposing beliefs to predictably disagree more upon being exposed to certain types of evidence. A variety of recent papers have argued that many of the core empirical results surrounding this effect are consistent with standard Bayesian or approximately-Bayesian theories of rationality. I argue that this is wrong. While there are some types of predictable polarization that are consistent with standard Bayesian models, the core of the phenomenon is not. This core is the fact that when individuals face polarizing processes (such as exposure to mixed evidence or like-minded discussion), they can predict their own polarization. In any standard Bayesian model, Reflection is a theorem. Thus no standard-Bayesian model can explain the type of (Reflection-violating) predictable polarization we observe. The culprit in this result is not probabilism, but the assumption that updates occur by conditioning on partitional evidence. I show that—given the value of evidence as a constraint on rational updating—this partitionality assumption is equivalent to the requirement that rational credences are always introspective, i.e. that when it’s rational to have a given probability, it’s rational to be certain that it’s rational to have that probability. I suggest, therefore, that if Bayesian accounts of polarization are to succeed, they must do so by rejecting the assumption of rational introspection. Belief Polarization, Group Polarization, and Bias 01:15PM - 03:15PM
Presented by :
Thomas Kelly, Princeton University Belief polarization occurs when individuals with opposing initial beliefs strengthen their beliefs in response to the same evidence. In previous work (“Disagreement, Dogmatism, and Belief Polarization,” Journal of Philosophy 2008), I explored the hypothesis that the psychological mechanisms that give rise to belief polarization are rational ones, given what was then the best available account of those mechanisms provided by psychologists who had documented the phenomenon. In this talk I will further explore questions about the rationality of belief polarization in the light of the latest work in psychology, philosophy, and other disciplines. Particular attention will be devoted to questions about whether the reasoning that gives rise to belief polarization is biased reasoning, in an objectionable sense of “biased.” Such questions seem especially pressing given that, as is sometimes noted, even exemplary reasoning and paradigmatic episodes of knowledge acquisition are naturally described as involving certain epistemically innocuous or even beneficial biases. (Consider, for example, the ways in which vision scientists refer to the “biases” of our perceptual systems, without which perceptual knowledge would be impossible; or the ways in which cognitive scientists and philosophers seeking to understand exemplary inductive reasoning routinely speak of our “inductive biases.”) Finally, I consider the role that the mechanisms that give rise to belief polarization play in contexts of group polarization. In cases of group polarization, groups of like-minded individuals become increasingly extreme in their point of view as they share their opinions with one another, and thus, ever more polarized from other like-minded groups who begin with different opinions. I argue that although sharing evidence across different groups would often be socially desirable and epistemically beneficial, given plausible empirical assumptions it will often be practically rational for individuals within the groups to pass up opportunities to do so. Belief polarization in agents with Bayesian Belief Networks 01:15PM - 03:15PM
Presented by :
David Freeborn, University Of California, Irvine Belief polarization occurs when the beliefs of agents diverge upon updating on certain types of evidence. Recent research indicates that belief polarization can arise even amongst rational agents (Jern et al.; Kelly 2008; O'Connor). Although the specific mechanisms differ, I distinguish two general origins of belief polarization. First is agent network-driven polarization (Axelrod 1997; Hegselmann and Krause; Macy 2003; Deffuant 2006; Baldassarri and Bearman; O'Connor), which arises due to the relationships between agents. With this form of polarization, epistemic influence between agents is determined by factors such as the similarity in prior beliefs. The second origin is belief-network driven polarization (Jern et al.; Kelly 2008), which arises due to the relations between different beliefs held by agents. I argue that a formalism involving epistemic networks of agents, each with Bayesian belief networks, allows us to represent both kinds of polarization in a unifying framework. I set out certain conditions under which each type of polarization can arise, in terms of the structure and relationships of the agents' epistemic and belief networks. | ||
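The two diversity measures named in the first talk of this session are standard and easy to state concretely. The following minimal Python sketch (made-up data and variable names, not the author's code) computes global belief variation as an inverse Simpson index and local neighborhood difference as a Hamming distance.

from collections import Counter

def inverse_simpson(beliefs):
    """Global belief variation: 1 / sum_i p_i^2 over the distribution of belief types."""
    counts = Counter(beliefs)
    total = len(beliefs)
    return 1.0 / sum((c / total) ** 2 for c in counts.values())

def hamming(belief_a, belief_b):
    """Local neighborhood difference: number of positions where two belief vectors differ."""
    return sum(x != y for x, y in zip(belief_a, belief_b))

# Illustrative population of categorical belief vectors
population = [(1, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 0)]
print(inverse_simpson(population))            # effective number of distinct belief types (~2.67 here)
print(hamming(population[0], population[2]))  # disagreement between two hypothetical neighbors (2 here)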
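One formal facet of the second talk's claim that Reflection is a theorem for standard conditioning is the familiar law-of-total-probability identity below, stated here in my own notation rather than the speaker's: the prior expectation of the posterior equals the prior, so an agent who updates by conditioning on an evidence partition {E_i} cannot predict the direction of her own belief change.
\mathbb{E}\big[P(H \mid E)\big] \;=\; \sum_i P(E_i)\,P(H \mid E_i) \;=\; \sum_i P(H \cap E_i) \;=\; P(H)
% Any process in which agents can foresee that they will end up more (or less) confident
% than they now are therefore cannot be modelled as conditioning on partitional evidence.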
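For the final talk, here is a minimal sketch (hypothetical numbers and names, not Freeborn's model) of belief-network-driven polarization: two Bayesian agents share a prior on a hypothesis H but differ over an auxiliary node R ("the source is reliable"), so conditioning on the same report E drives their credences in H apart.

def posterior_H(p_H, p_R, lik):
    """P(H | E) for a two-node network with H and R prior-independent.
    lik[(h, r)] = P(E | H=h, R=r)."""
    e_and_H = p_H * sum(p_R[r] * lik[(True, r)] for r in p_R)
    e_and_notH = (1 - p_H) * sum(p_R[r] * lik[(False, r)] for r in p_R)
    return e_and_H / (e_and_H + e_and_notH)

# A reliable source mostly reports E when H is true; an unreliable one mostly when H is false.
lik = {(True, "reliable"): 0.9, (False, "reliable"): 0.1,
       (True, "unreliable"): 0.1, (False, "unreliable"): 0.9}

trusting    = posterior_H(0.5, {"reliable": 0.9, "unreliable": 0.1}, lik)
distrusting = posterior_H(0.5, {"reliable": 0.1, "unreliable": 0.9}, lik)
print(round(trusting, 2), round(distrusting, 2))  # 0.82 0.18: same prior on H, same evidence, divergent posteriors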
01:15PM - 03:15PM Smithfield | Race, Science, and Race Science Speakers
Emilio Lobato, University Of California, Merced
Dan Hicks, University Of California, Merced
John Jackson, Michigan State University
M.A. Diamond-Hunter, London School Of Economics
Subrena Smith, University Of New Hampshire
Moderators
Alison McConwell, Assistant Professor, University Of Massachusetts, Lowell Efforts to rationalize racial injustice and colonialism by appealing to the epistemic authority of science - race science - have waxed and waned over the last several decades. Even when it is regarded as discredited or pseudoscientific, race science has been actively maintained on the fringes of mainstream scientific communities, and practitioners have shown remarkable ingenuity in appropriating cutting-edge research methods and organizational forms, including behavioral genetics in the 1960s, genomics in the 2000s, and open access publishing in the 2010s. This interdisciplinary symposium will apply techniques from across the history, philosophy, and social studies of science to offer critiques of the claims, methods, and organization of race science. The first two talks discuss race science in the second half of the twentieth century, from computational social science (Lobato and Hicks) and historical (Jackson) perspectives. The second two talks use more typical philosophy of science approaches, examining the appropriation of population biology by the "human biodiversity" movement (Diamond-Hunter) and the scientific standing of race science (Smith). Race: It’s Just Not Science 01:15PM - 03:15PM
Presented by :
Subrena Smith, University Of New Hampshire Race science recruits scientific work in the biological, behavioral, and social sciences in the service of legitimating the presupposition that there are biological races which map on to social racial systems. But the biological structure of human populations is not synonymous with any particular racial social order. Racial science is not science about biologically discrete populations. Such work serves to stamp social arrangements and outcomes with scientific respectability. In the public’s mind it is consequential that scientists have found correlations between certain kinds of aptitude and certain groups of people; or that they have found a disease with greater frequency amongst people with certain external physical characteristics; or that people from certain social classes endure lives of greater need. I aim to show in this talk that racial science is scientifically bankrupt. I hope to make obvious that, despite the gloss of science, such work has no standing. We should refrain from calling this scientific work. Population Biology and the implicit scientific backing of the “Human Biodiversity” movement 01:15PM - 03:15PM
Presented by :
M.A. Diamond-Hunter, London School Of Economics Directly after the release of Nicholas Wade’s A Troublesome Inheritance, population geneticists, biologists, and biomedical researchers wrote an open letter to the New York Times stating that “We reject Wade’s implication that our findings substantiate his guesswork.” Given their clear denunciation of Wade's book, it seems that the furor should be over. This paper, however, argues otherwise. This paper makes the argument that a number of population geneticists have done enough work in their own reputable academic publications over the last two decades to provide fertile ground and academic justification for repugnant and racist views. The Scientific Racism of Arthur Jensen 01:15PM - 03:15PM
Presented by :
John Jackson, Michigan State University Arthur Jensen (1923-2012) was one of the most prolific and well-cited psychologists of the twentieth century. We have two pictures of Arthur Jensen. The first is the meticulous and careful psychologist crowned “a king among men” by his colleagues. The second is the Jensen who repeatedly voiced eugenicist concerns about the genetic deterioration of society, lent his name to neo-Nazi organizations and figures, and published research with racial segregationists. I argue that there is only one Arthur Jensen. His political allies and affiliations with reactionary and racist figures are embedded in his psychological work. Mainstreaming Scientific Racism 01:15PM - 03:15PM
Presented by :
Dan Hicks, University Of California, Merced
Emilio Lobato, University Of California, Merced In the mid-twentieth century, as mainstream scientific opinion turned away from eugenics and the most explicit versions of race science, two organizations were formed to preserve and continue research in defense of white supremacy. The Pioneer Fund has supported and the journal Mankind Quarterly has published the work of researchers such as Hans Eysenck (University College London), Arthur Jensen (UC Berkeley), and J. Philippe Rushton (University of Western Ontario). In this talk we use text mining methods and Fernández Pinto's analysis of the "tobacco strategy" to examine the ways in which Pioneer and Mankind Quarterly legitimized race science. | ||
01:15PM - 03:15PM Sterlings 3 | Scientific Realism, Metaphysics, and Epistemic Stances Speakers
Amanda Bryant, Ryerson University
Anjan Chakravartty, University Of Miami
Kerry Mckenzie, Presenter , University Of California, San Diego
Christopher Pincock, Ohio State
Moderators
Jason DeWitt, Moderator, The Ohio State University An epistemic stance is an attitude or orientation of an agent that determines whether or not that agent's evidence justifies their claims to know. Stances have been thought to involve debatable policies, values, and aims that distinguish, for example, the scientific realist from the anti-realist. This symposium will consider how those sympathetic to scientific realism and scientific metaphysics should conceive of these stances. If a stance is necessary to defend one's claims to know about unobservable entities of various kinds, should the realist admit the possibility of other, non-realist stances? If so, how should a realist stance be motivated and defended? The four contributors to this symposium provide four different answers to these questions and their relevance to broader issues in the philosophy of science, including rationality, evidence, explanation, and mathematical structure. Defending the Realist Stance 01:15PM - 03:15PM
Presented by :
Christopher Pincock, Ohio State I argue that realism requires a stance, but that the realist should maintain that their stance is the only rationally permissible one. The basic motivation for maintaining that only a realist stance is rationally permissible is that being more open-minded induces a kind of pragmatic incoherence on the part of the realist (Psillos 2021). A realist cannot maintain their defense of realism while admitting that this defense requires adopting a policy that others are rationally permitted to ignore. For this is tantamount to admitting that they have no defense of their own realism. Structuralism as a Stance 01:15PM - 03:15PM
Presented by :
Kerry Mckenzie, Presenter , University Of California, San Diego I argue that considerations analogous to those van Fraassen raises in connection with physicalism support regarding ontic structural realism as a stance also. Like physicalists, structuralists prescind from defining structure too carefully, in large part because they want the notion of structure to be open to future scientific developments. And structuralists have also allowed the term ‘structure’ to come to cover aspects of the world they themselves previously presented as antithetical. For these reasons, I propose that rather than a doctrine about how the world fundamentally is, structuralism should be viewed as a kind of stance. Resolving Debates about Realism: The Challenge from Stances 01:15PM - 03:15PM
Presented by :
Anjan Chakravartty, University Of Miami Epistemic stances are collections of attitudes, values, aims, and policies relevant to assessing evidence, eventuating in belief or agnosticism in relation to scientific theories and models. If more than one stance is permissible, this would seem to undermine certain debates between scientific realists and antirealists. In reply to skepticism about this, I argue that: (1) hopes for a shared basis for assessing evidence to serve as a neutral arbiter either beg the question against one side of the debate, or are insufficiently probative; and (2) rejecting the superior rationality of stances supporting realism does not amount to skepticism about science. Epistemic Stances, Naturalization, and Naturalism 01:15PM - 03:15PM
Presented by :
Amanda Bryant, Ryerson University My aim in this paper will be to explore which deeper and more general epistemic stances underlie methodological naturalism. In particular, I aim to consider whether the same epistemic stance that underlies scientific realism must also underlie methodological naturalism. Since it is often assumed that realism is a prerequisite of methodological naturalism, one might think that they share an underlying stance in common. However, I will argue that this is not clearly the case. I will also consider whether methodological naturalism must stem from a distinctively scientistic stance. I will argue that this, too, is not clearly the case. | ||
01:15PM - 03:15PM Fort Pitt | UPSS Session Speakers
GE FANG, Washington University In St. Louis
Jennifer Whyte
Yosef Washington, University Of Pennsylvania
Chia-Hua Lin, Institute Of European And American Studies, Academia Sinica, Taiwan
Moderators
Gabrielle Kerbel, University Of Michigan - Ann Arbor A focus on cultural competences for understanding cumulative cultural evolution 01:15PM - 03:15PM
Presented by :
GE FANG, Washington University In St. Louis Mathematical SETIbacks: Open Texture in Mathematics as a new challenge for Messaging Extra-Terrestrial Intelligence 01:15PM - 03:15PM
Presented by :
Jennifer Whyte, Jennifer Whyte Two of One Kind: The Bio-Social Existence and Race 01:15PM - 03:15PM
Presented by :
Yosef Washington, University Of Pennsylvania Explaining the Success of Transdisciplinary Modeling 01:15PM - 03:15PM
Presented by :
Chia-Hua Lin, Institute Of European And American Studies, Academia Sinica, Taiwan | ||
01:15PM - 03:15PM Birmingham | Climate and Sustainability Speakers
Miles MacLeod, University Of Twente
Stuart Gluck, Adviser To The Director Of The Office Of Science, AAAS / Department Of Energy
Ryan O'Loughlin, Assistant Professor, Queens College CUNY
Corey Dethier, Presenter, Leibniz Universität Hannover
Moderators
Marina Baldissera Pacchetti, Research Fellow, University Of Leeds Rethinking social robustness: participatory modeling and values in sustainability science 01:15PM - 03:15PM
Presented by :
Miles MacLeod, University Of Twente Participatory modeling in sustainability science allows scientists to take stakeholders’ interests, knowledge and values into account when designing model-based solutions to sustainability problems, by incorporating stakeholders in the model-building process. This improves the chance of generating socially robust knowledge and consensus on solutions. Part of what helps in this regard is that scientists, through involving stakeholders, limit the influence of their own values on the outcome, thus achieving some level of value-neutrality. We argue that while it might achieve this to some extent, it comes at a cost to the reliability of the outcomes, which is ethically problematic. Robustness of Climate Models 01:15PM - 03:15PM
Presented by :
Stuart Gluck, Adviser To The Director Of The Office Of Science, AAAS / Department Of Energy Robustness of climate models is considered by philosophers of climate science to be a crucial issue in determining whether and to what extent the projections of the Earth’s future climate that models yield should be trusted—and in turn whether society should pursue policies to address mitigation of and adaptation to anthropogenic climate change. Parker (2011) and Lloyd (2009, 2015) have introduced influential accounts of robustness for climate models with seemingly conflicting conclusions. I argue that Parker and Lloyd are characterizing distinct notions of robustness and that confidence in the projections is warranted in virtue of confidence in the models. Competition and pluralism in climate modeling 01:15PM - 03:15PM
Presented by :
Ryan O'Loughlin, Assistant Professor, Queens College CUNY It has been argued that climate modeling can be partially characterized as exhibiting ontic competitive pluralism (i.e., that models compete for truth in some sense). I argue that (1) because climate models are all of the same model-type, they are not ontic competitors; instead (2) they compete in terms of local skill. Counterintuitively, locally poor performing models sometimes yield epistemic benefits for scientists, as demonstrated by the emergent constraints literature. Against “Possibilist” Interpretations of Climate Models 01:15PM - 03:15PM
Presented by :
Corey Dethier, Presenter, Leibniz Universität Hannover Climate scientists frequently employ heavily idealized models. How should these models be interpreted? Some philosophers have promoted a possibilist interpretation, where climate models stand in for possible scenarios that could occur, but don't provide information about how probable those scenarios are. The present paper argues that possibilism is undermotivated, incompatible with successful practices in the science, and liable to present a less accurate picture than probabilistic alternatives. There are good arguments to be had about how to interpret climate models, but our starting point should be that the models provide evidence relevant to the evaluation of hypotheses concerning the actual world. | ||
01:15PM - 03:15PM Duquesne | Simulation and modeling Speakers
Aki Lehtinen, Nankai University
Kevin Kadowaki, Washington University In St. Louis
Tim Elmo Feiten, University Of Cincinnati
Bert Baumgaertner, University Of Idaho
Moderators
Travis LaCroix, Dalhousie University Imagination and fiction in modelling; an epistemic critique 01:15PM - 03:15PM
Presented by :
Aki Lehtinen, Nankai University This paper criticises the Waltonian fiction view for providing a misleading account of the role of imagination in scientific modelling, and for failing to provide an adequate account of the epistemology of modelling. Imagination cannot be simultaneously constrained by the model descriptions and relevant for modelling epistemology. Given that the relevant inferences must be made in terms of publicly available model descriptions, and the laws and general principles must be included in the model descriptions, there can be no relevant role for the so-called indirect principles of generation in modelling epistemology. Simulation and Adequacy-for-Purpose 01:15PM - 03:15PM
Presented by :
Kevin Kadowaki, Washington University In St. Louis Large-scale numerical simulations are increasingly used for scientific investigation; however, given that they are often needed precisely because ordinary experimental and observational methods cannot be used, their epistemic justification is often in question. Drawing on the adequacy-for-purpose framework, I characterize the problem of model assessment under conditions of scarce empirical evidence. I argue that, while a single model may not suffice under these conditions, a suitable collection of models may be used in concert to advance a community's scientific understanding of a target phenomenon and provide a foundation for the progressive development of more adequate models. The Map/Territory Relationship in Game-Theoretic Modeling of Cultural Evolution 01:15PM - 03:15PM
Presented by :
Tim Elmo Feiten, University Of Cincinnati The cultural red king effect occurs when discriminatory bargaining practices emerge because of a disparity in learning speed between members of a minority and a majority. This effect has been shown to occur in some Nash Demand Game models and has been proposed as a tool for shedding light on the origins of sexist and racist discrimination in academic collaborations. This paper argues that none of the three main strategies used in the literature to support the epistemic value of these models—structural similarity, empirical confirmation, and how-possibly explanations—provides strong support for this modeling practice in its present form. Precedent and Interpersonal Convergence in the Method of Reflective Equilibrium 01:15PM - 03:15PM
Presented by :
Bert Baumgaertner, University Of Idaho We present a computational model of reflective equilibrium with precedent. Each agent considers a rule by which to accept or reject cases. Cases are represented as labeled binary strings: intuitive accept, intuitive reject, or no intuition. Rules are represented as a pair: a binary string and a tolerance threshold determining if a case is a close enough match to accept. Rule-updates are driven by intuitions about cases and precedents set by other agents. We compare four networks: empty, ring, 4-regular, and complete. Results suggest that increasing connectivity encourages, but doesn't guarantee, interpersonal convergence on a single reflective equilibrium. | ||
01:15PM - 03:15PM Forbes | Formal Epistemology Speakers
Peter Lewis, Dartmouth College
Veronica Vieland, Professor Emerita, The Ohio State University
Jonah Schupbach, University Of Utah
Marina Dubova, PhD Student, Indiana University
Moderators
Kenny Easwaran, Reviewer, Texas A&M Accuracy-first epistemology and scientific progress 01:15PM - 03:15PM
Presented by :
Peter Lewis, Dartmouth College The accuracy-first program attempts to ground epistemology in the norm that one’s beliefs should be as accurate as possible, where accuracy is measured using a scoring rule. We argue that considerations of scientific progress suggest that such a monism about epistemic value is untenable. In particular, we argue that counterexamples to the standard scoring rules are ubiquitous in the history of science, and hence that these scoring rules cannot be regarded as a precisification of our intuitive concept of epistemic value. Absolutely Zero Evidence 01:15PM - 03:15PM
Presented by :
Veronica Vieland, Professor Emerita, The Ohio State University Statistical analysis is often used to evaluate the strength of evidence for or against scientific hypotheses. Here we consider evidence measurement from the point of view of representational measurement theory, focusing in particular on the 0-points of measurement scales. We argue that a properly calibrated evidence measure will need to count up from absolute 0, in a sense to be defined, and that this 0-point is likely to be something other than what one might have expected. This suggests the need for a new theory of statistical evidence in the context of which calibrated evidence measurement becomes tractable. On the Logical Structure of Best Explanations 01:15PM - 03:15PM
Presented by :
Jonah Schupbach, University Of Utah Standard articulations of Inference to the Best Explanation (IBE) imply the uniqueness claim that exactly one explanation should be inferred in response to an explanandum. This claim has been challenged as being both too strong (sometimes agnosticism between candidate explanatory hypotheses seems the rational conclusion) and too weak (in cases where multiple hypotheses might sensibly be conjointly inferred). I propose a novel interpretation of IBE that retains the uniqueness claim while also allowing for agnostic and conjunctive conclusions. I then argue that a particular probabilistic explication of explanatory goodness helpfully guides us in navigating such options when using IBE. Against theory-motivated data collection in science 01:15PM - 03:15PM
Presented by :
Marina Dubova, PhD Student, Indiana University We study the epistemic success of different data collection strategies. We develop a computational multi-agent model of the scientific process that jointly formalizes its core aspects: data collection, data explanation, and social learning. We find that agents who choose new experiments at random develop the most accurate accounts of the world. On the other hand, the agents following the confirmation, falsification, crucial experimentation (theoretical disagreement), or novelty-motivated strategies end up with an illusion of epistemic success: they develop promising accounts for the data they collected, while completely misrepresenting the ground truth that they intended to learn about. | ||
01:15PM - 03:15PM Benedum | Causation and Explanation in Biology Speakers
Mel Fagan, University Of Utah
Celso Neto, University Of Exeter
Caleb Hazelwood, Speaker, Duke University
Jordan Dopkins, University Of California, Santa Cruz
Moderators
Viorel Pâslaru, Poster Presenter And Sessions Chair, University Of Dayton Immunology and human health: collaboration without convergence 01:15PM - 03:15PM
Presented by :
Mel Fagan, University Of Utah Immunology is a notoriously complex field with a distinct vocabulary and concepts. Yet immunologists regularly and effectively collaborate with other researchers, notably clinicians and experts in population health. How does this work? This paper proposes a multifaceted answer. Immunology exhibits three features that support collaborative research without a shared vocabulary and concepts: a multifaceted target of inquiry, therapeutic aspirations, and a clear interdisciplinary pathway. Building on these features, I sketch a general account of “low-effort interdisciplinarity” and connect this result to recent work on population health. I conclude by discussing the broader significance of low-effort interdisciplinarity. Scaffold: A Causal Concept for Evolutionary Explanations 01:15PM - 03:15PM
Presented by :
Celso Neto, University Of Exeter The concept of scaffold is widespread in science and increasingly common in evolutionary biology (Chiu and Gilbert 2015; Love and Wimsatt 2019; Black et al. 2020). While this concept figures in causal explanations, it is far from clear what scaffolds are and what role they play in those explanations (Charbonneau 2015). Here we present evolutionary scaffolding explanation as a distinct type of explanatory strategy, distinguishing it from other types of causal explanation in evolutionary biology. By doing so, we clarify the meaning of “scaffold” as a causal concept and its potential contribution to accounts of evolutionary novelty and major transitions. An Emerging Dilemma for Reciprocal Causation 01:15PM - 03:15PM
Presented by :
Caleb Hazelwood, Speaker, Duke University Reciprocal causation is the view that adaptive evolution is a bidirectional process, whereby organisms and environments impinge on each other through cycles of niche construction and natural selection. I argue, however, that reciprocal causation is incompatible with the recent view that natural selection is a metaphysically emergent causal process. The emergent character of selection places reciprocal causation on the horns of a dilemma, and neither horn can rescue the causal interdependency between selection and niche construction. Therefore, I conclude that proponents of reciprocal causation must abandon the claim that the process of natural selection features in cycles of reciprocal causation. Laboratories, Natural Environments, and the Distinction between Proximal and Distal Cues. 01:15PM - 03:15PM
Presented by :
Jordan Dopkins, University Of California, Santa Cruz Organisms use cues differently as they navigate their environments. One distinction researchers use to characterize differences between cues is the distinction between proximal and distal cues. The standard way of thinking about this distinction involves treating distal cues as beyond an experimental apparatus and proximal cues as within that apparatus. I argue that there is a problem with thinking about the distinction this way: there are no proximal or distal cues in natural environments, and so the cues are not explanatory of behaviors in those environments. I then recommend a new way of thinking about the distinction. | ||
01:15PM - 03:15PM Sterlings 2 | Mechanisms and Understanding Speakers
Armond Duwell, University Of Montana
Kalewold Kalewold, Stanford University
William Bechtel, University Of California, San Diego
Yoshinari Yoshida, Graduate Student, University Of Minnesota
Moderators
Derek Skillings, University Of North Carolina At Greensboro Mechanisms and Principles: Two Kinds of Scientific Generalization 01:15PM - 03:15PM
Presented by :
Yoshinari Yoshida, Graduate Student, University Of Minnesota Many philosophers have explored the extensive use of non-universal generalizations in different sciences for inductive and explanatory purposes, analyzing properties such as how widely a generalization holds in space and time. We concentrate on developmental biology to distinguish and characterize two kinds of scientific generalizations—mechanisms and principles—that correspond to different explanatory aims. Our analysis shows why each kind of generalization is sought in a research context, thereby accounting for how the practices of inquiry are structured. It also diagnoses problematic assumptions in prior discussions, such as the assumption that abstraction is always positively correlated with generalizations of wide scope. Unification and Understanding: The Modal View 01:15PM - 03:15PM
Presented by :
Armond Duwell, University Of Montana It is common to distinguish classificatory, physical, and formal unification. Of these, only physical unification seems to have anything to do with explanation and hence understanding. In this paper, I argue that that view is incorrect. Classificatory and formal unification facilitate understanding. Moreover, I argue that theories that physically unify do not necessarily facilitate understanding better than non-unified theories. Good Parts and the Explanatory Mosaic of Science: A Carving Standard for the Philosophy of Mechanisms 01:15PM - 03:15PM
Presented by :
Kalewold Kalewold, Stanford University New mechanists forward an influential account of mechanisms in which entities (or parts) and their activities are organized so as to produce the phenomenon that calls out for explanation; and to explain is to describe that mechanism. However, critics charge that new mechanists have not provided a standard for identifying and individuating parts that blocks gerrymandered parts. To remedy this, I defend a carving principle that justifies the standard parts that are included as components of mechanistic explanations. My account grounds good parthood in robust explanatory relations I call the explanatory mosaic of science. Developing Models from Static Images of How Constrained Release of Free Energy Produces Work in Biological Mechanisms 01:15PM - 03:15PM
Presented by :
William Bechtel, University Of California, San Diego Philosophical accounts of biological mechanisms have only recently attended to the crucial role free energy plays in enabling the operation of mechanisms and have not addressed how scientists discover the role of free energy in the operation of biological mechanisms. To do so, I examine research on two mechanisms—the myosin motor in muscle contraction and the cyanobacterial circadian clock. I describe the discovery process in which researchers compare static images to determine the conformation changes in proteins produced by ATP hydrolysis and infer how these conformation changes generate forces that result in the phenomenon produced by the mechanism. | ||
03:15PM - 03:45PM Virtual Room | Coffee Break | ||
03:45PM - 05:45PM Sterlings 2 | Philosophy of Quantum Mechanics 1 Speakers
Francisco Calderón, PhD Student, University Of Michigan - Ann Arbor
Charles Sebens, Caltech
Jessica Oddan, Doctoral Candidate, University Of Waterloo
Laura Ruetsche, University Of Michigan - Ann Arbor
Moderators
Soazig Le Bihan, University Of Montana The Causal Axioms of Algebraic Quantum Field Theory: A Diagnostic 03:45PM - 05:45PM
Presented by :
Francisco Calderón, PhD Student, University Of Michigan - Ann Arbor This paper examines the axioms of algebraic quantum field theory (AQFT) that aim to characterize the theory as one that implements relativistic causation. I suggest that the spectrum condition (SC), microcausality (MC), and primitive causality (PC) axioms, taken individually, fail to fulfill this goal, contrary to what some philosophers have claimed. Instead, I will show that the "local primitive causality" (LPC) condition captures each axiom's advantages. This claim will follow immediately from a construction that makes explicit that SC, MC, and PC, taken together, imply LPC. Eliminating Electron Self-Repulsion 03:45PM - 05:45PM
Presented by :
Charles Sebens, Caltech To understand how problems of self-interaction are to be addressed in quantum electrodynamics, we can start by analyzing a classical theory of the Dirac and electromagnetic fields. In such a classical field theory, the electron has a spread-out distribution of charge that avoids some problems of self-interaction facing point charge models. However, there remains the problem that the electron will experience self-repulsion. This self-repulsion cannot be eliminated within classical field theory, but it can be eliminated from quantum electrodynamics in the Coulomb gauge by fully normal-ordering the Coulomb term in the Hamiltonian. Reconstructions of Quantum Theory as Successors to the Axiomatic Method 03:45PM - 05:45PM
Presented by :
Jessica Oddan, Doctoral Candidate, University Of Waterloo Reconstructions of quantum theory are a novel research program in theoretical physics aiming to uncover the unique physical features of quantum theory via axiomatization. I argue that reconstructions represent a modern usage of the axiomatic method as successors to von Neumann’s axiomatizations in quantum mechanics. The key difference between von Neumann’s applications and Hardy’s “Quantum Theory from five reasonable axioms” (Hardy 2001) is that von Neumann had no established mathematical formalism on which to base his axiomatization, whereas Hardy uses an established formalism as a constraint, a feature unique to the axiomatic method in the reconstruction programme. UnBorn: Probability in Bohmian Mechanics 03:45PM - 05:45PM
Presented by :
Laura Ruetsche, University Of Michigan - Ann Arbor Why are quantum probabilities encoded in measures corresponding to wave functions, rather than by a more general (or more specific) class of measures? Whereas orthodox quantum mechanics has a compelling answer to this question, Bohmian mechanics might not. | ||
03:45PM - 05:45PM Benedum | Intersection of values and health Speakers
Cruz Davis, Co-author, UMass Amherst
Adrian Erasmus, University Of Alabama
David Merli, Franklin & Marshall College
Bennett Knox, PhD Student, University Of Utah
Rebecca Dreier, University Of Tübingen Department Of Philosophy
Alison Springle, Assistant Professor , University Of Oklahoma Department Of Philosophy
Moderators
Serife Tekin, Poster Chair, University Of Texas At San Antonio Where's the value in health? 03:45PM - 05:45PM
Presented by :
David Merli, Franklin & Marshall College
Cruz Davis, Co-author, UMass Amherst Traditional theories of health and disease tend to focus either on the evaluative aspect of health at the cost of capturing its descriptive character, or on its descriptive character at the cost of capturing its evaluative aspect. We provide a naturalistically respectable account of health that captures both these features of health by locating the value in health in the mode of presentation of the concept [health] instead of in the worldly property. We argue that understanding [health] as a thick concept allows us to make good sense of important features of health judgments. The Bias Dynamics Model: Correcting for Meta-Biases in Therapeutic Prediction 03:45PM - 05:45PM
Presented by :
Adrian Erasmus, University Of Alabama Inferences from clinical research results to estimates of therapeutic effectiveness suffer due to various biases. I argue that predictions of medical effectiveness are prone to failure because current medical research overlooks the impact of a particularly detrimental set of biases: meta-biases. Meta-biases are linked to higher-level characteristics of medical research and their effects are only observed when comparing sets of studies that share certain meta-level properties. I offer a model for correcting research results based on meta-research evidence, the bias dynamics model, which employs regularly updated empirical bias coefficients to attenuate estimates of therapeutic effectiveness. The Institutional Definition of Psychiatric Condition and the Role of Well-Being in Psychiatry 03:45PM - 05:45PM
Presented by :
Bennett Knox, PhD Student, University Of Utah This paper draws on Quill Kukla’s “Institutional Definition of Health” to provide a definition of “psychiatric condition” that delineates the proper bounds of psychiatry. I argue that this definition must include requirements that psychiatrization of a condition benefit the well-being of 1) the society as a collective, and 2) the individual whose condition is in question. I then suggest that psychiatry understand individual well-being in terms of the subjective values of individuals. Finally, I propose that psychiatry’s understanding of collective well-being should be the result of a “socially objective” process, and give certain desiderata for this understanding. Trusting Traumatic Memory: Considerations from Memory Science 03:45PM - 05:45PM
Presented by :
Alison Springle, Assistant Professor , University Of Oklahoma Department Of Philosophy
Rebecca Dreier, University Of Tübingen Department Of Philosophy Court cases involving sexual assault and police violence rely heavily on victim testimony. We consider what we call the “Traumatic Untrustworthiness Argument” (TUA), according to which we should be skeptical about victim testimony because people are particularly liable to misremember traumatic events. The TUA is not obviously based in mere distrust of women, people of color, disabled people, poor people, etc. Rather, it seeks to justify skepticism on epistemic and empirical grounds. We consider how the TUA might appeal to the psychology and neuroscience of memory for empirical support. However, we argue that neither supports the TUA. | ||
03:45PM - 05:45PM Duquesne | Values in classification Speakers
Muhammad Ali Khalidi, Presenter, City University Of New York, Graduate Center
P.D. Magnus, University At Albany, State University Of New York
Mark Risjord, Professor, Emory University
Richard Lauer, Saint Lawrence University
Moderators
Peter Zachar, Presenter And Session Chair, Auburn University Montgomery Pluralism about Kinds and the Role of Non-Epistemic Values 03:45PM - 05:45PM
Presented by :
Muhammad Ali Khalidi, Presenter, City University Of New York, Graduate Center This paper relates discussions of scientific ontology to debates about the value-ladenness of science. First, I distinguish three types of pluralism about kinds and argue that none of them threatens realism. Then I argue that pluralist realism about kinds has implications for the debate about the role of non-epistemic values in science. Pluralist realists hold that there are more kinds than we will ever have the resources to focus on. Hence, while epistemic values are responsible for identifying kinds, non-epistemic values can play a role in deciding which ones to focus on in scientific theory and practice. Scurvy and the ontology of natural kinds 03:45PM - 05:45PM
Presented by :
P.D. Magnus, University At Albany, State University Of New York Some philosophers understand natural kinds to be the categories which are constraints on enquiry. In order to elaborate the metaphysics appropriate to such an account, I consider the complicated history of scurvy, citrus, and vitamin C. It may be tempting to understand these categories in a shallow way (as mere property clusters) or in a deep way (as fundamental properties). Neither approach is adequate, and the case instead calls for middle-range ontology: starting from categories which we identify in the world and elaborating their structure, but not pretending to jump ahead to a complete story about fundamental being. Purposes and Politics: Scientific Racism and the Empirical Constraints on Model Choice 03:45PM - 05:45PM
Presented by :
Mark Risjord, Professor, Emory University The 1950/51 UNESCO statements on race were opposed by a group of scientists who rejected the post-WWII scientific consensus that the human species does not divide neatly into races. Both sides of this dispute had explicit political purposes. The dispute turned on a difference between two models of the human species. Many accounts make model evaluation depend on the user's purposes. Applied to this case, such views render political the empirical question about whether races exist. This essay argues that there are empirical constraints on model use that are independent of the user's purposes and that constrain those purposes. Is Race Like Phlogiston? 03:45PM - 05:45PM
Presented by :
Richard Lauer, Saint Lawrence University Is race real? If so, what exactly is it? These questions have captivated philosophers and social scientists alike. Participants in these debates frequently appeal to race’s role in explaining various social phenomena, though they rarely engage with empirical social science. In this paper, we will argue that the kinds of empirical success that race enjoys in the social sciences do not support the claim that races, as used in social science research, are accurately represented. In fact, we shall argue that race's empirical success appears to be less than that of the phlogiston theory. | ||
03:45PM - 05:45PM Sterlings 3 | Thermodynamics and history of physics Speakers
Michael Veldman, Presenter, Duke
Eugene Y. S. Chua, Presenter, UC San Diego
Wayne Myrvold, The University Of Western Ontario
Katie Robertson, University Of Birmingham
Moderators
Mahmoud Jalloh, University Of Southern California Going to Where the Action Is: The Philosophical Significance of the Principle of Least Action 03:45PM - 05:45PM
Presented by :
Michael Veldman, Presenter, Duke A venerable narrative says that teleology died during the Scientific Revolution. McDonough (2020) recently objected that the principle of least action (PLA) shows teleology surviving into Enlightenment physics. Both narratives get the story wrong. The PLA’s history shows that what really happened is much more interesting. When a metaphysical principle became specifiable as a precise physical principle using increasingly sophisticated mathematics, it faced mathematical challenges to its viability, a new norm constraining metaphysics emerged, and, far from surviving, teleology in physics was put to rest by novel means. T Falls Apart: On the Status of Classical Temperature in Relativity 03:45PM - 05:45PM
Presented by :
Eugene Y. S. Chua, Presenter, UC San Diego Taking the formal analogies between black holes and classical thermodynamics seriously seems to first require that classical thermodynamics applies in relativistic regimes. Yet, by scrutinizing how classical temperature is extended into special relativity, I argue that it falls apart. I examine four consilient procedures for establishing classical temperature: the Carnot process, the thermometer, kinetic theory, and black-body radiation. I show how their relativistic counterparts demonstrate no such consilience in defining relativistic temperature. As such, classical temperature doesn’t appear to survive a relativistic extension. I suggest two interpretations for this situation: eliminativism akin to simultaneity, or pluralism akin to rotation. Two Conceptions of Thermodynamics 03:45PM - 05:45PM
Presented by :
Wayne Myrvold, The University Of Western Ontario Two conceptions of thermodynamics are distinguished. On one, thermodynamics is a resource theory, a theory about how agents with specified means of manipulating a physical system can exploit its physical properties to achieve specified ends, such as obtaining useful work. On the other, thermodynamics has been severed from its roots in technological considerations, and is a theory of the macroscopic bulk properties of matter. I argue that the envisaged severance has not and cannot be wholly achieved, and that recognizing this sheds light on the philosophical conundrums associated with thermodynamics, in particular its relation to statistical mechanics. Is thermodynamics subjective? 03:45PM - 05:45PM
Presented by :
Katie Robertson, University Of Birmingham Thermodynamics is an unusual theory. Prominent figures, including J.C. Maxwell and E.T. Jaynes, have suggested that thermodynamics is anthropocentric. Additionally, fruitful contemporary approaches to quantum thermodynamics label thermodynamics a ‘subjective theory’. Here, we evaluate some of the strongest arguments for anthropocentrism based on the heat/work distinction, the second law, and the nature of entropy. We show that these arguments do not commit us to an anthropocentric view but instead point towards a resource-relative understanding of thermodynamics which can be shorn of the ‘subjective gloss’. | ||
03:45PM - 05:45PM Board Room | Decision Theory Speakers
Remco Heesen, University Of Western Australia And University Of Groningen
Xin Hui Yong, University Of Pittsburgh
Milana Kostic, Speaker, University Of California, San Diego
Tomasz Wysocki, University Of Pittsburgh HPS
Moderators
Tianqin Ren, University Of Missouri How to Measure Credit 03:45PM - 05:45PM
Presented by :
Remco Heesen, University Of Western Australia And University Of Groningen Research on the credit economy in academic science has typically assumed, without argument, that academics are expected credit maximizers. How might this assumption be justified? And how might the measure-theoretic foundations of credit be secured? Two approaches are considered: one in which credit is equated with citations, and one, based on von Neumann's expected utility theorem, in which credit is constructed from academics' preferences over lotteries among research records. The latter is shown to be the weakest possible defense of expected credit maximization in a formally precise sense, so those assuming expected credit maximization are committed to it. Accidentally I learnt: On Relevance and Information Resistance 03:45PM - 05:45PM
Presented by :
Xin Hui Yong, University Of Pittsburgh Despite efforts to teach agents about their privilege by minimizing the cost of information, Kinney and Bright argue that risk-sensitive frameworks like Buchak's allow privileged agents to rationally shield themselves from this costless and relevant information. In response, I show that uncertainty about information's relevance may block one from rationally upholding ignorance. I explore the implications and interpretations of the agent's uncertainty; these educational initiatives may not be as doomed as suggested, and agents may feel better having learned something but rationally decline to learn it now. This has upshots for the viability of risk-sensitive expected utility theory in explaining elite-group ignorance. On the Utility of Research into Geoengineering Strategies for Risk-avoidant Agents 03:45PM - 05:45PM
Presented by :
Milana Kostic, Speaker, University Of California, San Diego In a recent paper Winsberg (2021) argues in favor of research into geoengineering by relying on Good's theorem, which states that conducting research maximizes one's expected utility. However, Good's theorem sometimes fails for risk-avoidant agents (Buchak 2010). Since risk-avoidance captures some of the 'precautionary' intuitions that critics of geoengineering share, it is important to see if research into geoengineering would maximize one's utility if risk-avoidance is taken into account. I show that there are further considerations to be taken into account if one wants to conclude that conducting research into geoengineering maximizes utility based on Good's results. Causal Decision Theory for The Probabilistically Blindfolded 03:45PM - 05:45PM
Presented by :
Tomasz Wysocki, University Of Pittsburgh HPS If you can’t or don’t want to ascribe probabilities to the consequences of your actions, classic causal decision theory won’t let you reap the undeniable benefits of causal reasoning for decision making. The following theory fixes this problem. I explain why it’s good to have a causal decision theory that applies to non-deterministic yet non-probabilistic decision problems. I then introduce the underdeterministic framework and subsequently use it to formulate underdeterministic decision theory. The theory applies to decisions with infinitely many possible consequences and to agents who can’t decide on a single causal model representing the decision problem. | ||
03:45PM - 05:45PM Fort Pitt | Biology and Values Speakers
David Teira, UNED
Oriol Vidal, Universitat De Girona
Heather Browning, Lecturer, University Of Southampton
Nicholas Evans, University Of Massachusetts, Lowell
Charles Pence, Université Catholique De Louvain
Derek Halm, University Of Utah
Moderators
Sarah Roe, Southern Connecticut State University Are animal breeds social kinds? 03:45PM - 05:45PM
Presented by :
David Teira, UNED
Oriol Vidal, Universitat De Girona Breeds are classifications of domestic animals that share a set of conventional phenotypic traits. We claim that, despite classifying biological entities, animal breeds are social kinds. Breeds originate in a social mechanism (artificial selection) by which humans dominate the agency of certain animals with respect to their reproductive choices. The stability of breeds is typical of social, not biological kinds: they allow for scientific predictions but, like any other social kind, once the social forces sustaining the classification vanish, so does the kind. Breeds provide a simple scale model to discuss intervention on more complex social kinds like race or gender. Validating indicators of animal welfare 03:45PM - 05:45PM
Presented by :
Heather Browning, Lecturer, University Of Southampton Measurement of subjective animal welfare creates a special problem in validating the measurement indicators. Validation is required to ensure indicators are measuring the intended target state, and not some other object. While indicators can usually be validated by looking for correlation between target and indicator under controlled manipulations, this is not possible when the target state is not directly accessible. In this paper, I outline a four-step approach using the concept of robustness that can help with validating indicators of subjective animal welfare. Gain of Function Research and Model Organisms in Biology 03:45PM - 05:45PM
Presented by :
Nicholas Evans, University Of Massachusetts, Lowell
Charles Pence, Université Catholique De Louvain In this paper we examine “gain of function” (GOF) research in virology, which results in a virus that is substantially more virulent or transmissible than its wild antecedent. We examine the typical animal model, the ferret, arguing that it does not easily satisfy potential desiderata for an animal model. We then discuss how these epistemic limitations bear on practical and policy questions around the risks and benefits of GOF research. We conclude with a reflection on how philosophy of science can contribute to policy discussions around the risks, benefits, and relative priority of particular life sciences research. The Epistemological & Conservation Value of Biological Specimens 03:45PM - 05:45PM
Presented by :
Derek Halm, University Of Utah Natural history collections are repositories of diverse information, including collected and preserved biological specimens. These specimens are sometimes integrated into conservation decision-making, where some practitioners claim that specimens may be necessary for conservation. This is an overstatement. To correct this, I engage with the current literature on specimen collection to show that while specimens have epistemic shortcomings, they can be useful for conservation projects depending on the background or shared values of scientists and decision-makers. This modest approach acknowledges that specimens provide a unique information channel while demarcating where and when values intercede into conservation planning. | ||
03:45PM - 05:45PM Forbes | Methodology and Measurement Speakers
Cyrille Imbert, Archives Poincaré, CNRS - Université De Lorraine
Miguel Ohnesorge, PhD Student, Department Of History And Philosophy Of Science,University Of Cambridge
David Waszek, Post-doctoral Researcher, CNRS
Caterina Marchionni, University Of Helsinki
Jaakko Kuorikoski, Presenter, University Of Helsinki
Matteo Colombo, Reviewer, Tilburg
Moderators
Jennifer Jhun, Reviewer, Duke The Epistemic Privilege of Measurement: Motivating a Functionalist Account 03:45PM - 05:45PM
Presented by :
Miguel Ohnesorge, PhD Student, Department Of History And Philosophy Of Science,University Of Cambridge Philosophers and metrologists have refuted the view that measurement’s epistemic privilege in scientific practice is explained by its theory-neutrality. Rather, they now explicitly appeal to the role that theories play in measurement. I formulate a challenge for this view: scientists sometimes ascribe epistemic privilege to measurements even if they lack a shared theory about their target quantity, which I illustrate through a case study from early geodesy. Drawing on that case, I argue that the epistemic privilege of measurement precedes shared background theory and is better explained by its pre-theoretic function in enabling a distinctive kind of inquiry. Are larger studies always better? Sample size and data pooling effects in research communities 03:45PM - 05:45PM
Presented by :
David Waszek, Post-doctoral Researcher, CNRS
Cyrille Imbert, Archives Poincaré, CNRS - Université De Lorraine The persistent pervasiveness of small studies in empirical fields is regularly deplored in scientific discussions. Taken individually, higher-powered studies are more likely to be truth-conducive. However, are they also beneficial for the wider performance of truth-seeking communities? We study the impact of sample sizes on collective exploration dynamics under ordinary conditions of resource limitation. We find that large collaborative studies, because they decrease diversity, can have detrimental effects in realistic circumstances that we characterize precisely. We show how limited inertia mechanisms may partially solve this pooling dilemma and discuss our findings briefly in terms of editorial policies. Evidential variety and mixed methods research in social science 03:45PM - 05:45PM
Presented by :
Jaakko Kuorikoski, Presenter, University Of Helsinki
Caterina Marchionni, University Of Helsinki Mixed methods research - the combination of qualitative and quantitative data within the same research design to strengthen causal inference - is gaining prominence in the social sciences, but its benefits are contested. Social scientists and philosophers have sought to cash out the epistemic rationale of mixed-methods research but none of the available accounts adequately captures the epistemic gains of mixing methods within a single research design. We argue that what matters is variety of evidence, not of data or methods, and that there are distinct epistemic principles grounding the added value of variety of evidence for causal inference. Scientific credit and the Matthew effect in neuroscience 03:45PM - 05:45PM
Presented by :
Matteo Colombo, Reviewer, Tilburg According to the Matthew effect, scientists who have previously been rewarded are more likely to be rewarded again. Although widely discussed, it remains contentious what explains this effect and whether it's unfair. Using data about neuroscientists, we examine three factors relevant to clarifying these issues: scientists’ fecundity in supervision, H-index and the location where they obtained a PhD. We find a correlation between location and H-index, but no association between fecundity and H-index. This suggests the Matthew effect entrenches status hierarchies in the scientific credit system not because of exploitative supervisors but because of lucky geographical factors. | ||
03:45PM - 05:45PM Smithfield | Health and Pandemic Policy Speakers
Mathias Frisch, Presenter, Leibniz Universität Hannover
Lucie White, Utrecht University
Seth Goldwasser, Doctoral Candidate In Philosophy, University Of Pittsburgh Department Of Philosophy
Arjun Devanesan, King’s College London
Moderators
Craig Callender, University Of California, San Diego Use and Misuse of Models in Pandemic Policy Advice 03:45PM - 05:45PM
Presented by :
Mathias Frisch, Presenter, Leibniz Universität Hannover I defend the use of early Covid-19 models in support of social distancing measures against criticisms. Paying close attention to the epistemology of scientific modeling and to what is required of models for the purpose of underwriting precautionary reasoning suggests that epidemiological models were adequate for the purpose of supporting social distancing measures as a form of precaution. Decision-Making Under Uncertainty: Precautionary Reasoning, Pandemic Restrictions and Asymmetry of Control 03:45PM - 05:45PM
Presented by :
Lucie White, Utrecht University The precautionary principle is often put forward as potentially useful guide to avoiding catastrophe under conditions of uncertainty. But finding an adequate formulation of the principle runs into a problem when needed precautionary measures also have potentially catastrophic consequences – the imperative to avoid catastrophe appears to recommend both for and against the measures. Drawing from the early pandemic, we suggest a way around this “problem of paralysis”: We should recognize and incorporate an asymmetry between our options, based on whether there is a possibility of intervening later to prevent the worst outcome. Finding Normality in Abnormality: On the Ascription of Normal Function to Cancer 03:45PM - 05:45PM
Presented by :
Seth Goldwasser, Doctoral Candidate In Philosophy, University Of Pittsburgh Department Of Philosophy Cancer biology features ascriptions of normal function to cancer. Normal functions are activities that parts of systems, in some minimal sense, should perform. Cancer biologists’ ascriptions pose difficulties for two main approaches to normal function, leaving a gap in the literature. One approach claims that normal functions are activities that parts are selected for. However, some parts of cancers have normal functions but aren’t selected to perform them. The other approach claims that normal functions are part-activities that are typical for the system and contribute to survival/reproduction. However, cancers are too heterogeneous to establish what’s typical across a type. Mereology of pregnancy - an immunological perspective 03:45PM - 05:45PM
Presented by :
Arjun Devanesan, King’s College London Elselijn Kingma (2018, 2019) argues that the popular view that the foetus is merely contained by the mother is inconsistent with the biology of pregnancy. Instead, she argues that the foetus is a part of the mother based on various physiological criteria. I argue that immune tolerance, a criterion of parthood for Kingma, cannot be a criterion for spatiotemporal parthood because it is nontransitive and symmetrical while spatiotemporal parthood is transitive and antisymmetrical. However, it is clear that the relation is stronger than containment. So, I propose that the foetus and mother overlap - what Finn (2021) calls the Overlap View. | ||
03:45PM - 05:45PM Sterlings 1 | Philosophy of Physics: astronomy and cosmology Speakers
Siddharth Muthukrishnan, Graduate Student, University Of Pittsburgh HPS
Siyu Yao, Indiana University Bloomington
Juliusz Doboszewski, University Of Bonn
Jamee Elder, Harvard University
Niels Linnemann, University Of Bremen
Moderators
Caspar Jacobs, Presenter, Merton College, University Of Oxford Unpacking Black Hole Complementarity 03:45PM - 05:45PM
Presented by :
Siddharth Muthukrishnan, Graduate Student, University Of Pittsburgh HPS Black hole complementarity is an influential set of ideas that respond to the black hole information paradox. Unpacking this literature, I argue that black hole complementarity is about the consistency of quantum characterizations of an evaporating black hole and I delineate two consistency claims—i.e., two principles of black hole complementarity: operational complementarity and descriptive complementarity. A series of thought experiments in the physics literature on black hole complementarity gives us strong reasons to adopt the operational principle and reject the descriptive principle. Consequently, if we can stomach operationalism, then operational complementarity may suffice to resolve the black hole information paradox. Excavation in the Sky: Historical Inferences in Astronomy 03:45PM - 05:45PM
Presented by :
Siyu Yao, Indiana University Bloomington Astronomy shares many similarities with historical sciences: the reconstruction of token events, the lack of manipulation, and the reliance on traces. I highlight two benefits of viewing astronomy as a historical science. First, the methodology of historical sciences provides a more adequate description of how astronomers study token events and regularities. Second, how astronomers identify traces of past events offers a more delicate understanding of what traces are. The identification of traces is only gradually achieved through iterations between data-driven approaches and theory-driven approaches, together with the cross-validation between multiple relevant historical events and between diverse datasets. How Theory-laden are Observations of Black Holes? 03:45PM - 05:45PM
Presented by :
Jamee Elder, Harvard University
Juliusz Doboszewski, University Of Bonn In this paper, we assess the extent to which contemporary observations of black holes, particularly those of the LIGO-Virgo and Event Horizon Telescope Collaborations, are “theory-laden”. General relativistic assumptions enter into the methods of both experiments through the use of simulations of black hole spacetimes. This includes numerical relativity simulations in the case of LIGO-Virgo and general relativistic magnetohydrodynamic simulations in the case of the Event Horizon Telescope. We argue that simulations play an “ampliative” role in both experiments, and that this role is problematically circular in the former case, but not the latter. GR as a classical spin-2 theory? 03:45PM - 05:45PM
Presented by :
Niels Linnemann, University Of Bremen The spin-2 view on GR has been extremely influential in the particle physics community, including for the development of string theory. While its heuristic value is beyond doubt, we argue that a foundationalist spin-2 view of GR, as often tacitly adopted, runs into a dilemma: either the spin-2 view is physically incoherent, or it leads to an absurd multiplication of alternative viewpoints on GR — in an immediate clash with accepted standards of theorising, and current knowledge of (quantising) gravity. | ||
03:45PM - 05:45PM Birmingham | Philosophy of Cognitive Science Speakers
Nick Brancazio, University Of Wollongong
Zhexi Zhang, Graduate Student, University Of California, Davis
Vincenzo Crupi, University Of Turin
Arnon Levy, The Hebrew University Of Jerusalem
Moderators
Tim Elmo Feiten, University Of Cincinnati Easy Alliances: The Methodology of Minimally Cognitive Behavior (MMCB) and Basal Cognition 03:45PM - 05:45PM
Presented by :
Nick Brancazio, University Of Wollongong The methodology of minimally cognitive behavior (Beer 1996, 2019, 2020a; Beer and Williams 2009) offers a strategy for evaluating cognitive behaviors that can accommodate a variety of explanatory strategies through the use of toy models. These models can be used to test conceptual frameworks and their adequacy for theory formation. We show that the program of basal cognition (Lyon 2019, Lyon et al. 2021) offers a principled way to understand cognitive behaviors through its integration with biological functions. Using basal cognition and the MMCB in tandem gets us closer to a concrete means of evaluation and comparison of conceptual frameworks. Representational Enactivism 03:45PM - 05:45PM
Presented by :
Zhexi Zhang, Graduate Student, University Of California, Davis In the literature on enactive approaches to cognition, representationalism is often seen as a rival theory. In this paper, I argue that enactivism can be fruitfully combined with representationalism by adopting Frances Egan’s content pragmatism. This representational enactivism avoids some of the problems faced by anti-representational versions of enactivism. Most significantly, representational enactivism accommodates empirical evidence that neural systems manipulate representations. In addition, representational enactivism provides a valuable insight into how to identify representational content, especially in brainless organisms: we can identify representational content by investigating autopoietic processes. A critique of pure Bayesian cognitive science 03:45PM - 05:45PM
Presented by :
Vincenzo Crupi, University Of Turin Bayesian approaches to human cognition have been extensively advocated in recent decades, but sharp objections have been raised too. We outline a diagnosis of what has gone wrong with prevalent strands of Bayesian cognitive science (pure Bayesian cognitive science), relying on selected illustrations from the psychology of reasoning and tools from the philosophy of science. Bayesians’ reliance on the so-called method of rational analysis is a key point of our discussion. We tentatively conclude on a constructive note, though: an appropriately modified variant of Bayesian cognitive science can still be coherently pursued. Do Bayesian Models of Cognition Show That We Are (Epistemically) Rational? 03:45PM - 05:45PM
Presented by :
Arnon Levy, The Hebrew University Of Jerusalem “According to [Bayesian] models,” says a recent textbook in cognitive neuroscience, “the human mind behaves like a capable data scientist.” Do they? That is to say, do such models show we are rational? I argue that Bayesian models of cognition, perhaps surprisingly, do not, and indeed cannot, show that we are Bayesian-rational. The key reason is that they appeal to approximations, a fact that carries significant implications. After outlining the argument, I critique two responses, seen in recent cognitive neuroscience. One says that the mind can be seen as approximately Bayes-rational, while the other reconceives norms of rationality. | ||
06:00PM - 07:00PM Kings 5 | Past Presidential Address: “Philosophy of the Field, In the Field” Please join us as PSA Past President, Alison Wylie, delivers the Presidential address that she was unable to deliver at PSA2020/21. | ||
07:00PM - 09:00PM Kings 3, 4 | PSA Poster Forum & Reception Format : Poster Abstracts Speakers
Stephen Katz, Washington State University
Timotej Cejka, University Of Chicago
Nuhu Osman Attah, University Of Pittsburgh
Walter Orozco, University Of Cincinnati
Paola Castaño, University Of Exeter
Zachary Mayne, Undergraduate, Montana State University
Laurenz Casser, The University Of Texas At Austin
Zina Ward, Florida State University
Yael Friedman, University Of Oslo
Paul Holtzheimer, Dartmouth College
Adina Roskies, Dartmouth College
Ashley Walton, Dartmouth College
Robert Roth, Dartmouth College
Dzintra Ullis, University Of Pittsburgh
Mara McGuire, University Of Pittsburgh
Colin Allen, University Of Pittsburgh HPS
Brendan Fleig-Goldstein, University Of Pittsburgh
Jean-Philippe Thomas, PhD Student, University Of Montreal
Laura Menatti, University Of The Basque Country
Angarika Deb, Central European University
Joyce Koranteng-Acquah, PhD Student (participant), University Of Exeter
Till Grüne-Yanoff, Royal Institute Of Technology (KTH) Stockholm
Nathanael Sheehan, Researcher , University Of Exeter
Rose Trappes, University Of Exeter
Fotis Tsiroukis, PhD Candidate, University Of Exeter
Michael Goldsby, Washington State University
Andrej Sali, Professor, University Of California, San Francisco
Alessandra Buccella, Chapman University
Sander Beckers, Poster Presenter, Moderator, Question-asker, University Of Tübingen
Clint Hurshman, University Of Kansas
Nick Brancazio, University Of Wollongong
Yoshinari Yoshida, Graduate Student, University Of Minnesota
Patrick McGivern, University Of Wollongong
Marina DiMarco, University Of Pittsburgh HPS
Jared Ifland, Graduate Student And Teaching Assistant, Florida State University
Colby Clark, PhD Student, University Of Kentucky
Jaipreet Mattu, Western University
Molly Kao, University Of Montreal
Daisy Underhill, PhD Student, University Of California, Davis
Michael Massussi, University Of Montreal
David Rattray, University Of Toronto
Jacob P. Neal, Presenter, Western University
Stella Fillmore-Patrick, The University Of Texas At Austin
Nathan Lackey, Graduate Student, University Of Minnesota
Vitaly Pronskikh, Fermi National Accelerator Laboratory
Nathan Gabriel, Non-presenting Co-author, University Of California Irvine
Aja Watkins, PhD Candidate, Boston University
Alan Love, University Of Minnesota
Max Dresow, Postdoctoral Associate, University Of Minnesota
Caitlin Mace, University Of Pittsburgh HPS
Ingo Brigandt, University Of Alberta
Héloïse Athea, PhD Student, Institut D'Histoire Et De Philosophie Des Sciences Et Des Techniques (IHPST)
Matthew Coates, University Of California, Irvine
Kati Kish Bar-On, Massachusetts Institute Of Technology
Jun Otsuka, Kyoto University
India Bhalla-Ladd, University Of California, Irvine
Owen Chevalier, Western University
Kathryn Petrozzo, Graduate Student, University Of Utah
Sabrina Hao, University Of Pittsburgh
Rachel Pedersen, University Of Minnesota
Joshua Eisenthal, Research Assistant Professor, Caltech
Andrew Evans, Graduate Student, University Of Cincinnati
Ge Fang, Washington University In St. Louis
Daniel Saunders, Presenter, University Of British Columbia
Yu Chung, University Of Pennsylvania
Eden McQueen, Post-doctoral Fellow, University Of Michigan
TJ Perkins, University Of Utah
Matthew Brewer, PhD Candidate, Boston University
Leonardo Bich, Ramón Y Cajal Senior Researcher, University Of The Basque Country
Janella Baxter, Sam Houston State University
Ryan McCoy, Poster Presenter, Session Chair, University Of Kentucky
Rochelle Tractenberg, Georgetown University
Sophia Crüwell, University Of Cambridge
Josh Hunt, Post Doc, Massachusetts Institute Of Technology
Joe Roussos, Institute For Futures Studies
Michael Stoeltzner, University Of South Carolina
Siddharth Muthukrishnan, Graduate Student, University Of Pittsburgh HPS
Rose Novick, University Of Washington
Suzanne Thornton, Symposiast, Swarthmore College
Shimin Zhao, University Of Wisconsin - Madison
Chanwoo Lee, University Of California, Davis
Annelies Pieterman-Bos, PhD Candidate, Utrecht University
Sabina Leonelli, University Of Exeter
Vincent Cuypers, Hasselt University
Siska De Baerdemaeker, Stockholm University
Isaac Record, Michigan State University
Pedro Bravo De Souza, Universidade De São Paulo
Noa Dahan, Western Michigan University
Sloane Wesloh, Graduate Student, University Of Pittsburgh HPS
Matilde Carrera, Boston University
Isaac Davis, Lecturer, Yale University
Abigail Holmes, Univeristy Of Notre Dame
Jacqueline Wallis, PhD Student, University Of Pennsylvania
Viorel Pâslaru, Poster Presenter And Sessions Chair, University Of Dayton
Franziska Reinhard, PhD Researcher, University Of Vienna
Chelsie Greenlee, Graduate Student, University Of Notre Dame
Myron A Penner, Trinity Western University
Amanda J Nichols, Oklahoma Christian University
Vanessa Bentley, University Of Central Oklahoma
Juliana Gutiérrez Valderrama, Universidad De Los Andes
Charlotte Zemmel, PhD Student, University Of Cambridge
Syed AbuMusab, University Of Kansas
Bruce Rushing, University Of California, Irvine
Riana Betzler, San Jose State University
Adam Chin, University Of California, Irvine
Agnes Bolinska, University Of South Carolina
Shahin Kaveh, Poster Presenter, Independent Researcher
Tyler Delmore, York University, Toronto
Samuli Reijula, University Of Helsinki
Philippe Verreault-Julien, Eindhoven University Of Technology
Siyu Yao, Indiana University Bloomington
Inkeri Koskinen, University Of Helsinki
Bixin Guo, University Of Pittsburgh
Sabina Vaccarino Bremner, Assistant Professor, Philosophy, University Of Pennsylvania
Cyrille Imbert, Archives Poincaré, CNRS - Université De Lorraine
Dragana Bozin, University Of Oslo
Chloé De Canson, University Of Groningen
Hannah Rubin, Presenter, University Of Notre Dame
Federica Bocchi, Presenter, Boston University
David Waszek, Post-doctoral Researcher, CNRS
Seán Muller, University Of Johannesburg
Jeffrey Bagwell, Presenter, Northern Virginia Community College
Tanner Leighton, University Of Pittsburgh HPS
Cory Wright, Cal State Long Beach
Ning Shao, Co-author, Cal State Long Beach
Oscar Westerblad, Department Of History And Philosophy Of Science, University Of Cambridge
Stephen Perry, Graduate Student, University Of Pittsburgh Department Of Philosophy
Christopher Cosans, Abstract Presenter, University Of Mary Washington
Jesse Hamilton, PhD Student, University Of Pennsylvania
Lida Sarafraz, Ph.D. Candidate, University Of Utah
Christopher Joseph An, PhD Student, University Of Edinburgh
Paul Franco, University Of Washington
Caleb Hylkema, PhD Student, University Of Utah
Adam Smith, PhD Student, University Of Utah
Julie Schweer, PhD Student, Karlsruhe Institute Of Technology
Michael Begun, University Of Pittsburgh
Michaela Egli, University Of Geneva
Mingjun Zhang, Fudan University
Francisco Pipa, Department Of Philosophy, University Of Kansas
Kardelen Kucuk, PhD Student, Western University
Thomas De Saegher, The University Of Western Ontario
Emma Cavazzoni, University Of Exeter
Michael Miller, Reviewer, University Of Toronto
Patrick Fraser, University Of Toronto
Beckett Sterner, Presenting Author, Arizona State University
True Gibson, Graduate Student, University Of California, Irvine
Robert Kok, University Of Utah
Shereen Chang, Postdoc, University Of Guelph
Hasan Roshan, Washington State University |
Day 3, Nov 12, 2022 | |||
08:00AM - 09:00AM Kings 3, 4 | Jessica Pfeifer PSA2022 Service Breakfast In recognition of the many years of service provided by former Executive Director, Jessica Pfeifer, we would like to welcome all volunteers and session chairs to enjoy breakfast! | ||
08:30AM - 06:00PM Kings Terrace | Nursing Room | ||
08:30AM - 06:00PM Kings Plaza | Childcare Room | ||
08:30AM - 06:00PM Kings Garden 1, 2 | Book Exhibit | ||
09:00AM - 11:45AM Board Room | Consensus and Dissent in Science: New Perspectives Speakers
Kristen Intemann, Presenter, Montana State University
Inmaculada De Melo-Martin, Weill Cornell Medicine--Cornell University
Boaz Miller, Zefat Academic College
Haixin Dang, University Of Nebraska Omaha
Julie Jebeile, Universität Bern
Mason Majszak, Universität Bern
Miriam Solomon, Temple University
Moderators
Daisy Underhill, PhD Student, University Of California, Davis Scientific consensus plays a crucial role in public life. In the face of increasing science denialism, scientists are under pressure to present themselves as a united front to combat misinformation and conspiracy theories. However, the drive for consensus also has negative epistemic consequences, such as masking expert disagreement and obscuring value judgments. There exists widespread agreement among philosophers that dissent plays an important epistemic role in scientific communities. Disagreements among scientists are inevitable in areas of active research and dissent is crucial in facilitating collective inquiry. How should we understand the epistemic role of dissent and determine when it is normatively appropriate? Does scientific consensus have any intrinsic epistemic value? What consensus-generating methods are apt and in which circumstances? The aim of this symposium is to present new research on the social epistemology of consensus and dissent. The papers collected in this symposium address the question of how to balance the epistemic advantages and disadvantages of consensus and dissensus. Through a variety of different case studies, ranging from pandemic policy to medical imaging and climate science, the papers offer different perspectives on how scientists can better communicate disagreement when interfacing with policy makers and the public. Commentary from Miriam Solomon 09:00AM - 11:45AM
Presented by :
Miriam Solomon, Temple University The symposium session, Consensus and Dissent in Science: New Perspectives, will end with a commentary on the papers by Miriam Solomon. Solomon has extensively studied the social epistemology of consensus and dissent. For example, Solomon (2001) criticizes the view that consensus is an aim of, or a regulative ideal for, scientific inquiry. According to her, the existence of scientific dissent is normal, and a distribution of different views in the scientific community that is proportional to each view’s relative empirical success is the desirable normative situation. In Solomon (2015), she appreciates the importance of consensus in medicine and, more specifically, the institution of consensus conferences. Solomon will evaluate the papers in the symposium within the wider context of social epistemic critiques of consensus building in science. Solomon, M. (2001). Social Empiricism. MIT Press. Solomon, M. (2015). Making Medical Knowledge. Oxford University Press. Expert Judgment in Climate Science 09:00AM - 11:45AM
Presented by :
Julie Jebeile, Universität Bern
Mason Majszak, Universität Bern Consensus is often regarded as an important criterion for laypeople or decision-makers to arbitrate between the opinions of experts. Other criteria include the track record and unbiasedness of experts, as well as the validity of evidence and soundness of arguments. Overall, these criteria aim to ensure that expert judgment is grounded in objective arguments and is not a mere subjective belief or expression of the experts’ interests. In particular, consensus is supposed to guarantee a certain intersubjectivity. In this paper, we argue that the subjective aspects of expert judgment, such as intuitions and values, which consensus and the other criteria are supposed to counteract, actually bestow epistemic power upon those judgments. To show this, we explore the role of expert judgment in climate science. We show that expert judgment can be found throughout the scientific process, in model creation and utilization, model evaluation, and data interpretation, and ultimately in the quantification and communication of uncertainties to policy-makers. We argue that expert judgment is used for the purpose of supplementing models and managing uncertainty. First, as no model can perfectly represent the target, expert judgment is used as an alternative cognitive resource in order to provide climate projections and associated probabilities. Second, expert judgment is used as a means for quantifying epistemic uncertainty surrounding both general theories and specific scientific claims. This is shown through the IPCC’s use of confidence and likelihood metrics for evaluating uncertainty. We further highlight that the production of an expert judgment is more epistemically opaque than that of computer simulations, as this production is partly internal, mental, and thereby non-accessible. How, then, can we justify that expert judgment can still supplement models and manage uncertainty? A pessimistic view would answer that expert judgment is simply a last resort: facing high uncertainty, one has no other choice than to appeal to expert judgment. An optimistic view would instead recognize that there is some quality in expert judgment that makes it a precious cognitive resource. We contend that this quality lies in its subjective aspects. First, we argue that the trustworthiness of an expert judgment rests on the expert being exceptionally well-informed, not an interchangeable rational agent, due to their education and professional experience; with experience comes tacit knowledge, and thereby insight and, to some extent, intuition. Second, we argue that, while values are possible sources of scientific disagreement, if we were to remove their influence from expert judgments, we would be left in the same position as we started: a wealth of uncertainty and no practical way to overcome the challenge. Experts would indeed be left as mere databases, where information would be an input stored for later recall. Furthermore, under specific circumstances we define, value differences within an elicited group of experts can provide the condition of independence for rational consensus, as aimed at by the elicitation methods that score, combine, and aggregate expert judgments into structured judgments in the IPCC reports. Minority Reports: Registering Dissent in Science 09:00AM - 11:45AM
Presented by :
Haixin Dang, University Of Nebraska Omaha Consensus reporting is valuable because it allows scientists to speak with one voice and offer the most robust scientific evidence when interfacing with policymakers. However, what should we do when consensus does not exist? In this paper, I argue that we should not always default to majority reporting or consensus building when a consensus does not exist. Majority reporting does not provide epistemically valuable information and may in fact further confuse the public, because majority reporting obscures underlying justifications and lines of evidence, which may in fact be in conflict or contested. Instead, when a consensus does not exist, I argue that minority reporting, in conjunction with majority reporting, may be a better way for scientists to give high-quality information to the public. Through a minority report, scientists will be able to register dissenting viewpoints and give policymakers a better understanding of how science works. For an instructive epistemic model of how minority reports may work, I turn to an analogy with the U.S. Supreme Court. The court issues majority opinions, which are legally binding, and dissenting opinions when there exists significant divergence in views. The dissenting opinion is epistemically valuable in several ways (Ginsburg 2010). The dissent can help the author of the majority opinion clarify and sharpen her own reasoning, thereby increasing the quality of reasoning of the court in general. By laying out a diverging set of legal reasoning, the justice allows future legal cases to be brought and worked on using these diverging reasonings (Sunstein 2014). Furthermore, justices may also write concurring opinions when they agree with the ruling but for different legal reasons. I argue that this epistemic model of the Supreme Court, which allows for minority and concurring reports, can be extended to science. As scientific societies and expert panels are increasingly being called upon to produce consensus or majority reports to guide policy, these groups need an epistemic mechanism to register dissent on issues where there exists no strong consensus. While the majority report should be given the most weight, minority reports can shed light on underlying reasonings and value judgments that would otherwise be hidden in a majority or consensus report. If our goal in asking scientists for their guidance is to receive high-quality information on which to make decisions, then we should allow for minority reporting as a mechanism to gain a deeper understanding of the state of the science. Finally, I address some objections, the most pressing of which is that minority reporting may be particularly sensitive to capture by elites or special interests that seek to undermine public action. I argue for mechanisms that can limit the capture of dissenting voices by outside interests. Ginsburg, R. B. (2010). The role of dissenting opinions. Minnesota Law Review, 95, 1. Sunstein, C. R. (2014). Unanimity and disagreement on the Supreme Court. Cornell Law Review, 100, 769. Algorithmically Manufactured Scientific Consensus 09:00AM - 11:45AM
Presented by :
Boaz Miller, Zefat Academic College Scientists have started to use algorithms to manufacture a consensus from divergent scientific judgments. One area in which this has been done is the interpretation of MRI images. This paper consists of a normative epistemic analysis of this new practice. It examines a case study from medical imaging, in which a consensus about the segmentation of the left ventricle on cardiac MRI images was algorithmically generated. Algorithms in this case performed a dual role. First, algorithms automatically delineated the left ventricle – alongside expert human delineators. Second, algorithms amalgamated the different human-generated and algorithm-generated delineations into a single segmentation, which constituted the consensus outcome (an illustrative sketch of such an amalgamation step follows this session listing). My paper analyses the strengths and weaknesses of the process used in this case study, and draws general lessons from it. I analyze the algorithms that were used in this case, their strengths and weaknesses, and argue that the amalgamation of different human and non-human judgments contributes to the robustness of the final consensus outcome. Yet in recent years, there has been a move away from relying on multiple algorithms for analyzing the same data in favour of sole reliance on machine learning algorithms. I argue that despite the superior performance of machine learning algorithms compared to other types of algorithms, the move toward sole reliance on them in cases such as this ultimately damages the robustness and validity of the final outcome reached. This is because machine-learning algorithms are prone to certain kinds of errors that other types of algorithms are not prone to (and vice versa). A central apparent motivation for this project and others like it is anxiety regarding the existence of disagreements over the segmentation of the same image by different human experts. At the same time, the consensus-generating method in this case and others like it faces difficulties handling—in an epistemically satisfying way—cases in which the experts’ judgments significantly diverge from one another. I argue that this difficulty stems from a drive to always reach a consensus, which follows from an unjustified tacit assumption that there should be just one correct segmentation. I argue that different legitimate delineations of the same data may be possible in some cases due to different weighings of inductive risks or different contextually appropriate theoretical background assumptions. Consensus-generating algorithms should recognize this possibility and incorporate an option to trade off values against each other for the sake of reaching a contextually appropriate outcome. On Masks and Masking: Epistemic Injustice and Masking Disagreement in the COVID-19 Pandemic 09:00AM - 11:45AM
Presented by :
Kristen Intemann, Presenter, Montana State University
Inmaculada De Melo-Martin, Weill Cornell Medicine--Cornell University We have previously argued that masking, censoring, or ignoring scientific dissent can be detrimental for several ethical and epistemic reasons, even when such dissent is considered to be normatively inappropriate (de Melo-Martín and Intemann 2018). Masking dissent can be inappropriately paternalistic, undermine trust in experts, and make effective policy debates less fruitful. Here we explore another concern. Focusing on the case of communication about scientific information during the COVID-19 pandemic, we examine the extent to which masking disagreements among experts can result in epistemic injustices against laypersons. In an emerging public health crisis, uncertainties are high and public policy action is urgently needed. In such a context, where both policymakers and members of the public are looking to scientific experts to provide guidance, there is a great temptation for experts to “speak with one voice,” so as to avoid confusion and allow individuals, governments, and organizations to make evidence-based decisions rapidly (Beatty 2006). Reasonable and policy-relevant disagreements were masked during the pandemic in two central ways. First, scientific information with respect to particular interventions was presented in ways that masked the role of value judgments, about which disagreements existed. Interventions were thus presented as following directly from the scientific evidence. For example, decisions about whether to lock down countries, what degree of lockdown to implement, and for how long depend not only on scientific evidence about the severity of Covid-19 but on ethical, social, or political judgments about, among other things, the importance of human life and health, the significance of civil liberties, the relevance of the financial recovery, the distribution of risks, and the proper role of government. When the policies were presented as following directly from the science, the role of value judgments in reaching conclusions was obscured. This denied laypersons the opportunity to assess how alternative value judgments might have led to different conclusions. In other words, it denied them rational grounds for objecting to, or following, policies that may depend on value judgments. Second, disagreements about empirical data in assessing the efficacy or safety of interventions were also masked. For example, concerns about the consequences that minimizing the risks to some populations could have for the public’s willingness to follow recommendations led to an overemphasis on risks to children and, with it, to school closures. In these cases, masking scientific disagreement about empirical claims can deny decisionmakers access to contextualizing information that can be helpful in assessing risks that could (or could not) be reasonably taken or imposed on others. We conclude by drawing some lessons for how scientists and public health officials might communicate more effectively in circumstances where there are significant uncertainties and urgent need for action. Beatty, J. (2006). Masking disagreement among experts. Episteme, 3(1-2), 52-67. de Melo-Martín, I. and Intemann, K. (2018) The Fight Against Doubt: How to Bridge the Gap Between Scientists and the Public. New York, Oxford University Press. | ||
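For readers curious about what the amalgamation step described in Miller's abstract above might look like in the simplest case, the sketch below applies a per-pixel majority vote to several binary segmentation masks. It is a minimal illustration under assumptions made here (the function name, the equal-weight voting rule, and the toy data are all hypothetical); the algorithms used in the case study Miller analyzes are more sophisticated and typically weight delineators by estimated reliability rather than counting every delineation equally.

```python
import numpy as np

def majority_vote_consensus(masks):
    """Amalgamate binary segmentation masks into a single consensus mask
    by per-pixel majority vote (a deliberately simple stand-in for the
    amalgamation algorithms discussed in the abstract)."""
    stacked = np.stack([np.asarray(m, dtype=bool) for m in masks])
    # A pixel belongs to the consensus segmentation if at least half
    # of the delineations (human- or algorithm-generated) include it.
    return stacked.mean(axis=0) >= 0.5

# Toy example: three 2x2 delineations of the same region.
a = np.array([[1, 1], [0, 0]], dtype=bool)
b = np.array([[1, 0], [0, 0]], dtype=bool)
c = np.array([[1, 1], [1, 0]], dtype=bool)
print(majority_vote_consensus([a, b, c]))  # pixels included by at least two of the three delineations
```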
09:00AM - 11:45AM Forbes | Evolutionary Transitions in Individuality: Moving Beyond Fitness-Based Approaches Speakers
James Griesemer, Presenter, University Of California, Davis
Peter Takacs, Research Fellow, The University Of Sydney
Guilhem Doulcier, Macquarie University
Pierrick Bourrat, Macquarie University
Moderators
Carol Cleland The origins of individuality in evolution have been a major topic in both evolutionary biology and philosophy of biology over the past 30 years. New levels of individuality are the outcomes of successive processes known as evolutionary transitions in individuality (ETIs). Arguably, the most influential models of ETIs place fitness at the center of the explanation, whereby fitness is supposedly transferred from a lower to a higher level of organization during an ETI. However, recent philosophical and formal arguments have called this "transference of fitness" into question. These critiques, together with recent experimental work, have prompted the development of new approaches that look beyond fitness to the evolution of the traits that underpin ETIs and the role of ecological conditions. This symposium brings together philosophers of science, theoretical biologists, and experimentalists to rethink the conceptual landscape of ETIs in light of the latest developments in experimental and theoretical biology. Eco-developmental Scaffolding in Evolutionary Transitions: Working to Make Constraints on Developmental Reaction Norms 09:00AM - 11:45AM
Presented by :
James Griesemer, Presenter, University Of California, Davis In this speculative talk, I'm going to "think" adjacently with Stuart Kauffman's recent work on what he calls "the adjacent possible" in biological systems (Kauffman 2019). My aim is to articulate a way of thinking about the role of "environments" and behavior as the leading edge of evolutionary transitions (here: transitions in both individuality and inheritance, in a particular sense). Kauffman articulates an interesting thesis on what makes a living system: that it must be self-reproducing (in a particular sense) and carry out at least one "work cycle" (again in a particular sense). Kauffman muses that we lack mathematical theories to articulate what he considers the heart of the "problem" with the evolution of such systems: because their operation changes their own configuration spaces, there can, in a sense, be no mathematical theory in the conventional sense of dynamical systems theory (which presupposes a fixed configuration space to get the math off the ground). I agree that this is a hard problem for a science of organized, living agents. I think there may be an adjacent "less hard" problem. Kauffman observes that for the kind of living organization he discusses "it takes work to make constraints and it takes constraints to make work." I speculate that the less hard problem can be formulated by considering the production of constraints through processes of ecological scaffolding. The problem is quite as open as Kauffman’s general problem of open “niches,” but it is less hard in the sense that there may be systematic ecological patterns of developmental scaffolding that allow us to study some highly limited problems of evolution into the adjacent eco-developmental possible. I speculate that these scaffolding interactions can lead to the development of configuration spaces; hence, eco-developmental scaffolding introduces novelty into development from an environmental source. The talk will link this idea to what I have called “developmental reaction norms” in contrast to standard “ecological reaction norms” (Griesemer 2014). Differently put, step-wise nearby (adjacent) changes in configuration space can be studied by looking for systematic patterns of scaffolding constraints for these local phenomena of constraint production via scaffolding work, rather than by taking on Kauffman's completely general mathematical problem and challenge. I have no illusions that this will solve Kauffman's hard problem, but there might be some clues to the kinds of mathematics we could be looking for and the kinds of phenomena that might present less-steep empirical challenges than developing a completely new mathematics for such completely general, open empirical problems as the origins of life or the evolution of the biosphere. From Fitness-Centered to Trait-Centered Explanations in Evolutionary Transitions in Individuality: Prospect for Reconciliation? 09:00AM - 11:45AM
Presented by :
Peter Takacs, Research Fellow, The University Of Sydney A popular account of evolutionary transitions in individuality (ETIs) postulates a crucial change in the nature of fitness during an ETI. Fitness at the collective level is supposedly “transferred” or “decoupled” during the process (Michod, 2005; Okasha, 2006). Recently, this view of ETIs has been challenged on the grounds that it may be better to focus directly on traits as opposed to fitness (Bourrat et al., 2021). In this paper, I will attempt to reconcile these two views by attending to their distinct conceptions of fitness. Following one account, fitness is considered a complex trait whose measurement involves summing over the totality of an entity’s phenotypic traits (Brandon, 1990; Bouchard & Rosenberg, 2004). Following the alternative account, fitness is the long-term reproductive output of an entity (Sober, 2001). I will argue that, if one adopts the former approach to fitness, it becomes possible to understand how fitness changes during an ETI. When fitness is a complex trait composed of all the other traits of an entity, it is unsurprising that its nature changes during a transition. This follows from the fact that the selectively relevant traits of individuals often change during a transition. For instance, during the transition from unicellular to multicellular organisms, trade-offs between different traits of the unicellular organism no longer apply when they become part of a collective. However, this does not correspond to a “transfer” of fitness. It is simply a change in the relationship between traits and their environment(s). Further, it seems that this change in the nature of fitness is not reconcilable with the competing view of fitness as the long-term number of descendants. Evolutionary Transitions in Individuality, Traits and Eco-Evo-Devo: The Life Cycle as a Unifying Perspective. 09:00AM - 11:45AM
Presented by :
Guilhem Doulcier, Macquarie University Evolutionary transitions in individuality (ETIs) are often conceptualized in a static rather than dynamical way. Abstractly, once an ETI is complete, the particles or lower-level entities (e.g., genes or cells) are regarded as the “bricks” constituting the “building” of the higher-level entities or collective (e.g., chromosomes or multicellular organisms). However, this static view underplays the dynamical nature of the collective—results of interactions between lower-level entities are only described in a phenomenological way. This is particularly detrimental when studying ETIs. We propose a different view in which both particles and collectives have their own developmental (internal) and ecological (external) dynamics. One subtlety of this view is that the dynamics at both levels are intertwined in nested systems: a process described as the ecology of the cells is the development of the multicellular organism. Likewise, the development of the cells constrains their ecology and, ultimately, limits the space that can be explored by collectives. In this paper, we will focus on how regarding nested systems from the point of view of their life cycle, as a central object for the study of ETIs, can help clarify the problem of nested dynamics. First, we will show under what conditions a dynamical system can qualify as a life cycle by drawing from the Darwinian properties framework and, in particular, Godfrey-Smith’s (2009) Darwinian space. Second, we will show how focusing on the life cycle, rather than on the entity at one point of the life cycle, permits us to clarify the problems of entangled timescales and fuzzy boundaries, particularly in the context of the ecological scaffolding scenario of ETIs (Black et al., 2020; Doulcier et al., 2020). Third, we will contrast this view with some alternative formalizations of ETIs. Finally, we will outline a research program detailing how this focus could be used to tackle outstanding questions in the field. The Role of Ecology in Evolutionary Transitions in Individuality 09:00AM - 11:45AM
Presented by :
Pierrick Bourrat, Macquarie University During an evolutionary transition in individuality (ETI), lower-level entities interact in such a way that they produce higher-level entities that become new units invoked in evolutionary explanation at this higher level (Michod, 2005; Okasha, 2006). In this paper, we will argue that to understand an ETI, it is crucial to first understand the type of ecological conditions under which the formation of higher-level entities can occur. For instance, from the perspective of a cell or, more abstractly, a particle, one will ask under what environmental conditions it is advantageous to become part of, and possibly fully dependent on, a larger entity such as a multicellular organism or, more abstractly, a collective. This leads to a view of ETIs in which being part of a larger entity is regarded as a potential strategy or phenotype from the point of view of the lower level. Starting from recent works in experimental and theoretical evolution (e.g., Black et al., 2020; Hammerschmidt et al., 2014; Bourrat et al., 2021), we will provide boundary conditions on the environment for ETIs to be possible. First, we will argue that the environment must be complex, where complexity is defined by the number of discrete states and an associated probability distribution, which can be connected to the notion of entropy in information theory (a standard formulation is sketched after this session listing). Second, we will argue that these states must have a sequential order that can be defined either spatially or temporally. Finally, we will argue that the difference between sequential states must be relatively smooth, because large differences between sequential states would exceed the adaptive capacity of the entities undergoing the transition. | ||
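As a brief orientation on the information-theoretic notion invoked in Bourrat's abstract above, a textbook way to quantify the complexity of an environment with discrete states is the Shannon entropy of its state distribution. The expression below is a standard formulation offered for illustration only; it is not taken from the authors' own work, and their formalization may differ.

```latex
H(E) = -\sum_{i=1}^{n} p_i \log p_i
```

Here $E$ is an environment that can occupy $n$ discrete states, $p_i$ is the probability of the $i$-th state, and $H(E)$ is maximal when all states are equally probable.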
09:00AM - 11:45AM Duquesne | New Directions in the Science of Structural Oppression Speakers
Morgan Thompson, Universität Bielefeld
Mikio Akagi, Associate Professor Of The History And Philosophy Of Science, Texas Christian University
Shen-yi Liao, University Of Puget Sound
Daniel James, Heinrich-Heine-Universität Düsseldorf
Tereza Hendl, The University Of Augsburg And Ludwig-Maximilians-University Of Munich
Moderators
Qiu Lin, Duke University Social oppression is generally understood to be "structural": formal and informal rules and common patterns of interaction cause disparate and inequitable outcomes for members of certain social groups. However, it is common, especially in psychology and some philosophical subfields, for work to focus narrowly on features of individuals or interpersonal interactions. This symposium brings together four papers on the science of structural oppression that aim, in different ways, (1) to diagnose the causes of this tendency to examine interpersonal rather than structural phenomena, (2) to identify barriers to studying structural oppression, and (3) to recommend new avenues of research that embrace the structural character of oppression. Thus, we examine experimental measures of discrimination, hypotheses about conditions that make microaggressions harmful, and how oppression can be facilitated by artifacts and by categorization choices in demography. This symposium will present new work that grapples with the structural character of oppression rather than its personal or interpersonal manifestations, and bring attention to recent work from various disciplines that does the same. We also hope to model a valuable kind of work in philosophy of science that supports a broader project of inquiry through interdisciplinary engagement and constructive, good-faith criticism. Who Counts in Official Statistics? 09:00AM - 11:45AM
Presented by :
Daniel James, Heinrich-Heine-Universität Düsseldorf
Tereza Hendl, The University Of Augsburg And Ludwig-Maximilians-University Of Munich Debates about racism and calls for racial equality have recently surged. This shift is reflected in the EU’s expansion of its anti-discriminatory policies to include race and ethnicity as categories. To determine the extent of racial/ethnic discrimination and the success of ‘positive action’ measures, the EU recommends the collection of statistical data. Unlike the systematic investigation of racial disparities in the UK and the US, in most European countries, ‘race’ and ‘ethnicity’ are not used as statistical categories in comprehensive data collection. In Germany, reservations towards gathering racial/ethnic data and even the very term ‘Rasse’ are deep-seated due to the history of National Socialism. Instead, categories such as ‘migration background’ are used. We argued in previous research that collecting racial/ethnic data is crucial to map patterns of multi-layered disparities, discrimination, inequalities, and vulnerability, and to identify effective mitigation strategies. Building on this work, we argue that the category of “migration background” is both ethically and epistemically unsuited for this task and explore alternative approaches. First, we draw on accounts of ethical-epistemic analysis (Tuana, 2010, 2013; Katekireddi & Valles 2015; Valles forthcoming), as well as social-scientific research, to argue that the category of ‘migration background’ is both epistemically and ethically problematic. As Aikins et al. (2020) point out, it does not capture the putative racial discrimination of some racialised groups. For example, black Germans whose parents both have German citizenship by birth have no ‘migration background’ (https://afrozensus.de/reports/2020/). This suggests that demographic categories such as ‘migration background’ can render certain social groups invisible (Will, 2019). The stakes are high, as the current demographic categories disallow investigating the extent to which structural racism is a causal factor explaining racial disparities related to, e.g., health outcomes in the context of COVID-19 (Plümecke et al., 2021). Second, we draw on debates on the metaphysics of race and ethnicity to examine alternatives to ‘migration background’. Drawing on an ongoing experimental study (James et al., in progress), we provide a comparative analysis of race talk in the US and Germany. We address concerns raised by the use of racial/ethnic categories related to, e.g., privacy, and reflect on the controversy regarding the meaning of the German term for race (‘Rasse’). One metaphysical worry is whether adopting racial categories in official statistics commits us to the view that races are biologically real, a view widely held to be refuted. However, we argue that, while talk of ‘racialised groups’ may be preferable in most (including social-scientific) contexts, cautious talk of ‘Rasse’ is permissible in others. In particular, the latter may be suited to publicly communicate how racial discrimination shapes material and (physical and mental) health conditions, socioeconomic positions and overall well-being in racialised people (Hendl, Chung and Wild, 2020). Thus, we explore how conceptual ethics can inform the social-scientific and public debate over racial/ethnic classification and thereby facilitate racial/ethnic data collection that is both ethically and epistemically sound. Materialized Microaggression 09:00AM - 11:45AM
Presented by :
Shen-yi Liao, University Of Puget Sound Microaggressions, as defined by psychologist Derald Wing Sue, are “the brief and commonplace daily verbal, behavioral, and environmental indignities, whether intentional or unintentional, that communicate hostile, derogatory, or negative racial, gender, sexual-orientation, and religious slights and insults to the target person or group” (Sue 2010: 5; see also Sue et al. 2007). Amongst these three different mechanisms, environmental microaggression is most rarely discussed and most poorly understood. On Regina Rini’s (2021: 21) characterization, “Environmental microaggressions are distinctive in that they don’t involve any particular perpetrator. They are background facts that regularly confront marginalized people with casual disregard or disdain.” Sue’s own examples tend to primarily involve social arrangements, and his own analyses tend to primarily be about messages communicated. For example, he talks about his own experience, as a non-white person, going into a room of university administrators, all of whom are white, and how this monochromatic scene sends the message “You and your kind are not welcome here.” (Sue 2010: 25–26). However, I contend that the background facts that regularly confront marginalized people are often not only social, but also material. For example, automatic soap dispensers that work less well for darker-skinned users also realize environmental microaggressions. Such objects and spaces are not merely biased against some class of users, but also a downstream consequence of injustice toward an oppressed group and an upstream antecedent of further injustices. That is, they are oppressive things that don’t simply reflect or reveal past injustices; they also perpetuate them: in particular, they do so because—on a broadly 4E cognition perspective—our thoughts and actions causally depend on, and may even be partially constituted by, our cognitive environment (Liao and Huebner 2021). My proposal can be thought of as a generalization of Alison Reiheld’s (2020) account of microaggressions as a disciplinary technique for fat bodies. Specifically, Reiheld cites furniture that is not built to fit fat bodies as an example of such environmental microaggressions. But the world is also full of similarly materialized microaggressions against oppressed groups along axes of race, gender, disability, etc., and their intersections. These objects and spaces have the same epistemic profile as verbal and behavioral microaggressions. People in the oppressive group tend not to notice the existence of such materialized microaggressions. But even people in the oppressed group tend not to notice their systematicity: that is, they might be inclined to explain away individual interactions as isolated incidents. In particular, the connections between materialized microaggressions and other manifestations of oppression remain opaque. Taking seriously the materiality of environmental microaggressions also challenges Sue’s own understanding of the concept. While he focuses on social arrangements and symbolic communication, a materialist conception of environmental microaggressions emphasizes their role in scripting thoughts and actions. To address environmental microaggressions, we do not need training sessions with consultants; instead, we need to gradually, but quite literally, remake the world. A Structural Microaggression Concept for Causal Inquiry 09:00AM - 11:45AM
Presented by :
Mikio Akagi, Associate Professor Of The History And Philosophy Of Science, Texas Christian University Microaggressions have received increasing attention in recent decades because, although individually they may seem minor, they are hypothesized to have significant harmful psychological and social effects in aggregate. However, correct usage of the term “microaggression” is contested; authors across disciplines defend a variety of inconsistent accounts. Psychologists, moral philosophers, and other scholars (e.g., Sue et al., 2007; Rini, 2020; McTernan, 2018; Pérez Huber and Solorzano, 2015) construct definitions or glosses in service of their varied investigative aims, which include the assessment of moral responsibility, near-term institutional reform, and the construction of anti-oppressive phenomenologies. However, I argue that these accounts are not so well-suited to guide empirical research about how microaggressions cause certain social ills. I propose a pluralist account of microaggressions that builds on these extant accounts while facilitating causal-explanatory inquiry. Public health researchers (e.g., Gee and Ford, 2011) have found that inequitable structural outcome gaps—like the U.S. racial health gap, racial and gendered income gaps across the world, gaps in mental health outcomes, etc.—are not fully explained by correlated factors like socioeconomic status. Microaggressions are thought to contribute to these outcome gaps, but there are outstanding questions about precisely what causal role microaggressions play. Let the “explanatory project” be the effort to answer these questions. I recommend a pluralistic causal-role account: microaggressions are whatever interpersonal and institutional factors explain the outcome gaps. I identify a number of independent (but not mutually exclusive) hypotheses about mechanisms that might fulfill this causal role, including attributions to prejudice, attributional ambiguity, plausible deniability of discrimination, and implicit bias. These hypotheses correspond to emphases in various extant accounts of microaggressions, and suggest distinct kinds of interventions. While a priori disputes about these accounts can serve some investigative aims, the explanatory project can only be resolved satisfactorily through future empirical work. Most extant accounts of microaggression are poorly suited to the explanatory project, because they are crafted to accommodate, rather than to overcome, our present epistemic limitations. In particular, most extant accounts take as given various social controversies or gaps in our understanding of microaggressions and the causal role they play in individual well-being and structural oppression. Rini (2020) objects to “structural” accounts like mine on the ground that they lack epistemic humility; in particular, that they are inconsistent with attractive versions of standpoint epistemology. I argue that Rini’s arguments presuppose her project—assessing moral responsibility—and that her discussion artificially forecloses empirical possibilities because of epistemic constraints that apply to individuals, but not to research programs. It is possible that when we better understand the mechanisms that cause outcome gaps, we may decide that they are so diverse that they do not merit being grouped under a common label. That is, my account may eventually be self-eliminating.
But I do not recommend the wholesale elimination of the term “microaggression,” since the term can still function as a tool outside of the explanatory project, e.g. in phenomenological or moral accounts. Path Dependence in Psychological Measures of Racial Discrimination 09:00AM - 11:45AM
Presented by :
Morgan Thompson, Universität Bielefeld Early scales developed to measure experiences of everyday racial discrimination employ an interpersonal schema of the racial discrimination construct (e.g., McNeilly 1996; Williams 1997; Krieger et al. 2005). For example, the Perceived Racism Scale conceives of racial discrimination as “a belief or attitude that some races are superior to others and discrimination based on such a belief” (McNeilly 1996, 155). However, racial discrimination is now recognized to be a more expansive construct that includes structural oppression. Measures of discriminatory experiences of structural oppression lag behind. In this talk, I argue that the concept of path dependence is useful for understanding how measurements of everyday racial discrimination in psychology and sociology have (1) developed to primarily measure features of interpersonal discrimination and (2) frequently failed to measure cases of structural discrimination. Path-dependent systems are those in which a particular change in a system’s history reinforces its development along particular “paths” and hinders other “paths”. The concept of path dependence gained traction in economics, social science, and science and technology studies (e.g., David 2001) to analyze why some standards set by a previous technology are perpetuated in future designs, even when alternatives may be as good or better. The classic but contested example is the QWERTY organization of keyboards (David 1985; Leibowitz and Margolis 1990). According to David (1985), the QWERTY layout was chosen to solve a technical issue with previous typewriter designs, namely, putting commonly used letters further apart to avoid errors when adjacent type bars are pressed in sequence. While we no longer face this technical problem, QWERTY layouts persist even when superior alternatives exist, such as the Dvorak layout. Path dependence can help explain why interpersonal scales have dominated the measurement of racial discrimination in psychology and sociology. The features of measurement validation reinforce the early focus on an interpersonal schema of the racial discrimination construct because earlier published scales become standards by which to evaluate future scales. When multiple measurement scales are intended to measure the same construct, we assess their convergent validity: the extent to which both measures do in fact measure the same construct. In psychology, convergent validity is often measured by assessing correlation coefficients of the scores on those measures (an illustrative sketch of such a check follows this session listing). There is no explicit rule to determine what strength of correlation is required, so judgments are made implicitly and case-by-case. Further, an imperfect correlation is to be expected: noise in the data is expected and different scales ought to have some relevant differences. Small differences in scales can sometimes lead to importantly distinct results (e.g., whether one or two questions are used for race and ethnicity in the U.S. Census would impact the number of self-identifying Hispanic people; Valles 2021). Thus, when implicit judgments require high convergent validity between existing scales and new scales, path dependence can emerge. Applying this analysis to the case of everyday racial discrimination measures, this path dependence has led to a focus on interpersonal discrimination over structural discrimination. | ||
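To make the convergent-validity check described in Thompson's abstract above concrete, the sketch below computes a Pearson correlation between respondents' total scores on two discrimination scales. Everything here is a hedged illustration: the scores are fabricated toy data, and the 0.7 cut-off is an arbitrary assumption, since, as the abstract stresses, there is no explicit rule fixing how strong the correlation must be.

```python
import numpy as np

# Hypothetical total scores for the same eight respondents on two scales
# intended to measure everyday discrimination (illustrative data only).
scale_a = np.array([12, 18, 25, 9, 30, 22, 15, 27])
scale_b = np.array([10, 20, 24, 11, 28, 25, 14, 26])

# Pearson correlation between the two sets of scores.
r = np.corrcoef(scale_a, scale_b)[0, 1]

# In practice the required strength is an implicit, case-by-case judgment;
# 0.7 is used here purely for illustration.
verdict = "acceptable" if r >= 0.7 else "questionable"
print(f"convergent validity r = {r:.2f} -> {verdict}")
```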
09:00AM - 11:45AM Fort Pitt | Equivalence, Reality, and Structure in Physics Speakers
Jill North, Rutgers University
Clara Bradley, University Of California Irvine
Isaac Wilhelm, National University Of Singapore
Neil Dewar, Speaker, University Of Cambridge
Moderators
Noel Swanson, Reviewer, University Of Delaware This proposed symposium brings together four philosophers of physics to discuss the interrelated web of topics surrounding theoretical equivalence in physics, the structure of physical theories, and the interpretation of those theories. In particular, it offers a variety of perspectives on the recent flowering of work on formal notions of equivalence: the different symposium participants give different accounts of the significance of such work, and of where that significance derives from. It also addresses topics such as the role of symmetry in physics, whether isomorphic models must represent the same physical situation, and the role of convention in physical theorising. Equivalence and Convention 09:00AM - 11:45AM
Presented by :
Neil Dewar, Speaker, University Of Cambridge Teitel (2021) argues that formal approaches to equivalence cannot be illuminating, since they run afoul of trivial semantic conventionality: the idea that “any representational vehicle can in principle be used to represent the world as being just about any way whatsoever.” In this paper, I consider the relationship between theoretical equivalence and convention. First, I review the notion that conventions may give rise to equivalences: in particular, that we should regard theories differing “merely by a choice of convention” as equivalent to one another. I consider the importance of this idea both for the debates over geometrical conventionalism and for Carnap’s Principle of Tolerance (Carnap 1934). However, this idea encounters difficulties when we confront it with the following question: can there be different conventions about equivalence? That is, consider two communities which use the same representational vehicles; but whereas one community regards those vehicles as equivalent when they stand in a certain relation, the other community does not. On the face of it, affirmation of the Principle of Tolerance would appear to force the Carnapian to side with the former community, suggesting that tolerance has its limits—in other words, that it cannot tolerate the intolerant. I suggest that this conclusion is too hasty. A thoroughgoing Carnapianism is, in fact, possible. To maintain it, we should apply the Principle of Tolerance at two different ‘levels’. On the one hand, at the meta-level, it instructs us to regard these two communities as working in different frameworks, which differ from one another over which inferences are permissible but not over substantive matters of fact. On the other hand, at the object-level, it constitutes an advertisement for the more liberal framework: here, it amounts to the observation that admitting more equivalences, and hence permitting more inferences, has pragmatic advantages. I return to the question of trivial semantic conventionality. If we subscribe to the Principle of Tolerance, do we risk collapsing to the position where all representational vehicles whatsoever have the same content? I argue that this is not the case, and that the above analysis indicates why. Although the Principle of Tolerance is presented as a general injunction to regard frameworks as equivalent, there is always the possibility of semantically ascending; when we do so, we are considering the merits of a (meta-)framework that regards those frameworks as equivalent, relative to one that does not do so. And although the (object-level) Principle of Tolerance constitutes a pragmatic advantage for the more tolerant framework, it may be outweighed by other pragmatic considerations. I conclude by suggesting that this gives us the resources to understand the significance of formal work on theoretical equivalence. The existence of an appropriate formal relationship between two theories indicates that there are not significant pragmatic advantages to distinguishing them—and in particular, suggests that regarding the two theories as equivalent will not obstruct our empirical theorising. Hence, a formal equivalence is what licenses application of the (object-level) Principle of Tolerance in cases such as these. Theoretical Equivalence Made Easy 09:00AM - 11:45AM
Presented by :
Isaac Wilhelm, National University Of Singapore I formulate an account of theoretical equivalence for effective quantum field theories. To start, I propose the ‘Easy Ontology’ approach to interpreting what effective theories say about the physical world. Then I show how the Easy Ontology approach can be used to articulate an account of theoretical equivalence. Finally, I discuss the relationship between this account of equivalence and accounts based on formal, mathematical structures. According to the Easy Ontology approach, for each energy level Λ and each effective theory T_Λ at that energy level, the physical propositions described by T_Λ are propositions about quantum fields, particles composed of quanta, physical interactions among fields and particles, transition amplitudes, correlation functions, structures, and so on, all of which interact at Λ. Some effective theories are formulated using terms that denote physical fields rather than physical particles; the contents of those theories include propositions that mention fields only. Other effective theories are formulated in terms that denote physical fields and physical particles both; the contents of those theories include propositions that mention fields and particles. There are further questions, of course, about which of these theories are true, which are more fundamental, which theories' mathematical expressions provide the most perspicuous representation of reality, and so on; and the Easy Ontology approach does not answer questions like these. But with regard to the question of what these theories actually say about the world, there is—according to the Easy Ontology approach—a very straightforward answer: what exists, according to these theories, is whatever these theories say exists. The Easy Ontology approach can be used to formulate an attractive account—call it the ‘Same Content’ account—of theoretical equivalence among effective theories. According to the Same Content account, effective theories T_1,Λ and T_2,Λ are equivalent just in case they describe exactly the same class of physical propositions. So T_1,Λ is equivalent to T_2,Λ just in case the physical propositions they describe—as given by the Easy Ontology approach—are the same. The Same Content account preserves certain basic insights of the literature on formal structures, formal symmetries, and theoretical equivalence. According to some effective theories, various formal, mathematical structures correspond to structures in the physical world. And the Same Content account implies that for two such theories to be equivalent, those physical structures must be identical. But according to the Same Content account, there is more to theoretical equivalence than any one formal criterion captures. (In this way, Same Content respects both a view of equivalence discussed by North (2021) and a pluralist approach to equivalence discussed by Dewar (2019).) In fact, even isomorphism is not sufficient for equivalence: two effective theories may be mathematically isomorphic, and yet be non-equivalent, according to the Same Content account. (In this way, Same Content respects the view—a version of which is discussed by Bradley (this symposium)—that isomorphic theories may be physically distinct.) Theoretical equivalence is, ultimately, a matter of sameness of theoretical meaning, not (solely) a matter of formal correspondence. The Representational Role of Sophisticated Theories 09:00AM - 11:45AM
Presented by :
Clara Bradley, University Of California Irvine How should one remove "excess structure" from a physical theory? Dewar (2019) presents two ways to undertake such a task: first, one could move to a reduced version of the theory, where one defines the models of the theory only in terms of structure that is invariant under the symmetries of the theory. Second, one could move to a sophisticated version of the theory, where symmetry-related models of a theory are isomorphic. Dewar argues that despite these alternatives attributing the same structure to the world, the sophisticated version can have explanatory benefits over the reduced version. Here, I consider a different argument that might be offered in favour of the sophisticated view: the differences between isomorphic models in a sophisticated theory can have representational significance. Thus, a reduced version of a theory may be unable to distinguish physical situations that can be distinguished in the sophisticated version of the theory. That isomorphic models can be used to represent distinct situations has been argued before (for instance, Belot (2018); Fletcher (2018); Roberts (2020)). However, these arguments do not directly show that the reduced version of a theory is unable to represent such situations. Indeed, if the reduced version of a theory posits the same structure as the sophisticated alternative, how can the sophisticated version represent a greater number of physical possibilities? I will argue that this tension can be resolved by considering more carefully the ways that isomorphic models can be used to represent distinct situations. This will provide support for the claim that the sophisticated version of a theory can be representationally advantageous relative to its reduced version, without rejecting the claim that the sophisticated and reduced versions of a theory ascribe the same structure. In particular, I will argue that the sense in which isomorphic models can represent distinct situations is that one has the freedom to define additional structure within the models of the theory that can be used to represent a physical standard of comparison. It is the ability to define this additional structure that may be lacking in the reduced version of the theory, since one may not have the same representational freedom within the models of the reduced theory. I will use this argument to make a more general claim: the language that one uses to define the models of a theory is not always merely a conventional choice. One may choose one way of representing the models of a theory because it provides one with the resources to define additional structure that plays a physical role, even though this structure is not part of the structure of the theory. I suggest that more attention should therefore be paid to the role that model theory plays in determining which version of a theory should be adopted. A Middle Way on Theoretical Equivalence 09:00AM - 11:45AM
Presented by :
Jill North, Rutgers University There has been a widening divide between two broad approaches to theoretical equivalence in physics, that is, to what we mean when we say that two physical theories are fully equivalent, saying all the same things about the world but perhaps in different ways. On the one side is the formal approach to equivalence. Formal accounts say that physical theories are equivalent when they are formally or structurally or mathematically equivalent (in addition to being empirically equivalent). Proponents then work on figuring out which formal notion is the right one. On the other side is a growing resistance to formal approaches. Opponents note that physical theories consist of more than their formal apparatus, so that questions concerning the equivalence of theories must involve more than their formal features. They point to cases of theories that are equivalent in various formal and empirical respects, but nonetheless differ in what they say about the world. Some have gone so far as to conclude that the formal results being generated have no significance beyond pure mathematics. I advocate a middle ground. A formal equivalence of the right kind is important to questions of equivalence in physics: this is necessary (if not sufficient) for wholesale theoretical equivalence, as we can see in some familiar examples. Moreover, it is not immediately clear, and is worth investigating, what type of formal equivalence is relevant to reasonable judgments of equivalence in physics. At the same time, since any formalism can be made to represent any kind of physical reality simply by brute stipulation, it’s also not right to claim that a formal equivalence must be physically significant in all cases, without further ado. A more nuanced position is in order. In actual scientific practice and theorizing, the choice of formalism is not completely up for stipulational grabs, in that there are better and worse choices of representational vehicle, given standard theoretical criteria: there are good scientific reasons for choosing one formalism over another. Some interpretive and physical stipulations will be made, that is, but certain choices of formalism will be more natural or well-suited than others, given those assumptions. (We should distinguish between the equivalence of descriptions, in the sense of their saying or representing the same things, and the relative naturalness or well-suitedness of descriptions, in the sense of their saying or representing those things in better or worse ways.) As a result, we can learn things of physical significance by examining a theory’s (best) formulation, and its formal relationships to other mathematical formulations. We just have to be careful to mind the stipulations we make, and to be explicit about the different respects, formal and not, in which theories can be equivalent to one another. I will give examples of familiar, reasonable judgments of equivalence and naturalness in physics to illustrate all this; suggest that, properly understood, a structural equivalence of some kind is necessary for wholesale equivalence in physics; and draw connections to the structured view of scientific theories recently advocated by Hans Halvorson. | ||
09:00AM - 11:45AM Smithfield | Physical Signatures of Computation Speakers
Neal Anderson, Presenting Co-Author, University Of Massachusetts Amherst
Gualtiero Piccinini, UMSL
David Barack, Presenting Author, University Of Pennsylvania
Paula Quinon, Presenting Co-Author, Warsaw University Of Technology
Paweł Stacewicz, Presenting Co-Author, Warsaw University Of Technology
J. Brendan Ritchie, Presenting Author, National Institute Of Mental Health
Danielle Williams, Graduate Student, University Of California, Davis
Moderators
Gualtiero Piccinini, UMSL In recent years, a new generation of scholars have begun searching for physical signatures of computation. That is, they have begun investigating what it takes for a physical system to implement a computation with unprecedented attention to the scientific practices involving computation, including computer science and engineering, in hopes of identifying physical differences between systems that implement computations and systems that don't. This symposium will introduce recent progress in this area to a wider audience. It will illustrate how our understanding of physical computation has deepened and become more sophisticated and how it can be informed by scientific practices that were not on the horizon of most philosophers of science until quite recently. The participants in this symposium are also engaged in a lively debate with one another, which will stimulate both them and the audience to make further progress. Providing an adequate account of computational implementation has real implications for the foundations of the computational theory of cognition, the notion of biological computation, the construction of novel forms of computers, the foundations of physics, and more. Thus, this symposium is likely to attract a wide audience. Implementation, individuation, and triviality in computational theories 09:00AM - 11:45AM
Presented by :
Danielle Williams, Graduate Student, University Of California, Davis Distinguishing between physical systems that compute and those that do not requires an explanation that posits the relation between the formal concept of computation and the physical implementing system. There is confusion about how an answer to the implementation question is to be articulated, leading to claims about implementation that are not sufficiently distinguished from claims about individuation. I argue that confusions about computational triviality have, in part, given rise to this conflation. In this paper, I demonstrate that there are two distinct types of triviality: a trivialization of the implementation relation and a trivialization of the individuation conditions. Some Myths of Symbolic Computation 09:00AM - 11:45AM
Presented by :
J. Brendan Ritchie, Presenting Author, National Institute Of Mental Health It is shown that supposedly paradigmatic examples of classical architectures do not contain local representations. In particular, Turing Machines (TMs) carry out transformations over sub-symbols, where only the initial and final states may involve interpretable strings. In contrast, examples of computing systems with local representations lack the coding efficiency that is claimed to be paradigmatic of classical architectures. Thus, distributed, sub-symbolic computation should also be considered a hallmark of classical architectures. In light of this and other commonalities, it is proposed that the traditional divide between connectionist and classical architectures is more apparent than real. Analog Computation, Continuous or Empirical: The perspective of Carnapian Explication 09:00AM - 11:45AM
Presented by :
Paula Quinon, Presenting Co-Author, Warsaw University Of Technology
Paweł Stacewicz, Presenting Co-Author, Warsaw University Of Technology We discuss two different ways that the term “analog” (as opposed to “digital”) is used in the methodology of computer science and those engineering disciplines that are related to computer science. We show that formal models of computation on real numbers provide, indeed, an explication of what corresponds to the intuition that certain devices operating on continuous quantities perform computations. We call this “the analog continuous thesis” (“the AN-C thesis”), and we show how it is similar to other theses used to explicate computation, such as the Church-Turing thesis or the Cobham-Edmonds thesis. The Robust Mapping Account of Implementation 09:00AM - 11:45AM
Presented by :
Gualtiero Piccinini, UMSL
Neal Anderson, Presenting Co-Author, University Of Massachusetts Amherst According to the robust mapping account we propose, a mapping from physical to computational states is a legitimate basis for implementation only if it includes only physical states relevant to the computation, the physical states have enough spatiotemporal structure to map onto the structure of the computational states, and the evolving physical states bear neither more nor less information about the evolving computation than do the computational states they map onto. When these conditions are in place, a physical system can be said to implement a computation in a robust sense, which does not trivialize the notion of implementation. Computation with Neural Manifolds 09:00AM - 11:45AM
Presented by :
David Barack, Presenting Author, University Of Pennsylvania Recent research in cognitive neuroscience has uncovered so-called neural manifolds that play a central role in explanations of behavior. Revealed through the use of a range of dimensionality reduction techniques, these manifolds are entities in low-dimensional spaces contained in high-dimensional neural spaces. In this paper, I explore a possible computational interpretation for the role of manifolds in cognition. I argue that manifolds provide evidence for what neural computations are performed. I then turn to argue that manifolds also provide evidence for how inputs are transformed into outputs during neural computation. | ||
09:00AM - 11:45AM Sterlings 2 | Values and Scientific Institutions Speakers
Kevin Elliott, Michigan State University
Justin Biddle, Presenter, Georgia Institute Of Technology
Heather Douglas, Presenter, Michigan State University
Manuela Fernández Pinto, Presenter, Universidad De Los Andes
Eduardo Martinez, University Of Cincinnati
Moderators
Matthew Brown, Cognate Societies Chair, Southern Illinois University Scholarship on values in science has exploded in recent years. Nevertheless, with the exception of some work on topics like patent policies, funding structures, and corporate influences on science, most scholarship on science and values has focused on the influences of values on individual scientists. The time is now ripe for philosophers of science to turn greater attention to the ways that institutional structures shape values in science. This symposium approaches the topic of values in scientific institutions from four different angles. First, it provides an overview of the variety of different ways in which institutions can influence values in science. Second, it explores how organizations can influence the values embedded in scientific research through their aims, structure, and culture. Third, it considers the kinds of institutions that are best suited to fostering and supporting ethical research. Fourth, it examines the ways in which many different kinds of institutions (e.g., funding, regulatory, political, and academic) all tend to promote private, commercial interests in scientific research. Together with a commentary provided from the perspective of political philosophy, the papers in this symposium provide a roadmap for future efforts to broaden the literature on values in science to encompass the institutional level. Commentary 09:00AM - 11:45AM
Presented by :
Eduardo Martinez, University Of Cincinnati This talk will provide commentary from the perspective of political philosophy about the other papers in the session. In particular, I will apply insights from democratic theory about the nature of representation and its corresponding responsibilities, the epistemic and moral value of deliberation, and the benefits of institutional design aimed at safeguarding freedom from domination. Organizations and Values in Science and Technology 09:00AM - 11:45AM
Presented by :
Justin Biddle, Presenter, Georgia Institute Of Technology This presentation explores the relationship between values in scientific and technological research, on the one hand, and features of the organizations that conduct that research, on the other. Organizational features to be highlighted include organizational aims and strategies, organizational structure, and organizational culture; case illustrations will be drawn from data sciences and machine learning. The relationship between values and organizations is under-studied by philosophers of science, and the framework developed in this presentation can provide a starting point for further research into organizational levers for the management of values in science and technology. Institutions and the Division of Ethical Labor in Science 09:00AM - 11:45AM
Presented by :
Heather Douglas, Presenter, Michigan State University Current institutional structures for ethics in science focus on oversight: gatekeeping or regulatory compliance. These structures ensure that scientists make the ethical decisions deemed appropriate and sanction those who do not; they are viewed as external to the research agendas scientists choose to pursue and as impediments that must be overcome to get on with doing science. Senses of societal responsibility have shifted in the 21st century, and it is imperative to craft institutions that meet the full scope of those responsibilities while bringing this work into the heart of science. Pandemic Science and Commercial Values: An Institutional Account for Values in Science 09:00AM - 11:45AM
Presented by :
Manuela Fernández Pinto, Presenter, Universidad De Los Andes Acknowledging that scientific research today is mainly conducted in the private sphere with commercial interests (or values) in mind is crucial for understanding the roles of values in science today, as well as for imagining ways of counteracting some of the undesirable influences of such values. In my contribution to this symposium, I continue to argue in this direction, showing how different institutional frameworks (i.e., funding, regulatory, political, and academic institutions) tend to promote primarily commercial and private interests, even in situations with high social stakes. A Taxonomy for Studying How Institutions Shape Values in Science 09:00AM - 11:45AM
Presented by :
Kevin Elliott, Michigan State University Most previous scholarship on the topic of values in science has focused on individuals. The time is now ripe to study how values permeate science through institutional systems. In order to move this scholarship forward, the present paper develops a taxonomy of major ways in which institutional systems can shape the influences of values on scientific practice. To do so, it examines a case study involving debates about the clinical practice guidelines for preventing and treating Lyme disease. Future research efforts can use this taxonomy as a guide for exploring values in science at an institutional level. | ||
09:00AM - 11:45AM Sterlings 1 | Data, Dogma, or Duty? Conservation Science and the Role of Ethical Values Speakers
Evelyn Brister, Rochester Institute Of Technology
Soazig Le Bihan, University Of Montana
Stefan Linquist, University Of Guelph
Jay Odenbaugh, Lewis & Clark College
Roberta Millstein, University Of California, Davis
Moderators
Katherine Valde, Wofford College The topic of values in science has recently become central to the philosophy of science. Initially, the debates were over whether and what sorts of values are present in the sciences. For example, are they epistemic or non-epistemic? However, if one grants that non-epistemic values find their way into scientific practice, then what role should they play? For example, given the problem of inductive risk, are non-epistemic values inescapable? Ecology and conservation biology are especially relevant for these debates since they have always stood in the breach between "pure" and "applied" concerns. Early conservation biologists such as Michael Soulé argued that conservation biology is a "crisis science" inherently laden with ethical values (Soulé 1985). Though subsequent debates have challenged Soulé over his advocacy for certain values, critics have granted that conservation biology is value-laden (Kareiva and Marvier 2012). In this session, we explore the role of values in ecology and conservation biology. The coevolution of science and values in Aldo Leopold's thinking 09:00AM - 11:45AM
Presented by :
Roberta Millstein, University Of California, Davis Michael Soulé, co-founder of the Society for Conservation Biology and its first President, is widely considered to be the founder of conservation biology (Sanjayan, Crooks, and Mills 2000). In setting out his vision for the field, Soulé argued that it is a crisis-oriented discipline like cancer biology, which, he suggested, implies that ethical norms are an inherent part of conservation biology. He stated that the ethical norms include value judgments such as the postulate that the "diversity of organisms is good," which "cannot be tested or proven" (Soulé 1985, 730). In coming to these views, Soulé cited several scholars who influenced him, including Aldo Leopold, an early- to mid-20th-century hunter, forester, wildlife manager, conservationist, and professor who has likewise been extremely influential in conservation biology and related fields. Indeed, many of the ideas that Soulé described have precedents in Leopold's thinking. Leopold famously stated that the land "has value in the philosophical sense" and that "an ethical relation to land" should have "a high regard for its value" (Leopold 1949, 223). Moreover, Leopold explicitly compared health in humans and health in the land, suggesting that the science of doctoring the land had not really begun yet (Leopold 1949). Meanwhile, the ecologist typically "lives alone in a world of wounds" as a "doctor who sees the marks of death in a community that believes itself well" (Leopold 1947). Given his foundational status and his explicit commitments to value-driven science, Leopold is a promising figure to examine in trying to understand the role of values in conservation biology. I argue that Leopold's scientific beliefs and value beliefs co-evolved over the course of his life. Values were present in the very building blocks of Leopold's understanding of the environment. From his father Carl Leopold, Aldo learned to appreciate and enjoy the natural world; his father also impressed upon him a hunter's ethics (Meine 2010, 18). By the end of his life, the hunter of prey and the eradicator of predators had become a defender of predators and other species as well as of land communities as wholes. He groped toward understanding land health as the goal of a land ethic as well as the ecological mechanisms that underlay it. This extends the picture of the relation of science and values described by Elizabeth Anderson (2004), which suggests that "if values can legitimately influence empirical theories, then empirical theories can legitimately influence our value judgments" (Anderson 2004, 2). According to Anderson, facts can count as evidence for value judgments, and value judgments can help us see certain facts. For Leopold, this bidirectional influence occurred over time, which, although perhaps not Soulé's "test" or "proof," arguably offered an informed living laboratory in which both his (sometimes entangled) science and values could advance. This examination of Leopold's trajectory will inform how we might think of the role of values in conservation biology as well as how we might think of the role of values in science more generally. Biodiversity as stealth policy advocacy 09:00AM - 11:45AM
Presented by :
Stefan Linquist, University Of Guelph The conservation ecologist Robert Lackey (2005, 2013) describes stealth policy advocacy as a strategy deployed in the pursuit of "policy-based science." As a proponent of the value-free ideal, Lackey argues that the adoption of ethical values by scientists (in a professional capacity) undermines their credibility and erodes public trust. Stealth policy advocacy is especially pernicious, he adds, because it presents scientific concepts as if they were purely empirical when, in fact, they "contain tacit policy preferences and thus, by extension, promote particular policy options" (2005). Lackey cites ecosystem health and alien species as examples of stealth-policy concepts that should be eliminated. The first part of this paper argues that stealth policy advocacy should be resisted by both proponents and critics of the value-free ideal. Critics like Longino (1996), Douglas (2009), and Elliott (2020) point to legitimate roles for values in hypothesis confirmation and conceptual framing. However, in order to avoid systemic bias, these values must be transparent and open for debate, not tacitly disguised as value-neutral. The second part of this paper presents survey evidence (X-Phi) from a sample of practicing ecologists and conservation managers. I show that they employ two conceptions of biodiversity: a value-neutral conception that equates biodiversity with the diversity of units at some level (e.g., species richness), and a value-laden conception that equates biodiversity with the diversity of units plus integrity/naturalness, a normative property. Individual scientists' reliance on a given conception was not explained by their subdiscipline, research focus, level of professional standing, preferred explicit definition of biodiversity, or explicit reasons for valuing biodiversity. The coexistence of these two conceptions promotes equivocation and makes it practically difficult to tease apart the normative from the empirical. At the same time, the presence of a purely empirical concept alongside one that is normatively "thick" allows conservation scientists to deflect calls for elimination of the value-laden concept (Santana 2014). Specifically, it becomes possible to strategically deploy the value-laden concept while reverting, under scrutiny, to the value-neutral concept as a sort of alibi. I see this predicament as pointing to both the desirability and the impracticality of biodiversity eliminativism. Douglas, Heather. 2009. Science, Policy, and the Value-Free Ideal. University of Pittsburgh Press, Pittsburgh. Elliott, Kevin C. 2020. Framing conservation: "biodiversity" and the values embedded in scientific language. Environmental Conservation 47: 260-268. Lackey, Robert T. 2003. Appropriate use of ecosystem health and normative science in ecological policy. pp. 175-186. In: Managing for Healthy Ecosystems, David J. Rapport, William L. Lasley, Dennis E. Rolston, N. Ole Nielsen, Calvin O. Qualset, and Ardeshir B. Damania (eds), Lewis Publishers, Boca Raton, Florida. Lackey, Robert T. 2016. Keep science and scientists credible, avoid stealth policy advocacy. The Bulletin of the Ecological Society of America 46: 14-16. Longino, Helen E. 1996. Cognitive and non-cognitive values in science: Rethinking the distinction. pp. 39-58. In Feminism, Science and the Philosophy of Science, L. H. Nelson (ed). Kluwer Academic Publishers: Great Britain. Santana, Carlos. 2014. Save the planet: Eliminate biodiversity. Biology & Philosophy 29: 761-780. 
Science is no Democracy 09:00AM - 11:45AM
Presented by :
Soazig Le Bihan, University Of Montana Philosophers agree that the value-free ideal is neither an accurate nor a desirable model for science. Science typically requires non-epistemic value judgments. Debates remain as to what kinds of non-epistemic values can legitimately influence science, and in what ways. One proposal is that "when scientists must appeal to nonepistemic values in the course of their work, they should appeal to democratic values—roughly, the values held by the public or its representatives" (Schroeder 2021, 2; see also Intemann 2015). In this paper, I argue that this view fails to solve the problems it aims to solve while raising other serious issues. The paper relies on two case studies: the controversy over wolf population management in the Yellowstone area and the long-lasting dispute over management of the National Bison Range on the Flathead Reservation. The democratic-endorsement view is supposed to solve two problems: 1. Legitimacy. Scientific research influences lay people's daily lives, either through policy or through direct, pervasive impact. Such influence qualifies as non-epistemic authority over the people. Legitimate authority in a liberal democratic society is grounded in democratic endorsement. Hence, non-epistemic value influence in science is legitimate only if it appeals to democratically endorsed values. 2. Public Trust. The fall of the value-free ideal has eroded public trust. If non-epistemic value judgments influence science, then the people will and ought to trust science only if such values are representative of their own values. Democratic endorsement is thus the best ground for warranted public trust in science. The democratic-endorsement view most likely fails to solve the problems above. It relies on the assumption that our society can find some overlapping consensus regarding non-epistemic values, which can then ground scientific consensus and public trust. Unfortunately, such consensus is likely lacking, especially in ecology and conservation biology. The controversy over Yellowstone wolf population management is a case in point (Smith et al. 2016). Value conflict and polarization undermine democratic consensus and the democratic-endorsement view. The view also raises some serious issues, most prominently the issue of marginalization. Democratic consensus most likely ignores the needs and wants of historically marginalized communities. A study of the controversy over the management of the National Bison Range on the Flathead Reservation shows that democratically endorsed values may build upon a history of wrongful prejudice (Upton 2014). The democratic-endorsement view also faces the issue of non-neutral expertise. It is often impossible (and misguided) to be both informed and neutral. Expertise informs non-epistemic value judgments. The public is often in no position to make properly informed value judgments. An alternative to the democratic-endorsement view is called for, one that respects both expertise and the public's interests, including those of minorities. Schroeder, S. Andrew. 2021. Democratic values: A better foundation for public trust in science. The British Journal for the Philosophy of Science, 72(2), 545-562. Intemann, Kristen. 2015. Distinguishing between legitimate and illegitimate values in climate modeling. European Journal for Philosophy of Science, 5, 217–232. Upton, Brian. 2014. Returning to tribal self-governance partnership at the National Bison Range Complex: Historical, legal, and global perspectives. 35, 51-146. 
Smith, Douglas W., White, P. J., Stahler, Daniel R., Wydeven, Adrian, and Hallac, David E. 2016. Managing wolves in the Yellowstone area: Balancing goals across jurisdictional boundaries. Wildlife Society Bulletin, 40(3), 436-445. Values in Conservation Science: Deliberation and Practice 09:00AM - 11:45AM
Presented by :
Evelyn Brister, Rochester Institute Of Technology Over the past twenty years, philosophers of science have given sustained examination to the role that values play in science, identifying positive roles for both cognitive and noncognitive values. These include value judgments made during research, such as the conceptualization of phenomena, data selection and analysis of results, as well as in applications of scientific knowledge for policy. Consequently, many philosophers have concluded that although there is a risk that dogmatically held values may bias scientific research, there is also a legitimate role for values in scientific judgment. The distinction between the two is whether scientists are transparent about their value assumptions and methodological choices and whether the social and ethical values at play are widely shared and have been subjected to deliberation (Elliott 2017, 14-15). Conservation science overtly rests on normative postulates, and its journals have long made space for normative debates about the fit between conservation practices and the normative goal of preserving biological diversity (Soulé 1985). In this paper I identify a gap between, on the one hand, philosophers’ endorsement of transparent deliberation about values and, on the other hand, the practical methods, shared norms, and concrete guidance required to move from discussing values in a theoretical way to incorporating them into practice. Although conservation scientists discuss values openly, norms have not developed for systematic moral deliberation and, after deliberation, action. I argue that, while it is necessary, at times, to talk about scientific research and values separately (in philosophical terms, to distinguish facts and values), it is also essential, following discussion, to systematically incorporate values into conservation research. I investigate two specific techniques for incorporating values into science with regard to debates in conservation science about managed relocation. First, I identify how knowledge gaps block conservation practice. Knowledge gaps are created and maintained by a bias in favor of theory over practice and a conservative bias toward existing frameworks for conservation practice rather than innovation. Because theoretical research carries more prestige than applied research, the majority of new research in conservation science describes species and ecological relationships—states, causes, and mechanisms—rather than designing, implementing, and evaluating conservation interventions (Williams, Balmford & Wilcove 2020). The consequent lack of practical knowledge entrenches value-based arguments against novel interventions on the grounds that their consequences are unknown. A second technique to encourage discussions of values involves reforming risk assessment frameworks. Typically, risks posed by interventions are measured against the status quo or a past benchmark rather than being compared to likely future states given rates of biodiversity loss and ecological degradation. By ignoring the urgency of conservation needs, this practice systematically biases risk assessment frameworks toward inaction. In sum, I use examples from managed relocation to demonstrate how an ethos of restraint follows from uncertainty about how to integrate values into conservation research. In contrast, an ethos of responsible action follows from a more thoroughgoing analysis of the role values can play in action-oriented conservation research. Don’t Believe the Hype? 
Non-epistemic Values and the Debates Regarding Yellowstone, Wolves, and Trophic Cascades 09:00AM - 11:45AM
Presented by :
Jay Odenbaugh, Lewis & Clark College One of the most important proposed examples of a trophic cascade concerns the reintroduction of grey wolves into the Greater Yellowstone Ecosystem (Ripple et al. 2001, 2014, 2015). As the story goes, the reintroduced grey wolves have reduced elk populations, and this has encouraged a variety of plant and animal species to increase. However, some have argued that this example's success in the academic and public imagination has not been the result of the empirical evidence in its favor (Peterson et al. 2014; Marris 2018). Rather, they contend, it is because the example promotes various environmental values at play, and these biases have in fact led to methodologically problematic science. Philosophers of science have been exploring values in science, and I do so here with respect to this debate. Following the work of Helen Longino (1990, 2002) and Solomon (2001) in particular, I argue that the presence of values is not especially problematic provided diverse evaluative commitments are manifest in the research process. I explore this case study to see to what extent transformative criticism has taken place and where improvements can be made to the epistemic structure of these debates. Longino, H. E. (1990). Science as Social Knowledge. Princeton University Press. Longino, H. E. (2002). The Fate of Knowledge. Princeton University Press. Marris, E. (2018). A good story: Media bias in trophic cascade research in Yellowstone National Park. In Effective Conservation Science, pp. 80-84. Oxford University Press. Peterson, R. O., Vucetich, J. A., Bump, J. M., and Smith, D. W. (2014). Trophic cascades in a multicausal world: Isle Royale and Yellowstone. Annual Review of Ecology, Evolution, and Systematics 45, 325–45. Ripple, W. J., Beschta, R. L., Fortin, J. K., and Robbins, C. T. (2014). Trophic cascades from wolves to grizzly bears in Yellowstone. Journal of Animal Ecology 83, 223–33. Ripple, W. J., Beschta, R. L., Fortin, J. K., and Robbins, C. T. (2015). Wolves trigger a trophic cascade to berries as alternative food for grizzly bears. Journal of Animal Ecology 84, 652–4. Ripple, W. J., Larson, E. J., Renkin, R. A., and Smith, D. W. (2001). Trophic cascades among wolves, elk and aspen on Yellowstone National Park's northern range. Biological Conservation 102, 227–34. Solomon, M. (2001). Social Empiricism. MIT Press. | ||
09:00AM - 11:45AM Benedum | On the use of racial categories in medicine across geographic and national contexts Speakers
Zinhle Mncube, Department Of History And Philosophy Of Science, University Of Cambridge
Azita Chellappoo, The Open University
Suman Seth, Marie Underhill Noll Professor Of The History Of Science, Cornell University
Daphne Martschenko, Research Fellow, Stanford Centre For Biomedical Ethics, Stanford University
Phila Msimang, PhD Student, Macquarie University & Stellenbosch University
Moderators
Devora Shapiro, Cleveland Clinic And Southern Oregon University The use of racial categories in medical research and practice remains a topic of contestation and heated debate. However, much of the philosophical debate has been geographically limited, focusing primarily on the United States (US) context and the use of race in the US sense. Given that racial categories and racial schemas vary significantly across the world, and race as a variable is deployed in many medical contexts outside the US, philosophical attention to the use of race in medicine globally is crucial and timely. In this session, we aim to bring together scholars to address the role of race in medicine across varying geographic and national contexts. Through philosophical and historical analysis, we aim to address the epistemic, ethical, and practical challenges that arise when we deploy race in medicine globally, through focus on the key flashpoints of: (1) causation and measurement; and (2) categorization and classification. By bringing together scholars with expertise on these questions and how they arise in a range of settings, this session provides a global perspective on how racial categories are constructed, deployed, and shape medical practices from the lab to the clinic. Race-Medicine and Causation in the Eighteenth-Century British Empire 09:00AM - 11:45AM
Presented by :
Suman Seth, Marie Underhill Noll Professor Of The History Of Science, Cornell University The end of Richard Towne's Treatise of the Diseases most Frequent in the West Indies (1726) takes up the description of "diseases to which the blacks are no strangers, but as far as I am informed they are utterly unknown in Europe." Given that these afflictions seemed to be limited to a single race, one might imagine that Towne had in mind some innate cause for this difference in susceptibilities. To the contrary, however, an essentialist biology mattered little for his explanation: "Those Blacks are the more subject to it," he wrote of "the Elefantiasis under the circumstances it occurs in the West Indies," "who after severe acute fevers, long continued intermittents, or other tedious illnesses, are either much exposed to the inclemency of Rainy seasons, and the cold penetrating Dew of the evenings, or are constrained to subsist upon bad diet and undigestible unwholesome food." In other words, the causes of the diseases were previous maladies, poor accommodations, and a wretched diet. As if to make clear how little race mattered to his medicine, Towne observed that Europeans who lived like the enslaved population of the islands were just as liable to the Elefantiasis as those of African descent: "Sometimes white people, whose unhappy circumstances have reduced them to hardships but little inferior to what the Blacks are obliged to undergo, have given us proof that this disease is not limited to one colour." Compare this multiplicity of plausible causes in a non-racialised account of disease to the paucity of causal explanations in one of the few eighteenth-century medical writings to center racial difference. In 1756, the South Carolinian physician John Lining put forward a description of what he termed "the American yellow fever." This "dreadful malady" attacked almost everyone, yet it spared one group: "There is something very singular in the constitution of the Negroes," wrote Lining, "which renders them not liable to this fever…I never knew one instance of this fever amongst them." 'Constitutional singularity' would be the only cause Lining would marshal to explain this peculiar racial discrepancy in susceptibility to the disease. The vagueness and lack of detail in causal accounts of disease for racialists, compared with the multiple and explicit causes of non-racialists, is my focus here. Among the reasons that race was invoked so uncommonly as a cause in eighteenth-century medicine, I argue, is that race was, in general, an explanandum in the period, not an explanans. That is, naturalistic arguments in the early modern period were far more concerned with the cause of race than with race as a cause. This was even more true in medicine, where Neo-Hippocratic explanations, which invoked the power of "Airs, Waters, and Places" as well as habituation to the climate, could explain why some populations were struck and others spared without invoking essential differences. In the eighteenth century, race was a solution in search of a problem, and the problems it found were few. The Others: Precision Medicine and Multiracial Individuals 09:00AM - 11:45AM
Presented by :
Daphne Martschenko, Research Fellow, Stanford Centre For Biomedical Ethics, Stanford University Precision medicine offers a precious opportunity to change clinical practice and disrupt medicine's reliance on crude racial, ethnic, or ancestral categories by focusing on an individual's unique genetic, environmental, and lifestyle characteristics. However, precision medicine and the genomic studies that are its cornerstone have thus far failed to account for human diversity. This failure is made clearer when looking at multiracial individuals, who encapsulate a mosaic of different genetic ancestries. This presentation argues that precision medicine is failing multiracial individuals and relies on the same forms of crude categorization it seeks to unsettle. I provide examples of where multiracial individuals are being failed in genomic research, research translation, and public health. Until the scientific community creates inclusive solutions for multiracial individuals in medical genomics, precision medicine will continue to fall short in its aims. I conclude by offering a more just and equitable path forward for precision medicine. The Mismatch between Race and Biology in South Africa and its Implication for Health 09:00AM - 11:45AM
Presented by :
Phila Msimang, PhD Student, Macquarie University & Stellenbosch University South Africa has some of the most genetically diverse and the most genetically admixed human population groups on the planet. This is due to South Africa's peculiar history, both social and biological. Nevertheless, people in the country are divided into four 'population groups', by which official agencies mean 'demographic groups' designed to correspond to Apartheid racial categories. These categories are "Black African," "Coloured," "Indian/Asian," and "White". Although there may be strong normative reasons for us to continue to use racial classifications in monitoring and reporting on issues of equity and health inequalities, here I argue that the use of racial classifications in health is such a blunt and imprecise instrument as to be dangerously misleading except when assessing the health consequences of racism. The racial or demographic population groups into which people in South Africa are sorted are so internally diverse as to constitute many distinct biological population groups whose variation is relevant to therapeutic interventions in different ways. These biological differences between race groups, and most especially within race groups, are important for the (lack of) predictive ability of race in medicine despite the broad correlation of health outcomes with race in South Africa. Each race group also comprises different ethnic and cultural groups whose differences in environmental exposures have independent health consequences. This makes race of quite limited use for properly incorporating into analysis other social determinants of health that are not necessarily tied to racism. I argue that both the internal biological variation of race groups in South Africa and the differences in their environmental exposures make the use of racial classifications inappropriate in clinical and biomedical settings. The only exception to this rule is the use of racial categories in tracking progress on equity and the elimination of racial inequalities. How Race Does and Does Not Travel in Medicine 09:00AM - 11:45AM
Presented by :
Azita Chellappoo, The Open University
Zinhle Mncube, Department Of History And Philosophy Of Science, University Of Cambridge Much of the literature on the nature and limits of using race as a scientific variable in medicine focuses primarily on United States (US) racial categories. This focus on the US is seemingly justified by 'contextualism': the assumption that we must limit discussion of race and its deployment to a specific national context because "there is no transnationally valid ontology of race" (Ludwig, 2019); race does not travel across geographic or national contexts. Thus, American scholars are justified in restricting their arguments to a US ontology of race, and scholars in Brazil, the United Kingdom, India, and South Africa ought to similarly restrict their arguments (ibid.). We draw on two case studies of race-based correction in health measurement to illuminate the global continuities and discontinuities in the ways that race enters into medicine. We argue that although the explanations for racial difference and their underlying racial ontologies differ across national contexts, a tension nevertheless exists because the correction factor itself is made to travel across these contexts. This has the potential to pose unique ethical and political challenges. The first case study we draw upon is the history of how spirometric measurement became racialised in South Africa (SA). The spirometer is a test that doctors use to measure lung capacity for the diagnosis and treatment of respiratory disease. The spirometer is controversial because a 'race correction' factor is directly programmed into many commercially available spirometers. In the US, spirometers either 'correct' the lung capacity of individual patients labelled 'Black' by 10-15%, for example, or use population-specific norms (Braun, 2015). First, during Apartheid, South African researchers used American data in an effort to bolster their database of purported innate racial differences in lung function between white and Black South Africans. Second, historically, South African clinicians adopted US standards of race correction in their spirometers. Despite alternative explanations of racial difference in lung function, and a more biosocial conception of race in SA, clinicians and researchers relied on American standards of correction. The second case study we draw upon is that of Body Mass Index (BMI) thresholds. The 'Y-Y paradox' was proposed by two endocrinologists who juxtaposed their own (identical) BMIs with their differing levels of body fat (9.1% for the British researcher compared with 21.2% for the Indian researcher). This 'paradox' has been expanded on, forming the broader notion of the 'thin-fat Indian': a body which is thin morphologically but metabolically "obese". This has led to changes in public health policy (notably, a lowering of the BMI threshold for clinical surveillance or intervention) both for the Indian population and for the South Asian diaspora in places such as the United Kingdom (UK). Although the explanations for differing rates of metabolic illness frequently differ between India and the UK, the racial or ethnic groups that are taken to be the target of intervention differ, and, plausibly, the underlying racial ontologies differ, the lowered BMI threshold continues to be in place in these disparate settings. | ||
09:00AM - 11:45AM Birmingham | Multiscale Modeling Across the Sciences: Tailoring Techniques to Particular Contexts Speakers
Robert Batterman, Speaker, University Of Pittsburgh
Julia Bursten, University Of Kentucky
Zoe Simon, University Of Pittsburgh
Jennifer Jhun, Reviewer, Duke
Collin Rice, Assistant Professor Of Philosophy, Colorado State University
Jill Millstone, Participant, University Of Pittsburgh
Moderators
Ryan McCoy, Poster Presenter, Session Chair, University Of Kentucky The aim of this symposium is to generate a more unified, yet pluralistic, framework for thinking about how similarities and differences in scientists' modeling goals across various modeling contexts influence which multiscale modeling techniques are justified in those contexts. To accomplish this, the symposium will bring together scholars at various stages of their careers to compare multiscale modeling approaches in physics, nanoscience, economics, and biology. What we find is that some of the modeling goals and practical constraints that influence multiscale modelers in these fields are common features of many modeling contexts; that is, there are some features that are stable across these cases. However, there are also several unique methodologies that are tailored to the specific pragmatic constraints and modeling goals of specific fields (or types of phenomena). This interdisciplinary analysis of multiscale modeling contexts will improve our understanding of where and why different multiscale modeling approaches are justified. Multiscale Modeling in Physics and Materials Science: Methodological Lessons from Representative Volume Elements 09:00AM - 11:45AM
Presented by :
Robert Batterman, Speaker, University Of Pittsburgh This talk will look at a ubiquitous methodology in condensed matter physics and materials science that aims to understand bulk behaviors of many-body systems. The focus is on finding and characterizing structures that exist at scales in between the so-called fundamental or atomic scale and that of the continuum. I will argue that such multiscale techniques provide justification and explanation for the continued use of effective theories in various theoretical contexts. My focus is on the role played by so-called "representative volume elements," or RVEs, in homogenization theory. At everyday, continuum scales, a material like a steel beam looks reasonably homogeneous. However, if we zoom in, we will begin to see structures that are hidden at everyday, naked-eye length scales. In order to model the main, important features of the piece of steel at these shorter length scales, scientists employ RVEs. RVEs are statistically representative of features of a material at some particular spatial scale. Importantly, RVEs (1) are scale-relative, that is, the actual characteristic lengths of the structures in an RVE can vary considerably, and (2) are always considered to be continua. These features of RVEs lead both to a unified methodological approach for modeling materials as varied as steel, wood, water, and gases, and to methodological constraints that guide modeling strategies. I illustrate these features of RVEs by looking at examples where one can determine effective values for material parameters describing bulk behaviors. These include parameters like Young's modulus for elastic materials and transport coefficients such as thermal and electrical conductivity. I will emphasize how little the values for these effective parameters depend on lowest-scale/fundamental features of the systems; or, in other words, how effective parameters succeed in being autonomous from the fundamental features of the systems. Upper-scale phenomena of the sort I will consider in this talk often display a remarkable insensitivity to changes in lower-scale details. This is, of course, a hallmark of effective theories. Using these lessons from RVE modeling techniques, I will further discuss how reductive strategies, and those that emphasize the role of fundamentality in justifying the use of a multiscale modeling technique, ignore the autonomy of effective theories and why ignoring that autonomy inhibits multiscale modeling. When Scale Separation Fails: Multiscale Modeling of Nanoscale Nucleation 09:00AM - 11:45AM
Presented by :
Julia Bursten, University Of Kentucky
Zoe Simon, University Of Pittsburgh
Jill Millstone, Participant, University Of Pittsburgh This joint work between a philosopher and two chemists illustrates the practical scientific need for improved approaches to multiscale modeling. Nucleation models are essential to predicting composition, structure, and growth rate in nanoscale materials synthesis, and many models of nucleation exist. Classical nucleation models, for example, aim to predict the rate of nucleation based on the relationship between the thermodynamic properties of surface free energy and volume free energy. This type of model employs concepts that describe system-wide phenomena (e.g., surface area and volume), treating nucleation as a bulk, continuous, and system-wide process. Other models of nucleation aim to describe the patterns of formation of the seeds or "nuclei" in the newly formed phase, predicting and explaining how a particular nucleus will grow and when it is more likely for a new nucleus to form vs. an existing nucleus to continue to grow. One example is the LaMer model, which predicts the homogeneous nucleation of a colloid based on the concentration of precursor over time. At low precursor concentrations, formation of nuclei is unfavorable and does not occur until the precursor reaches a critical concentration, at which point a burst nucleation event occurs. After this event, monomer concentration is too low for nucleation to continue, and the nuclei enter the growth phase. These models employ concepts that describe the individual nuclei as individual solids, occasionally with internal structure of their own. Such models treat nucleation as an aggregate of individual microscale nucleation events. We argue that in bulk chemistry, physics, and materials science, the success of employing these different types of models jointly is due in part to the high degree of scale separation between the dynamics of the macroscale model and the dynamics of the microscale model. We contrast this rationale for the success of a modeling strategy with rationales that appeal to one or the other model being the more "fundamental" model of the system and with rationales that aim to draw a relationship of emergence between the two types of models. Then, we use Simon and Millstone's research program in the thermodynamics and chemical kinetics of nanoparticle formation to raise a modeling challenge: how should nanochemists adapt nucleation models to the nanoscale? Nucleation plays a central role in the formation of a class of nanomaterial known as colloidal metal nanoparticles (CMNPs). Synthesizing CMNPs faces a variety of practical challenges related to the need to keep the particles from growing above the nanoscale, as well as the need to create a group of particles that are all of the same size, shape, and crystal structure. Solving these problems requires the use of nucleation models, but at the nanoscale, there is no longer the same degree of scale separation between macroscale and microscale nucleation models. We conclude by discussing what strategies are available to nanochemists for rationalizing the use of both types of nucleation models. Bridging Across Spatial and Temporal Scales: Optimization and Homogenization in Conservation Ecology 09:00AM - 11:45AM
Presented by :
Collin Rice, Assistant Professor Of Philosophy, Colorado State University In this paper, I use a number of examples of multiscale modeling in biology to argue that the primary challenge facing these modelers is not how to metaphysically interpret their models, but is instead using various idealizations to bring the available multiscale modeling techniques to bear on the phenomena of interest. This is particularly true when the aim of the biological modelers is to inform policy decisions concerning epidemics, climate change, and conservation, which involve a wide range of spatial and temporal scales. The 'best-case scenario' in these instances of multiscale modeling is when the dominant features of the system can be separated into distinct scales. When this occurs, scientists can effectively model the phenomenon by using modeling techniques designed for those particular scales (and types of processes). One example of this is attempting to use optimization techniques to model tradeoffs between the short-term and long-term adaptations of plants and animals to anthropogenic climate change. In these cases, biologists first model the adaptive strategies of individual plants and animals at shorter temporal scales (e.g. hours and days). They then model what would be adaptive for the overall population at longer temporal scales (e.g. generations). For example, when CO2 levels rise, in the short term, plants reduce their water use by reducing their stomatal conductance. However, at longer time scales, the models predict an increased water use due to increased photosynthetic capabilities, larger leaves, and deeper root depths. Finally, the optimal phenotypic trait (and the one we might want to use interventions to bring about) will be the one that best balances these short-term and long-term benefits. While scale separation is often useful for multiscale modelers, in many cases in the biological (and social) sciences, such a clean separation of scales is not possible. In these cases, biological modelers have begun borrowing various modeling techniques first deployed in physics. In particular, modelers in spatial ecology have begun using homogenization techniques to model plants' and animals' migration patterns across heterogeneous landscapes. In these cases, various idealizations are introduced to be able to model the system as a homogeneous medium at the largest scale (e.g. the whole ecosystem) while taking into account the influences of variations at smaller scales (e.g. slower movement through mountains than fields). This results in a macroscale equation that encodes key features from smaller scales into its parameters and constants. This highly idealized modeling technique enables biologists to incorporate features from across a wide range of scales in much more computationally effective models that can more easily be used to inform policy decisions about the outcomes of specific interventions. What these two sets of cases show is that the relationships between scales, the available modeling approaches, and scientists' purposes for their models all influence which multiscale modeling technique is best suited to biologists' goals and which idealizations can be justifiably used in order to deploy those modeling techniques. Multiscale Reasoning in Economics 09:00AM - 11:45AM
Presented by :
Jennifer Jhun, Reviewer, Duke There is a substantial amount of discussion in the philosophical literature on multiscale modeling in physical and material contexts. But there is less such discussion when it comes to the social sciences. This paper earmarks economics as a promising area for multiscale exploration for a number of reasons. First, there's a sense in which things that go on at one scale on their own do not reduce to what goes on at another, though goings-on at one scale may contribute to goings-on at another. Formal results such as the Sonnenschein-Mantel-Debreu theorem and the questionable success of the microfoundations program provide at least two prima facie reasons we should be suspicious of reductionist attempts in economics. The next natural step would be to consider how it is that economists use multiple models together. Second, economists in actual practice, such as those at central banks, often really do use multiple models in order to achieve monetary policy aims. The literature is now so crowded with commentary about the representational capacity of these (often idealized) models that the discussion is often quite divorced from economic practice, and it also makes the possibility of a realist interpretation of economics look rather tenuous. This paper shifts its attention to the role of models not just as representational devices, but also as devices for the construction of performative narratives. Models, when deployed in policy analysis, are part of the effort to construct coherent narratives that help guide action. This is an enterprise that often requires different parties to coordinate their expertise and resources. This role requires that models be both explanatory (the kind of tool that people can use to ask what-if-things-had-been-different or why-questions) and performative (the kind of tool that economists can use to effectively enact changes in the world). Doing so will highlight a number of features of economic models in particular that have escaped much of mainstream philosophical attention; for instance, why it might be that economic models furnish how-actually (as opposed to merely how-possibly) explanations. The multiscale modeling framework, I propose, is one promising way of grounding this conception of model usage. To this end, I examine two case studies. One is explicitly multiscale: integrated computable general equilibrium and microsimulation modeling strategies, with particular attention to income distribution effects. The second is more implicit: I examine the process by which the U.S. Federal Reserve, via the construction of the Greenbook (now Tealbook) conditional forecasts, which are then distributed at the Federal Open Market Committee meetings for discussion, offers policy recommendations. Finally, I suggest that these considerations actually point to a kind of pragmatic realism as the right attitude to have towards many economic models. | ||
09:00AM - 11:45AM Sterlings 3 | Measuring The Human: New Developments Speakers
Alessandra Basso, Speaker , University Of Cambridge
Markus Eronen, University Of Groningen
Cristian Larroulet Philippi, University Of Cambridge
Leah McClimans, Speaker, University Of South Carolina
Eran Tal, McGill University
Sebastian Rodriguez Duque, McGill University
David Sherry, Northern Arizona University
Moderators
Jacob Stegenga, University Of Cambridge Although measurement is widespread across the human sciences, the reliability of measurement in these disciplines is often contested. Philosophers of science have developed conceptual models for how measurement practice progresses in the natural sciences, highlighting in particular the virtuous co-development of theoretical understanding and measurement procedures. The extent to which these accounts of measurement are applicable outside the natural sciences, however, remains unclear. Measurement in the human sciences faces a number of specific challenges, which are related to the peculiarities of the phenomena under study. For instance, since nomological networks do not abound in the human sciences, measurement has fewer theoretical resources to draw from. Moreover, human scientists and philosophers of science debate the very measurability of the complex and multidimensional properties of interest in these disciplines. Finally, many of the properties measured in the human sciences are value-laden and context-dependent, and this raises questions about the possibility of standardized measurements that are valid across different contexts and distinct ethical grounds. In an attempt to enrich the philosophical accounts of measurement practice in the human sciences, this symposium addresses these challenges and evaluates scientists' strategies to deal with them. This symposium's participants include early-career, mid-career, and more senior scholars, among them major contributors to the philosophy of measurement. Fitness for purpose in psychometrics 09:00AM - 11:45AM
Presented by :
Sebastian Rodriguez Duque, McGill University Much recent philosophical attention has been given to the concept of validity in psychometrics (Alexandrova 2017; Angner 2013; McClimans 2010). By contrast, the question of whether and when a psychometric instrument is fit for its intended purpose has been largely neglected. Here we argue that fitness for purpose is a distinct feature of a psychometric measure that does not automatically follow from its validity, and is established by distinct sources of evidence. We focus on applications of psychometrics in healthcare, and specifically on the use of patient-reported outcome measures (PROMs) in mental healthcare. PROMs such as the Patient Health Questionnaire (PHQ-9) and the Kessler Psychological Distress Scale (K-10) are routinely used by mental health service providers for various purposes, including screening patients, assisting with diagnosis, recommending treatment plans, tracking patient progress, and assessing overall quality of care. Health outcomes researchers acknowledge that a PROM designed and validated for one purpose and population, such as screening in adults, may not be fit to serve another, such as tracking patient progress in youth. This context-sensitivity is partially due to differences in patient characteristics, and to the fact that different clinical decisions can require different kinds of evidence. Health outcomes researchers typically deal with this context-sensitivity by ‘re-validating’ PROMs against ‘gold standards’ of evidence, e.g., by adjusting the severity thresholds of a screening tool against the outcomes of clinical interviews in new settings. This paper argues that ‘re-validation’ techniques are inadequate for establishing fitness-for-purpose across contexts, because they are based on an overly narrow concept of fitness-for-purpose. Fitness-for-purpose in psychometrics is not only an epistemic criterion, but also an ethical criterion, namely, the condition of fit between the meanings and uses of a measure and the values and aims of stakeholders. Consequently, evaluating fitness-for-purpose requires a thorough examination of the ethics of measurement. We substantiate our claims with the results of a recent project in which we collaborated with psychometricians, clinicians, and young people. As part of this collaboration, philosophers of science helped develop a training program in measurement for clinicians working at Foundry, a network of integrated mental health clinics for people aged 12-24 in British Columbia. Our research revealed a gap between psychometric evaluation techniques, which focus on statistical properties, and the need of clinicians and patients to identify measures that promote ethical and social values, such as inclusiveness, empowerment and collaboration. Our analysis highlights the need for a normative theory of measurement as a foundation for measure evaluation in psychometrics. Although some validation theorists have paid close attention to the ethics of measurement, they overemphasized the importance of avoiding negative social consequences. Building on McClimans (2010), we show that fitness-for-purpose is a stronger requirement than Messick’s ‘consequential validity’, and involves using measurement as a tool for genuine dialogue between clinician and patient. Measurement, Hermeneutics and Standardization 09:00AM - 11:45AM
Presented by :
Leah McClimans, Speaker, University Of South Carolina In contemporary philosophy of measurement, prominent philosophers (van Fraassen 2008; Chang 2004; Tal 2011) have explicitly or implicitly recognized the role the hermeneutic circle plays in measurement. Specifically, they have recognized its role in what is sometimes referred to as the “coordination problem”. Yet in these accounts the hermeneutic aspect of measurement is often minimized, giving way to standardization, modeling and other concerns. In this essay I discuss the tension between the hermeneutics and standardization of measurement and offer an alternative account of measurement. In my account, the hermeneutic circle is the constant companion of measurement, with standardization making time-limited appearances. The coordination problem asks how we imbue our measuring instruments with empirical significance. In other words, how do we coordinate our measuring instruments with the phenomena we want them to assess? In the empirical literature on measurement, the coordination problem is sometimes discussed in terms of validity, i.e., ensuring that a measuring instrument measures what it is intended to measure. The problem associated with coordination (or validity) is that it confronts a circle: If I want to know if my measuring instrument does a good job of capturing the phenomena of interest--say temperature or humidity or quality of life--then it seems that I already need to know a great deal about temperature, humidity or quality of life. I need to know, for instance, how temperature fluctuates across locations or people at a single point in time, or how quality of life changes with disease trajectory. Yet this information is precisely what the measuring instrument is designed to provide. So, how can we ever coordinate our instruments? To answer this question, I examine Hasok Chang’s discussion of coherentism in measurement. As I will illustrate, his proposal has much in common with philosophical hermeneutics (Gadamer, 2004); nonetheless, it emphasizes the stabilization of the hermeneutic circle over time. We might think of this stabilization as a point in time when we know enough about the phenomena of interest such that all the questions we want to ask (for a particular purpose) are answered by the measuring instrument. Once we reach stabilization, if the measuring instrument gives us an answer we don’t expect, we tend to call it error or bias. Achieving stability usually means that the phenomena of interest can be standardized, and at least for some metrologists, measurement has been achieved. Yet when we look closer, standards get revised, some phenomena are never standardized, some measures are never stabilized, and questions of coordination continue to haunt measurement well beyond their sell-by date. What is going on? I suggest that the quintessence of measurement is not standardization, but rather hermeneutic dialogue. Sometimes this dialogue becomes stagnant, and stability and standardization ensue. But this is the exception and not the rule. Indeed, scientific progress relies on it. Is Measurement in the Social Sciences Doomed? A Response to Joel Michell 09:00AM - 11:45AM
Presented by :
Cristian Larroulet Philippi, University Of Cambridge Whether widely used measures in the human sciences—e.g., measures of intelligence, happiness, empowerment, depression, etc.—count as quantitative remains a battlefront. Practitioners commonly analyze their data assuming that their measures are quantitative, but many methodologists reject this presupposition. Other authors acknowledge that current measures might not be strictly quantitative, but taking inspiration from recent philosophy of measurement, they express optimism about future human science measurements. Is the optimism of the latter camp warranted? Joel Michell’s more recent work (2012) provides reasons to the contrary. He argues not only that current measures aren’t quantitative, but that the attributes at stake (intelligence, etc.) are themselves not quantitative. Hence, these attributes cannot (thus, will not) afford quantitative measurement. Michell’s influential argument draws from a long tradition (including von Kries and Keynes). But I focus on Michell’s argument, because its scope is wider. My goal is to demonstrate that his argument fails to show that common human science attributes are not-quantitative. Michell argues that the key feature indicating that attributes are not quantitative is their lack of “pure homogeneity.” When we consider the different degrees of some quantity—e.g., 3 cm, 4 cm, and 6 cm—we realize that they are all degrees of the very same kind; they differ only quantitatively, but not qualitatively. The same is true for the differences between these degrees—e.g., the interval between 6 cm and 4 cm and the interval between 4 cm and 3 cm don’t differ in kind; their only difference is that the former is twice the latter. In contrast, in not-quantitative (but ordinal) attributes, says Michell, we don’t observe proper homogeneity: although we can order different degrees, we cannot order the differences between these degrees. Crucially, Michell’s point (in this more recent work) is not epistemic. His claim is that these differences do not stand in ordering relations because the attribute is not purely homogeneous (i.e., the differences between degrees are qualitatively different). Michell believes common attributes in the human sciences are heterogeneous in this sense. He illustrates the argument with the attribute ‘functional independence’. He considers a typical scale for measuring functional independence, and concludes that functional independence is merely ordinal since the differences between degrees indicated in the scale are qualitatively different. However, Michell’s argument misses the mark. We should distinguish between the actual target of our measurements—the theoretical attribute to be measured, the ‘measurand’—and the (empirically accessible) measuring attribute we use to infer values of the measurand. This distinction and the working assumption that (some) measurands are quantitative lie behind psychometricians’ understanding of measurement that Michell targets. Those assumptions are also part of influential contemporary accounts of measurement such as Eran Tal’s and David Sherry’s. Yet Michell’s argument overlooks this distinction, conflating the measurand with the measuring attribute—Michell’s argument only demonstrates heterogeneity in the scale for measuring functional independence, leaving open whether functional independence itself is heterogeneous. I show that the former doesn’t entail the latter and suggest that this generalizes to other attributes. 
Theory and measurement in psychology 09:00AM - 11:45AM
Presented by :
Markus Eronen, University Of Groningen In recent years, more and more authors have called attention to the fact that the theoretical foundations of psychology are shaky. This has led to a lively debate on the “theory crisis” in psychology, which is argued to be more fundamental than the replication crisis that has received much more attention. In this talk, I first consider why there are so few good theories in psychology, and why psychology differs in this respect from other fields, and then argue that the lack of good psychological theories also creates fundamental challenges to psychological measurement. First, there has been insufficient attention to the conceptual clarity of psychological constructs. The same construct is often operationalized in wildly divergent ways in different fields, or different constructs are created for the same underlying phenomenon. For example, there are over 30 different constructs related to “perceived control”. The result is that psychology is permeated with numerous constructs and concepts of insufficient clarity, which is a problem for theory construction, as concepts are the building blocks of theories. Moreover, this lack of conceptual clarity is also closely linked to problems of psychological measurement: It is hard to provide valid measurements of constructs that are not well defined, as the discussion on the measurement of happiness and well-being illustrates. Strikingly, most studies in psychology report little or no validity evidence whatsoever for the constructs used. Second, psychological states are difficult to directly intervene on, and effects of interventions are hard to reliably track, which poses great challenges for establishing psychological causes or mechanisms. More specifically, interventions on psychological variables such as affective states or symptoms are not “surgical” but “fat-handed” in the sense that they change several variables at once. This makes it extremely difficult to infer causal relationships between psychological variables, and insofar as theories should track causal relationships, this hinders the development of good psychological theories. In addition, it is widely thought that valid measurement requires establishing a causal relationship between the attribute that is measured (e.g., temperature) and the measurement outcome (e.g., thermometer readings). Insofar as this is the case, the problem of psychological interventions is also directly a problem for psychological measurement. In light of these issues, it is understandable that psychological theories tend to come and go, without much cumulative progress, and that the very possibility of psychological measurement continues to be debated. However, I will end the talk on a positive note, considering some ways of making progress in psychology: focusing more on conceptual clarification instead of just statistics and experiments, and embracing a holistic and pragmatic approach, where measurement, theorizing, and conceptual clarification are seen as necessary parts of an ongoing iterative cycle. Concepts of inequality and their measurement 09:00AM - 11:45AM
Presented by :
Alessandra Basso, Speaker , University Of Cambridge Inequality measurements are widely used by scientists and policy makers. Social scientists use them to analyze the global distribution of income and trends over time. In policymaking, inequality measurements help inform redistributive policies at the national level and set the agenda for international development and foreign aid. Inequality measurements are expected to objectively arbitrate in the design, selection and implementation of policy in these areas. The measurement of inequality, however, is far from straightforward, and scientists disagree about the best way to conceptualize inequality and the most appropriate method for measuring it. As a result, the policies based on these measurements are also called into question. One of the main questions is what exactly should be measured. While measurements typically focus on income or wealth inequality, there is increasing awareness that inequality is multidimensional and that other aspects of people’s well-being (like health, education, and political freedoms) should be measured too. Moreover, scientists have stressed the importance of measuring the inequality of opportunities rather than looking merely at the inequality of outcomes, and highlighted the relevance of investigating people’s subjective perception of economic disparities for designing successful inequality-reducing policies. The problem is that no measurement can take into account all aspects of inequality at the same time, and scientists disagree about which aspects should be taken into account and why. Measurement practice requires scientists to find context-dependent compromises between conceptual and procedural desiderata. As a consequence, scientific practice relies on a variety of narrow, contextual concepts, but this raises questions for using the outcomes of these measurements outside the narrow scope for which they were initially designed. This paper investigates how these narrow concepts of inequality are related to each other, and to the broader and multidimensional notions implementers are interested in. By looking in particular at the relations between subjective and objective measurements of inequality, I highlight the challenges that arise when investigating the relations between inequality parameters that are measured using different methodologies. However, I also defend the idea that conceptual analogies can be used to establish higher-order relations between distinct dimensions of inequality. While inequality is measured differently across contexts, there is a sense in which these measurements are all related to a common underlying concept of inequality, which can provide the basis for comparison and aggregation. This highlights the need for deeper theoretical understanding of how multiple dimensions are related to each other. | ||
10:00AM - 10:15AM Virtual Room | Coffee Break | ||
11:45AM - 01:15PM Virtual Room | Lunch Break (Interest Groups) Interest Group Lunch - Please note that these lunches are not subsidized by the PSA and do require prior registration to attend. How to change the world with "engaged philosophy of science" – Nov 12. Host: Michiru Nagatsu. Location: Buon Giorno Café. Capacity: 10 | ||
11:45AM - 01:15PM Kings 3, 4 | Diversity Equity & Inclusion Caucus Business Meeting and Lunch | ||
01:15PM - 03:15PM Benedum | Machine Learning Speakers
Faron Ray, Graduate Student, University Of California, San Diego
Heather Champion, Western University
Phillip Kieval, PhD Student, University Of Cambridge
Eamon Duede, Presenter , University Of Chicago
Moderators
Kathleen Creel , Assistant Professor, Northeastern University Two Types of Explainability for Machine Learning Models 01:15PM - 03:15PM
Presented by :
Faron Ray, Graduate Student, University Of California, San Diego This paper argues that there are two different types of causes that we can wish to understand when we talk about wanting machine learning models to be explainable. The first are causes in the features that a model uses to make its predictions. The second are causes in the world that have enabled those features to carry out the model’s predictive function. I argue that this difference should be seen as giving rise to two distinct types of explanation and explainability and show how the proposed distinction proves useful in a number of applications. Machine-led Exploratory Experiment in Astrophysics 01:15PM - 03:15PM
Presented by :
Heather Champion, Western University The volume and variety of data in astrophysics creates a need for efficient heuristics to automate the discovery of novel phenomena. Moreover, data-driven practices suggest a role for machine-led exploration in conceptual development. I argue that philosophical accounts of exploratory experiments should be amended to include the characteristics of cases involving machine learning, such as the use of automation to vary experimental parameters and the prevalence of idealized and abstracted representations of data. I consider a case study that applies machine learning to develop a novel galaxy classification scheme from a dataset of ‘low-level’ but idealized observables. Automated Discoveries, Understanding, and Semantic Opacity 01:15PM - 03:15PM
Presented by :
Phillip Kieval, PhD Student, University Of Cambridge I draw attention to an under-theorized problem for the application of machine learning models in science, which I call semantic opacity. Semantic opacity occurs when the knowledge needed to translate the output of an unsupervised model into scientific concepts depends on theoretical assumptions about the same domain of inquiry into which the model purports to grant insight. Semantic opacity is especially likely to occur in exploratory contexts, wherein experimentation is not strongly guided by theory. I argue that techniques in explainable AI (XAI) that aim to make these models more interpretable are not well suited to address semantic opacity. Deep Learning Opacity in Scientific Discovery 01:15PM - 03:15PM
Presented by :
Eamon Duede, Presenter , University Of Chicago Philosophical concern with epistemological challenges presented by opacity in deep neural networks does not align with the recent boom in optimism for AI in science and recent scientific breakthroughs driven by AI methods. I argue that the disconnect between philosophical pessimism and scientific optimism is driven by a failure to examine how AI is actually used in science. I argue, using cases from the scientific literature, that examination of the role played by deep learning as part of a wider process of discovery shows that epistemic opacity need not diminish AI’s capacity to lead scientists to significant and justifiable breakthroughs. | ||
01:15PM - 03:15PM Fort Pitt | Explanation Speakers
Ellen Lehet, Utah Valley University
William D'Alessandro, Postdoctoral Fellow, Munich Center For Mathematical Philosophy, LMU Munich
Haomiao Yu, University Of Guelph
Samantha Wakil, Scientific Writer And Advisor , Biophilia Partners
Nick Byrd, Stevens Institute Of Technology
Jack Justus, Presenter, Florida State University
Moderators
Michael Strevens, New York University Understanding Abstracting Structural Explanations 01:15PM - 03:15PM
Presented by :
Daniel Wilkenfeld, Speaker, University Of Pittsburgh In the literature on explanation, philosophers have proposed different conceptions of structural explanations. In this paper, I explore how several seemingly disparate accounts of structural explanation can be tied together with a central notion of abstraction borrowed from the philosophy of mathematics. Some explanations involve abstracting from a subject as an individual to seeing that individual as a node in a network of explanatory relations. I will then tie this account of structural explanation by abstraction to Wilkenfeld’s (2019) account of understanding to show how this class of structural explanations by abstraction is understanding-conducive. Mathematical Explanation and Understanding: A Noetic Account 01:15PM - 03:15PM
Presented by :
William D'Alessandro, Postdoctoral Fellow, Munich Center For Mathematical Philosophy, LMU Munich
Ellen Lehet, Utah Valley University We defend a noetic account of intramathematical explanation. On this view, a piece of mathematics is explanatory just in case it produces an appropriate type of understanding. We motivate the view by presenting some appealing features of noeticism. We then discuss and criticize the most prominent extant version of noeticism, due to Matthew Inglis and Juan-Pablo Mejía-Ramos, which identifies explanatory understanding with the possession of detailed cognitive schemas. Finally, we present a novel noetic account. On our view, explanatory understanding arises from meeting specific explanatory objectives, the theory of which we briefly set out. A Virtue Epistemology of Scientific Explanation and Understanding 01:15PM - 03:15PM
Presented by :
Haomiao Yu, University Of Guelph In this paper, I aim to develop a virtue epistemological account of scientific explanation and understanding. In so doing, I build a link between intellectual virtue and scientific explanation through understanding. The central epistemological question I will focus on is how human beings understand the world through scientific explanation. The answer I will give is that our understanding of the world is achieved by the alignment of intellectual virtue and explanation structure. The Allure of Simplicity: Framing Effects and Theoretical Virtues 01:15PM - 03:15PM
Presented by :
Samantha Wakil, Scientific Writer And Advisor , Biophilia Partners
Nick Byrd, Stevens Institute Of Technology
Jack Justus, Presenter, Florida State University Once acquired, status as a theoretical virtue is rarely lost. But recent philosophical criticisms of parsimony argue its scope is rather limited and its justificatory basis quite thin. In fact, psychological studies cast a pall on positive assessments of parsimony. They show simplicity considerations frequently derail correct probabilistic inference. We investigate another way theoretical virtues may corrupt reasoning: the terms labeling theoretical virtues themselves produce a framing effect. Our results show merely labeling explanations “simple” exacerbated participants’ neglect of base-rate information. This finding complements recent empirically-oriented criticisms and broaches underexplored issues about whether “theoretical virtues” involve false advertising. | ||
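The base-rate reasoning probed in the preceding abstract ("The Allure of Simplicity") can be illustrated with a small worked example. The sketch below is for orientation only; the numbers (a 1% base rate, 90% sensitivity, a 9% false-positive rate) are hypothetical and are not taken from the authors' study.

# Hypothetical numbers chosen for illustration; not drawn from the study described above.
prior = 0.01           # base rate: P(hypothesis)
sensitivity = 0.90     # P(evidence | hypothesis)
false_positive = 0.09  # P(evidence | not hypothesis)

# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
p_evidence = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_evidence

print(f"P(hypothesis | evidence) = {posterior:.3f}")  # roughly 0.092
# Neglecting the 1% base rate, intuition tends to land near the 0.90
# sensitivity instead; that gap is the base-rate neglect at issue.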
01:15PM - 03:15PM Smithfield | Philosophy of Social Sciences Speakers
Philippe Van Basshuysen, Leibniz University Hannover
Craig Callender, University Of California, San Diego
Armin Schulz, Professor & Director Of Undergraduate Studies, University Of Kansas
Adrian K. Yee, PhD Student, University Of Toronto
Moderators
Mathias Frisch, Presenter, Leibniz Universität Hannover Austinian model evaluation 01:15PM - 03:15PM
Presented by :
Philippe Van Basshuysen, Leibniz University Hannover Like Austin’s “performatives”, some models are used not merely to represent, but also to change their targets. This paper argues that Austin’s analysis can inform model evaluation: if models are evaluated with respect to whether they are adequate for particular purposes (Parker 2020), and if performativity can sometimes be regarded as a model purpose (a proposition that is defended, using mechanism design as an example), it follows that these models can be evaluated with respect to their “felicity”, i.e. whether their use has achieved this purpose. Finally, I respond to epistemic and ethical concerns that might block this conclusion. Does Temporal Neutrality Imply Exponential Temporal Discounting? 01:15PM - 03:15PM
Presented by :
Craig Callender, University Of California, San Diego How should one discount utility across time? The conventional wisdom in social science is that one should use an exponential discount function. Such a function represents preferences satisfying the axioms that yield a well-defined utility function, together with a condition known as stationarity. Yet stationarity doesn’t really have much intuitive normative pull on its own. Here I try to cast it in a normative glow by deriving stationarity from two explicitly normative premises, both suggested by the philosophical thesis of temporal neutralism. Putting the argument in this form helps us better understand exponential discounting and challenges to it (a brief formal sketch of the discounting setup appears after this session listing). Equilibrium Modeling in Economics: A Design-Based Defense 01:15PM - 03:15PM
Presented by :
Armin Schulz, Professor & Director Of Undergraduate Studies, University Of Kansas Several authors have recently argued that the overly strong focus on equilibrium models in mainstream economic analysis prevents economists from providing accurate representations of the complex and dynamic nature of real economic systems. In response, this paper shows that, since many economic systems are the products of more or less deliberate and centralized human design, there are reasons to think that many economic systems are, in fact, often well represented with equilibrium models. People can and do build and support social institutions so that they create predictable economic systems—i.e., ones that have stable equilibria. Measuring Information Deprivation: A Democratic Proposal 01:15PM - 03:15PM
Presented by :
Adrian K. Yee, PhD Student, University Of Toronto There remains no consensus amongst social scientists as to how to quantify and understand forms of information deprivation such as misinformation. Measures of information deprivation typically employ a deficient conception of truth that should be replaced with measurement methods grounded in certain idealized norms of agreement about what kind of information ecosystem a society’s participants wish to live in. A mature science of information deprivation should include considerable democratic involvement that is sensitive to the value-ladenness of information quality; doing so may enhance the predictive and explanatory power of models of information deprivation. | ||
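As background to the discounting talk above ("Does Temporal Neutrality Imply Exponential Temporal Discounting?"), the exponential discount function and the stationarity condition can be written as follows. This is generic textbook notation offered for orientation, not the author's own formulation.

% Exponential discounting: utility u received at delay t is discounted by a
% constant per-period factor \delta in (0,1).
D(t) = \delta^{t}, \qquad V(u, t) = \delta^{t}\, u .

% Stationarity: preferences between dated outcomes are unchanged by adding a
% common delay \Delta to both dates.
(x, t) \succsim (y, s) \iff (x, t+\Delta) \succsim (y, s+\Delta) \quad \text{for all } \Delta \ge 0 .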
01:15PM - 03:15PM Birmingham | Values and Policy Speakers
Marina DiMarco, University Of Pittsburgh HPS
Jamie Shaw, University Of Toronto
Steve Elliott, AAAS Science And Technology Policy Fellow , NOAA
Richard Sung, KAIST
Moderators
Roberta Millstein, University Of California, Davis Cooperative Epistemic Trustworthiness 01:15PM - 03:15PM
Presented by :
Marina DiMarco, University Of Pittsburgh HPS Extant accounts of trust in science focus on reconciling scientific and public value judgments, but neglect the challenge of learning audience values. I argue that for scientific experts to be epistemically trustworthy, they should adopt a cooperative approach to learning about the values of their audience. A cooperative approach, in which expert and non-expert inquirers iteratively refine value judgments, better achieves important second-order epistemic dimensions of trustworthiness. Whereas some epistemologists take trustworthiness to be a precondition for the objectivity of science, I suggest that strong objectivity in the standpoint theoretic sense is sometimes a prerequisite for trustworthiness itself. Peer Review, Innovation, and Predicting the Future of Science: The Scope of Lotteries in Science Funding Policy 01:15PM - 03:15PM
Presented by :
Jamie Shaw, University Of Toronto Recently, science funding policy scholars and practitioners have advocated for the use of lotteries, or elements of random chance, as supplements to traditional peer review for evaluating grant applications. One of the primary motivations for lotteries is their purported openness to innovative research. The purpose of this paper is to argue that current proponents of funding science by lottery overestimate the viability of peer review and thus unduly restrict the scope of lotteries in science funding practice. I further show how this analysis suggests a different way of introducing lotteries into science funding policy. Institutional Values Influence the Design and Evaluation of Transition Knowledge in Funding Proposals at NOAA 01:15PM - 03:15PM
Presented by :
Steve Elliott, AAAS Science And Technology Policy Fellow , NOAA This paper shows how institutional values influence the design and evaluation of arguments in funding proposals for scientific research. We characterize a general argument made within proposals and several kinds of subarguments that contribute to it. We indicate that funders’ values inform the kinds of proposal documents funders require and their relative weighting of them. We illustrate these points by showing how the U.S. federal agency NOAA uses its public service mission to require and heavily weigh arguments to transition new knowledge to agency service providers. We suggest conceptual questions raised by the use of transition arguments. Against Evidentiary Pluralism in Pharmaceutical Regulation 01:15PM - 03:15PM
Presented by :
Richard Sung, KAIST We examine arguments for and against the use of mechanistic evidence in assessing treatment efficacy and find that advocates of EBM+ and their critics have largely been talking past each other due to differences in focus. We explore aducanumab for the treatment of Alzheimer’s disease as a case which may (eventually) speak to the role of EBM+ in pharmaceutical regulation. The case suggests the debate may be more fruitful if philosophers confine their debates to particular domains of medical science and weigh in prospectively instead of relying on historical cases where outcomes are known and which are susceptible to hindsight bias. | ||
01:15PM - 03:15PM Sterlings 1 | Realism and Kinds Speakers
Philipp Haueis, Presenter, Bielefeld University
Simon Allzén, Stockholm University
Greg Frost-Arnold, Hobart And William Smith Colleges
Tushar Menon, University Of Cambridge
Moderators
P.D. Magnus, University At Albany, State University Of New York Revising scientific concepts with multiple meanings: beyond pluralism and eliminativism 01:15PM - 03:15PM
Presented by :
Philipp Haueis, Presenter, Bielefeld University In the recent debate about scientific concepts, pluralists claim that scientists can legitimately use concepts with multiple meanings, while eliminativists argue that scientists should abandon such concepts in favor of more precisely defined subconcepts. While pluralists and eliminativists already share key assumptions about conceptual development, their normative positions still appear to suggest that the process of revising concepts is a dichotomous choice between keeping the concept and abandoning it altogether. To move beyond pluralism and eliminativism, I discuss three options for revising concepts in light of new findings, and consider when scientists should choose each of them. Modest Scientific Realism and Belief in Astronomical Entities 01:15PM - 03:15PM
Presented by :
Simon Allzén, Stockholm University One of the core charges against explanationist scientific realism is that it is too epistemically optimistic. Taking the charge seriously, alternative forms of scientific realism -- semi-realism and theoretical irrealism -- are designed to be more modest in their epistemic claims. I consider two cases in cosmology and astrophysics that raise novel issues for both views: the cosmic event horizon inverts important tenets of semi-realism; theoretical irrealism appears incompatible with standard evidential reasoning in the context of the dark matter problem. The No-Miracles Argument Does Not Commit the Base-Rate Neglect Fallacy—but it still needs work 01:15PM - 03:15PM
Presented by :
Greg Frost-Arnold, Hobart And William Smith Colleges Certain philosophers claim the No-Miracles Argument (NMA) for realism commits the base-rate neglect fallacy. I argue that it does not. In general, one commits a base-rate fallacy only when one has access to the relevant base rate. And in the case of scientific realism, we lack access to the relevant base rate. The most natural attempt to save the base-rate objection from this reply leads to unwelcome consequences. However, this dialectic leads to a legitimate concern about the NMA. I conclude by sketching a new type of No-Miracles-style argument, which avoids this concern. We don't talk about grue, no 01:15PM - 03:15PM
Presented by :
Tushar Menon, University Of Cambridge Motivated by a desire to make precise the position of structural realism, David Wallace recently articulated a framework for understanding the relationship between metaphysical and scientific theories. In this paper, I demonstrate two particularly significant consequences of this framework for the metaphysics of science, unrelated to considerations from structural realism: that it underwrites resolutions to both Putnam's (permutation) paradox and the new riddle of induction. | ||
01:15PM - 03:15PM Forbes | Game Theory Speakers
Daniel A. Herrmann, University Of California, Irvine
Jacob VanDrunen, University Of California, Irvine
Emily Heydon, UC Irvine
Travis LaCroix, Dalhousie University
Simon Huttegger, Presenter, University Of California Irvine
Moderators
Shimin Zhao, University Of Wisconsin - Madison Naturalizing Natural Salience 01:15PM - 03:15PM
Presented by :
Jacob VanDrunen, University Of California, Irvine
Daniel A. Herrmann, University Of California, Irvine In the paradigm of Lewis-Skyrms signaling games, the emergence of linguistic conventions is a matter of equilibrium selection. What happens when an equilibrium has "natural salience" -- that is, when it stands out as uniquely attractive to the players? We present two models. We find that the dynamics of natural salience can encourage the learning of more successful signaling conventions in some contexts, and discourage it in others. This reveals ways in which the supposed worst-case scenario -- a lack of natural salience -- might be better than some cases in which natural salience is present. Exploring an Evolutionary Paradox: An Analysis of the "Spite Effect" and the "Nearly Neutral Effect" in Synergistic Models of Finite Populations 01:15PM - 03:15PM
Presented by :
Emily Heydon, UC Irvine Forber and Smead (2014) analyze how increasing the fitness benefits associated with prosocial behavior can increase the fitness of spiteful individuals relative to their prosocial counterparts, so that selection favors spite over prosociality. This poses a problem for the evolution of prosocial behavior: as the benefits of prosocial behavior increase, it becomes more likely that spite, not prosocial behavior, will evolve in any given population. In this paper, I develop two game-theoretic models which, taken together, illustrate how synergistic costs and benefits may provide partial solutions to Forber and Smead’s paradox. Information and Meaning in the Evolution of Compositional Signals 01:15PM - 03:15PM
Presented by :
Travis LaCroix, Dalhousie University This paper provides a formal treatment of the argument that syntax alone cannot give rise to compositionality in a signalling game context. This conclusion follows from the standard information-theoretic machinery used in the signalling game literature to describe the informational content of signals. Superconditioning 01:15PM - 03:15PM
Presented by :
Simon Huttegger, Presenter, University Of California Irvine A well-known result by Diaconis and Zabell examines when a shift from a prior to a posterior can be represented by conditionalization. This paper extends their result and connects it to the reflection principle and common priors. A shift from a prior to a set of posteriors can be represented within a conditioning model if and only if the prior and the posteriors satisfy a form of the reflection principle. Common priors can be characterized by principles that require distinct sets of posteriors to cohere. These results have implications for updating, game theory, and time-slice epistemology. | ||
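For orientation, the two standard principles invoked in the final abstract above, Bayesian conditionalization and the reflection principle, can be stated as follows. This is generic textbook notation, not a statement of the paper's extended result.

% Bayesian conditionalization: on learning evidence E with P(E) > 0, the prior
% P is replaced by the posterior P_E.
P_{E}(A) = P(A \mid E) = \frac{P(A \cap E)}{P(E)} .

% Reflection principle: current credence defers to anticipated future credence.
P\bigl(A \mid P_{\text{later}}(A) = x\bigr) = x .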
01:15PM - 03:15PM Sterlings 2 | Philosophy of Biology Speakers
Katie Morrow, Postdoc, Universität Bielefeld
Hannah Allen, University Of Utah
Margaret Farrell, Graduate Student, University Of California, Irvine
Katherine Deaven, University Of Wisconsin-Madison
Moderators
Monika Piotrowska, University At Albany, State University Of New York A Constructivist Account of the Ecosystem Health Concept 01:15PM - 03:15PM
Presented by :
Katie Morrow, Postdoc, Universität Bielefeld I develop a constructivist account of the ecosystem health concept. I argue that plausible naturalist accounts of ecosystem health are unsuccessful—they do not accurately track and explain common judgments about ecosystem health. I show that specific values pertaining to aesthetics, authenticity, and human wellbeing help explain contemporary judgments about ecosystem health. Some implications of this position are that empirical research on ecological health is importantly value-laden; that judgments about ecological health are sensitive to anthropocentric preferences; and that ecosystem health is not a nonanthropocentric management target. Genomics in the Age of Population Health 01:15PM - 03:15PM
Presented by :
Hannah Allen, University Of Utah Racial health disparities are a pervasive problem in the United States. While these disparities are mainly due to structural racism, genomicists have been attempting to find a genetic cause for these largely social problems. Disparities in disease rate and outcomes for asthma and diabetes present opportunities for analyzing the different techniques employed by genomicists and population health experts. In this paper, I carve out a middle path between genomicists and their critics, highlighting drug disparities, a small subset of larger racial disparities, which might be targeted through pharmacogenomics. Causal Selection in Context: Explaining Gene Centrism 01:15PM - 03:15PM
Presented by :
Margaret Farrell, Graduate Student, University Of California, Irvine There are two problems in the history and philosophy of genetics that seem to be related. One is the problem of causal selection in cellular and developmental processes. The other is the general approach of seeking genetic explanations, characterized by Ken Waters (2006) as ‘gene centrism.’ I argue that to understand this connection, we must consider the proximity of explanatory targets to DNA sequence. While the success of the genetic approach for proximate explanatory targets may be explained by the causal properties of DNA, its success for distal targets is better explained by the availability of the genetic framework itself. Reference Class Choice and the Evolution of Senescence 01:15PM - 03:15PM
Presented by :
Katherine Deaven, University Of Wisconsin-Madison Scientists engage in relative significance controversies when they investigate the importance of a cause in producing a phenomenon of interest. In order to engage in these controversies, however, a reference class must be specified. In what follows, I explore how the problem of reference class choice arises in controversies in evolutionary biology. Then, I describe different approaches to justifying the choice of reference class. Finally, I explore how the problem of reference class has hindered research on the evolution of senescence and suggest some ways in which progress may be made. | ||
01:15PM - 03:15PM Sterlings 3 | Neuroscience and Cognitive Science Speakers
Lotem Elber Dorozko, Post Doc, University Of Pittsburgh
Nicholas Shea, Institute Of Philosophy, University Of London
Caleb Dewey, Presenter, University Of Arizona
J.P. Gamboa, University Of Pittsburgh HPS
Moderators
David Colaço, Presenter, LMU Munich Can neuroscientists ask the wrong questions? On why etiological considerations are essential when modeling cognition 01:15PM - 03:15PM
Presented by :
Lotem Elber Dorozko, Post Doc, University Of Pittsburgh Commonly in neuroscientific research today, scientists build models that can perform cognitive capacities and compare their activity with neuronal activity, with the purpose of learning about brain computations. These models are constrained only by the task they must perform. Therefore, it is a worthwhile scientific finding that the workings of these models are similar to neuronal activity. This is a promising method for understanding cognition. However, I argue that it is likely to succeed in explaining how cognitive capacities are performed only when the capacities’ etiology is considered while choosing the modeled capacities. Otherwise, it may lead scientific practice astray. Organized Representations Forming a Computationally Useful Processing Structure 01:15PM - 03:15PM
Presented by :
Nicholas Shea, Institute Of Philosophy, University Of London Godfrey-Smith recently introduced the idea of representational ‘organization’. Representations from an organized family are tokened on different occasions and systematically interrelated (e.g., analogue magnitude representations). Organization has been elided with structural representation, but the two are in fact distinct. An under-appreciated merit of representational organization is the way it facilitates computational processing. When representations from different organized families interact, they form a processing structure. These processing structures can be computationally useful. Many of the cases where organization has seemed significant, but which fall short of structural representation, are cases where representational organization underpins a computationally useful processing structure. Inter-level explanations for behavioural circuits 01:15PM - 03:15PM
Presented by :
Caleb Dewey, Presenter, University Of Arizona Behavioural systems present a relevance problem: there’s too much information about them to include all of it in our explanations, so we must decide what information should be included in and excluded from explanation. One popular solution is to (a) include only information at a single level and (b) exclude information at other levels. This excludes relevant information about interlevel processes. However, neuroscientists have found an implicit way to include relevant interlevel processes in their explanations of behavioural circuits. I argue that we can rationally reconstruct an interlevel theory of relevance for behavioural systems from their explanations of behavioural circuits. Thinking About Circuits 01:15PM - 03:15PM
Presented by :
J.P. Gamboa, University Of Pittsburgh HPS Terminological inconsistency in neuroscience obscures ontological relations between neural circuits and cognition. Meanwhile, the dominant view among philosophers is that human cognition is neurally realized. It remains an open question whether the extensive philosophical literature on (multiple) realization sheds light on the ontological unclarity in neuroscience. Here I identify the kinds of experiments in neuroscience that are relevant to determining whether and how cognition is neurally realized. I then argue on empirical grounds that realization is not a relation between individual circuits and cognitive phenomena. | ||
01:15PM - 03:15PM Duquesne | Mathematics Speakers
David Marshall Miller, Auburn University
Jared Ifland, Graduate Student And Teaching Assistant, Florida State University
Andre Curtis-Trudel, Post-doctoral Fellow, Lingnan University
Thomas Barrett, UC Santa Barbara
Moderators
Tyler Hildebrand, Dalhousie University When Mathematics Became Useful to Science 01:15PM - 03:15PM
Presented by :
David Marshall Miller, Auburn University Mathematics is the “language of nature,” a privileged mode of expression in science. We think it latches onto something essential about the physical universe, and we seek theories that reduce phenomena to mathematical laws. Yet, this attitude could not arise from the philosophies dominant before the early modern period. In orthodox Aristotelianism, mathematical categories are too impoverished to capture the causal structure of the world. In the revived Platonism of its opponents, the natural world is too corrupt to exemplify mathematical perfection. Modern mathematical science required a novel tertium quid, due to Pietro Catena. Realism on Thin Ice: An Argument from Mathematical Practice 01:15PM - 03:15PM
Presented by :
Jared Ifland, Graduate Student And Teaching Assistant, Florida State University In Defending the Axioms: On the Philosophical Foundations of Set Theory, Penelope Maddy introduces two methodologically equivalent but philosophically distinct positions, termed Thin Realism and Arealism, which presumably respect set-theoretic practice. Further, Maddy concludes that for her idealized naturalistic inquirer, there is no substantive difference between the two positions. However, I argue that Thin Realism loses its tenability due to the presence of foundational pluralism in broader mathematical practice. In turn, this presents a naturalistic way to undermine Maddy’s conclusion that these are two equally admissible ways of describing the underlying constraints of mathematical practice for the philosophical naturalist. Mathematical Explanation in Computer Science 01:15PM - 03:15PM
Presented by :
Andre Curtis-Trudel, Post-doctoral Fellow, Lingnan University This note scouts a broad class of explanations of central importance to contemporary computer science. These explanations, which I call 'limitative' explanations, explain why certain problems cannot be solved computationally. Limitative explanations are philosophically rich, but have not received the attention they deserve. The primary goals of this note are to isolate limitative explanations and provide a preliminary account of what makes them explanatory. On the account I favour, limitative explanations are best understood as non-causal mathematical explanations which depend on highly idealized models of computation. On Automorphism Criteria for Comparing Amounts of Mathematical Structure 01:15PM - 03:15PM
Presented by :
Thomas Barrett, UC Santa Barbara Wilhelm (2021) has recently defended a criterion for comparing structure of mathematical objects, which he calls Subgroup. He argues that Subgroup is better than SYM∗, another widely adopted criterion. We argue that this is mistaken; Subgroup is strictly worse than SYM∗. We then formulate a new criterion that improves on both SYM∗ and Subgroup, answering Wilhelm’s criticisms of SYM∗ along the way. We conclude by arguing that no criterion that looks only to the automorphisms of mathematical objects to compare their structure can be fully satisfactory. | ||
01:15PM - 03:15PM Board Room | Philosophy of Quantum Mechanics 2 Speakers
David Schroeren, University Of Geneva
Jer Steeger, Postdoctoral Scholar, University Of Washington
Alexander Meehan, Postdoctoral Associate, Yale University
Samuel Fletcher, University Of Minnesota
Ryan Miller, University Of Geneva
Moderators
Francisco Pipa, Department Of Philosophy, University Of Kansas Quantum permutations are not qualitative isomorphisms (and what this tells us about haecceitism) 01:15PM - 03:15PM
Presented by :
David Schroeren, University Of Geneva Permutations play an important role in both metaphysics and philosophy of physics: metaphysicians are interested in how (if at all) possible worlds are affected by permutations of the objects that inhabit those worlds; philosophers of physics are interested in how (if at all) permutations affect physical states of quantum systems. In the literature on the metaphysical implications of permutation invariance in quantum mechanics, it is standard to identify the two. In this paper, I argue that this identification is mistaken and investigate the metaphysical consequences of this conclusion. Learning in a Quantum World: Quantum Conditionalization and Quantum Accuracy 01:15PM - 03:15PM
Presented by :
Alexander Meehan, Postdoctoral Associate, Yale University
Jer Steeger, Postdoctoral Scholar, University Of Washington A core tenet of Bayesian epistemology is that rational agents update by Bayesian conditionalization. Accuracy arguments in favor of this norm are well-known. Meanwhile, in the setting of quantum probability and quantum state estimation, multiple updating rules have been proposed, all of which look prima facie like analogues of Bayesian conditionalization. These include Lüders conditionalization, retrodiction, and Bayesian mean estimation (BME). In this paper, we present expected-accuracy and accuracy-dominance arguments for Lüders and BME, which we show are complementary rules. Retrodiction, on the other hand, is shown to be accuracy-dominated, at least on many measures (the Lüders rule itself is stated briefly after this session listing). The Representation and Determinable Structure of Quantum Properties 01:15PM - 03:15PM
Presented by :
Samuel Fletcher, University Of Minnesota Orthodox quantum theory tells us that properties of quantum systems are represented by self-adjoint operators, and that two properties are incompatible just in case their respective operators do not commute. We present a puzzle for this orthodoxy, pinpointing the exact assumptions at play. Our solution to the puzzle specifically challenges the assumption that non-commuting operators represent incompatible properties. Instead, they represent incompatible levels of specification of determinates for a single determinable. This solution yields insight into the nature of so-called quantum indeterminacy and demonstrates a new and fruitful application of the determinable-determinate relation in quantum theory. Mereological Atomism’s Quantum Problems 01:15PM - 03:15PM
Presented by :
Ryan Miller, University Of Geneva The popular metaphysical view that concrete objects are grounded in their ultimate parts is often motivated by appeals to realist interpretations of contemporary physics. This paper argues that an examination of mainstream interpretations of quantum mechanics undercuts such atomist claims. First, mereological atomism is only plausible in conjunction with Bohmian mechanics. Second, on either an endurantist or perdurantist theory of time, atomism exacerbates Bohmianism’s existing tensions with serious Lorentz invariance in a way that undermines the realist appeal of both views. Bohmians should therefore resist atomism, leaving atomists somewhat physically homeless. | ||
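As background to the second abstract in this session (on learning in a quantum world), the Lüders update rule referred to there can be stated as follows; this is the standard textbook form, not the paper's own formulation.

% Lüders rule: given a density operator \rho and a projective measurement with
% projectors P_i, obtaining outcome i (with probability \mathrm{Tr}(P_i \rho))
% updates the state to
\rho \;\longmapsto\; \rho_i = \frac{P_i\, \rho\, P_i}{\mathrm{Tr}(P_i\, \rho\, P_i)} .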
03:15PM - 03:45PM Virtual Room | Coffee Break | ||
03:45PM - 05:45PM Forbes | Philosophy of physics: space and time Speakers
JB Manchak, UC Irvine
Caspar Jacobs, Junior Research Fellow, Merton College, University Of Oxford
James Weatherall, UC Irvine
Monica Solomon, Bilkent University
Moderators
Robert Rynasiewicz, Chair, Johns Hopkins University On the (In?)Stability of Spacetime Inextendibility 03:45PM - 05:45PM
Presented by :
JB Manchak, UC Irvine Leibnizian metaphysics underpins the near universally held view that spacetime must be inextendible – that it must be “as large as it can be” in a sense. But here we demonstrate a surprising fact within the context of general relativity: the property of inextendibility turns out to be “unstable” when attention is restricted to certain collections of “physically reasonable” spacetimes. Are Dynamic Shifts Dynamical Symmetries? 03:45PM - 05:45PM
Presented by :
Caspar Jacobs, Junior Research Fellow, Merton College, University Of Oxford Shifts are a well-known feature of the literature on spacetime symmetries. Recently, discussions have focused on so-called dynamic shifts, which by analogy with static and kinematic shifts enact arbitrary linear accelerations of all matter (as well as a change in the gravitational potential). But in mathematical formulations of these shifts, the analogy breaks down: while static and kinematic shifts act on the matter field, the dynamic shift acts on spacetime structure instead. I formulate a different, 'active' version of the dynamic shift which does act on matter. Where Does General Relativity Break Down? 03:45PM - 05:45PM
Presented by :
James Weatherall, UC Irvine It is widely accepted by physicists and philosophers of physics alike that there are certain contexts in which general relativity will "break down". In such cases, one expects to need some as-yet undiscovered successor theory. This paper will discuss certain pathologies of general relativity that might be taken to signal that the theory is breaking down, and consider how one might expect a successor theory to do better. The upshot will be an unconventional interpretation of the "Strong Cosmic Censorship Hypothesis". Newton’s Bucket Experiment: Fictional or Real? 03:45PM - 05:45PM
Presented by :
Monica Solomon, Bilkent University I explain that a target of Newton's example is the inadequacy of Descartes’s definition of motion. But I also raise a serious problem for the current reading, which comes from the attribution of “absolute and true circular motion” to the water revolving inside the bucket. The solution resides in an examination of Newton’s meticulous experimental setup as a self-contained, realistic description of how the quantity of true motion of a body of water changes. I argue that the example should be read as a real experiment and that it exemplifies a double methodological aspect. | ||
03:45PM - 05:45PM Sterlings 3 | Models and Modeling Speakers
Melissa Vergara-Fernandez, Erasmus University Rotterdam
Ina Jantgen, Speaker, University Of Cambridge
Kelle Dhein, University Of Kentucky
Gareth Fuller, University Of Kansas
Moderators
Aja Watkins, PhD Candidate, Boston University Eschew the heuristic-epistemic dichotomy to characterise models 03:45PM - 05:45PM
Presented by :
Melissa Vergara-Fernandez, Erasmus University Rotterdam It has been standard in the philosophy of models to distinguish between their having epistemic value and ‘mere’ heuristic value. This dichotomy has divided philosophers of economics: sceptics deny the epistemic value of theoretical economic models; optimists argue that how-possibly explanations offered by models have epistemic value. I argue that the dichotomy is historically contingent and, importantly, was drawn vis-à-vis theories. We no longer distinguish theories and models so neatly. I further suggest that the optimists' urge to defend the epistemic value of models has often led them to mischaracterise economic practice. I illustrate with a case. How to measure effect sizes for rational decision-making 03:45PM - 05:45PM
Presented by :
Ina Jantgen, Speaker, University Of Cambridge Absolute and relative outcome measures measure a treatment’s effect size, purporting to inform treatment choices. I argue that absolute measures are at least as good as, if not better than, relative ones for informing rational decisions across choice scenarios. Specifically, this dominance of absolute measures holds for choices between a treatment and a control group treatment from a trial and for ones between treatments tested in different trials. This distinction has hitherto been neglected, just like the role of absolute and baseline risks in decision-making that my analysis reveals. Recognizing both aspects advances the discussion on reporting outcome measures. Multiscale Modeling in Neuroethology: The Significance of the Mesoscale 03:45PM - 05:45PM
Presented by :
Kelle Dhein, University Of Kentucky Recent accounts of multiscale modeling investigate ontic and epistemic constraints imposed by relations between component models at varying relative scales (macro, meso, micro). These accounts often focus especially on the role of the meso, or intermediate, relative scale in a multiscale model. We aid this effort by highlighting a novel role for mesoscale models: functioning as a focal point, and explanation, for disagreement between researchers who otherwise share theoretical commitments. We present a case study in multiscale modeling of insect behavior to illustrate, arguing that the cognitive map debate in neuroethology research is best understood as a mesoscale disagreement. Robustness and Replication: Models, Experiments, and Confirmation 03:45PM - 05:45PM
Presented by :
Gareth Fuller, University Of Kansas Robustness analysis faces a confirmatory dilemma. Since all of the models in a robust set are idealized, and therefore false, the set provides no confirmation. However, if a model is de-idealized, there is no confirmatory role for robustness analysis. Against this dilemma, I draw an analogy between robustness analysis and experimental replication. Idealizations, though false, can play the role of controlled experimental conditions. Robustness, like replication, can be used to show that some means of control is not having an undue influence. I conclude by considering some concerns about this analogy regarding the ontological difference between models and experiments. | ||
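A minimal numerical sketch, with invented two-arm trial figures, of how the absolute and relative outcome measures discussed in the Jantgen abstract above can come apart; nothing below is drawn from a real study.

```python
# Hypothetical two-arm trial (figures invented for illustration):
# 10 of 1000 control patients and 5 of 1000 treated patients experience the outcome.
control_events, control_n = 10, 1000
treated_events, treated_n = 5, 1000

control_risk = control_events / control_n   # baseline risk: 0.010
treated_risk = treated_events / treated_n   # 0.005

relative_risk = treated_risk / control_risk             # 0.5  -> "halves the risk"
absolute_risk_reduction = control_risk - treated_risk   # 0.005 -> 5 fewer events per 1000 patients
number_needed_to_treat = 1 / absolute_risk_reduction    # 200 patients treated per event avoided

print(f"relative risk           = {relative_risk:.2f}")
print(f"absolute risk reduction = {absolute_risk_reduction:.3f}")
print(f"number needed to treat  = {number_needed_to_treat:.0f}")
```

The same 50% relative reduction would correspond to a much larger absolute reduction if the baseline risk were higher, which is why the baseline risks mentioned in the abstract matter for rational treatment choice.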
03:45PM - 05:45PM Sterlings 1 | Causation and Causal Modeling Speakers
Jennifer McDonald, Columbia University
Trey Boone, Postdoctoral Associate, Duke University
David Kinney, Speaker, Princeton University
Brian Hanley, Merrimack College
Moderators
Nina Van Rooy, Duke University Apt Causal Models and the Relativity of Actual Causation 03:45PM - 05:45PM
Presented by :
Jennifer McDonald, Columbia University Causal models provide a promising framework for analyzing actual causation. Such analyses must include how a model should map onto the world. While it is universally endorsed that a model must be accurate – saying only true things – the implications of this are left unexplored. I argue that, surprisingly, accuracy is not had by a model tout court, but only relative to a space of possibilities. This discovery raises a problem for extant causal model theories and, indeed, for any theory of actual causation in terms of counterfactual or type-level causal dependence. I conclude with a view that resolves this problem. Causal Faithfulness in Biological Systems 03:45PM - 05:45PM
Presented by :
Trey Boone, Postdoctoral Associate, Duke University Philosophical discussions of causal faithfulness have been predominantly situated within the social sciences—the traditional domain of application of the causal modeling techniques it attends. Recently, there has been increasing interest in applying such techniques to uncover causal relationships in biological systems. In this paper, I consider the extent to which faithfulness is a reasonable assumption in biological contexts and the problems that may result from relying on techniques that assume it. This discussion illuminates not only issues that may arise in causal modeling in biology, but also issues more generally relevant to understanding causal complexity in biological systems. Causal History, Statistical Relevance, and Explanatory Power 03:45PM - 05:45PM
Presented by :
David Kinney, Speaker, Princeton University In discussions of the power of causal explanations, one often finds a commitment to two premises. The first is that, all else being equal, a causal explanation is powerful to the extent that it cites the full causal history of why the effect occurred. The second is that, all else being equal, causal explanations are powerful to the extent that the occurrence of a cause allows us to predict the occurrence of its effect. This article proves a representation theorem showing that there is a unique family of functions measuring a causal explanation's power that satisfies these two premises. Fast and Slow Causation: An Interventionist Account of Speed of Change 03:45PM - 05:45PM
Presented by :
Brian Hanley, Merrimack College This paper elucidates an important feature of type-level causal relationships that is critical for understanding why disasters occur in sociotechnical systems. Using an interventionist theory, the paper explicates a concept, causal delay, to characterize differences between how rapidly or slowly interventions can make a difference to their effects. The paper then uses this explication to illuminate aspects of causal reasoning in everyday and scientific cases involving speed of change. In particular, the paper shows how causal delay clarifies why some systems are more prone to disasters than others. The paper closes by analyzing critical tradeoffs in choices between interventions. | ||
03:45PM - 05:45PM Board Room | Laws and Symmetry Speakers
Tyler Hildebrand, Dalhousie University
Sebastian Murgueitio Ramirez, Assistant Professor, Purdue University
Shelly Yiran Shi, UCSD
David Baker, University Of Michigan
Moderators
Bixin Guo, University Of Pittsburgh The Ideology of Pragmatic Humeanism 03:45PM - 05:45PM
Presented by :
Tyler Hildebrand, Dalhousie University According to the Humean Best Systems Account, laws of nature are contingent generalizations in the best systematization of particular matters of fact. Recently, it has become popular to interpret the notion of a best system pragmatically. The best system is sensitive to our interests—that is, to our goals, abilities, and limitations. This account promises a metaphysically minimalistic analysis of laws that fits scientific practice. However, I argue that it is not as minimalistic as it might appear. The concepts of goals, abilities, and limitations that drive the analysis are modally-robust. This leads to a dilemma. Three Puzzles About Symmetries 03:45PM - 05:45PM
Presented by :
Sebastian Murgueitio Ramirez, Assistant Professor, Purdue University I will use the simple case of a harmonic oscillator to introduce and resolve three novel puzzles about physical symmetries. One puzzle is that the fact that boosts are symmetries of Newtonian mechanics is not particularly important for explaining why a spring inside a ship remains invariant under constant boosts of the ship. A second puzzle is that, in many cases, both the connection between symmetries and representation and the one between symmetries and observations seem trivial. And the third puzzle is that there are symmetries discussed by physicists where the connections between symmetries, representation, and observations are broken. Are Symmetries Laws of Laws? 03:45PM - 05:45PM
Presented by :
Shelly Yiran Shi, UCSD It is commonly believed that symmetry principles explain conservation laws. Since conservation laws can be mathematically derived from symmetries and vice versa, the explanatory asymmetry deserves philosophical justification. Marc Lange (2007) claims that symmetries are meta-laws that govern and hence explain conservation laws. In this paper, I argue that we should not grant symmetry a higher modal status. I present counterexamples to demonstrate that symmetries are neither necessary nor sufficient for conservation laws. Some symmetries are explanatorily prior to laws but not in the way that Lange prescribed. They serve as an epistemic guide rather than a necessary requirement. The Epiphenomena Argument for Symmetry-to-Reality Inference 03:45PM - 05:45PM
Presented by :
David Baker, University Of Michigan A new argument is given for the thesis that only symmetry-invariant physical quantities are real. Non-invariant quantities are dynamically epiphenomenal in that they have no effect on the evolution of invariant quantities, and it is a significant theoretical vice to posit epiphenomenal quantities. | ||
03:45PM - 05:45PM Sterlings 2 | Probability and Confirmation Speakers
Michael Strevens, New York University
Gordon Belot, University Of Michigan
Hanti Lin, University Of California, Davis
Sven Neth, Speaker, University Of California, Berkeley
Moderators
Elaine LANDRY, UC Davis Theory and Evidence: Hempel Was Right 03:45PM - 05:45PM
Presented by :
Michael Strevens, New York University In 1945, Carl Hempel proposed a simple theory of confirmation that eventually came to be seen as unacceptably unsophisticated: it failed to incorporate the impact of epistemic context, of the "superempirical virtues" such as simplicity and explanatory elegance, and it was purely qualitative, determining when a piece of evidence supported a hypothesis but not by how much. I propose that Hempel's theory, precisely because it has these properties, comes much closer to capturing the handling of evidential support in the official channels of scientific communication -- in the journals -- than is commonly supposed. I comment on the reasons for this. That Does Not Compute: David Lewis on Chance and Credence 03:45PM - 05:45PM
Presented by :
Gordon Belot, University Of Michigan My goal here is to explain why it is harder than one might expect to find a satisfying package combining the Best System Account of chance and the Principal Principle. One can show that for a certain prima facie attractive version of the Best System Account of chance, the only priors that satisfy the Principal Principle are non-computable. So fans of the Lewisian package must either find a more suitable version of the Best System Account, weaken the Principal Principle, or maintain that rationality requires us to perform tasks beyond the capability of any Turing machine. The Trinity of Statistics 03:45PM - 05:45PM
Presented by :
Hanti Lin, University Of California, Davis There are three major schools of thought in statistics: frequentism, Bayesianism, and likelihoodism. They are often thought to be in fundamental disagreement, but I don't think so. My goal is to develop a simultaneous unification of the three camps, and defend it against the most urgent of the alleged conflicts, including especially Lindley's paradox and the actualism debate. Random Emeralds 03:45PM - 05:45PM
Presented by :
Sven Neth, Speaker, University Of California, Berkeley In a Bayesian framework, Goodman's `New Riddle of Induction' boils down to the choice of priors. I argue that if we assume random sampling, we should assign a low prior probability to all emeralds being grue. This is because random sampling and the observation-independence of green and blue imply that our prior should be *exchangeable* with respect to green and blue. | ||
03:45PM - 05:45PM Birmingham | Evolution Speakers
Marshall Abrams, Speaker, University Of Alabama At Birmingham
James DiFrisco, KU Leuven
Grant Ramsey, KU Leuven
Gianmaria Dani, PhD Student, KU Leuven
Ciprian Jeler, "Alexandru Ioan Cuza" University Of Iasi
Moderators
Jessica Pfeifer, UMBC Random foraging and perceived randomness 03:45PM - 05:45PM
Presented by :
Marshall Abrams, Speaker, University Of Alabama At Birmingham Research in evolutionary ecology on random foraging ignores the possibility that some random foraging is an adaptation not to environmental randomness, but to what Wimsatt called "perceived randomness". This occurs when environmental features are unpredictable, whether physically random or not. Mere perceived randomness may occur, for example, due to effects of climate change or certain kinds of static landscape variation. I argue that an important mathematical model concerning random foraging does not depend on randomness, despite contrary remarks by researchers. I also use computer simulations to illustrate the idea that random foraging is an adaptation to mere perceived randomness. Adaptationism and trait individuation 03:45PM - 05:45PM
Presented by :
James DiFrisco, KU Leuven
Grant Ramsey, KU Leuven Adaptationism is often taken to be the thesis that most traits are adaptations. In order to assess this thesis, it seems we must be able to establish either an exhaustive set of all traits or a representative sample of this set. Either task requires a more systematic and principled way of individuating traits than is currently available. Moreover, different criteria of trait individuation can make adaptationism turn out true or false, and criteria based on selection may presuppose adaptationism. In this paper, we show that adaptationism depends on trait individuation and that the latter is an open and unsolved problem. Explanatory Gaps in Evolutionary Biology 03:45PM - 05:45PM
Presented by :
Gianmaria Dani, PhD Student, KU Leuven Proponents of the extended evolutionary synthesis have argued that there are explanatory gaps in evolutionary biology that cannot be bridged by standard evolutionary theory. In this paper, we consider what sort of explanatory gaps they are referring to. We outline three possibilities: data-based gaps, frame-based gaps, and elusive gaps. We then examine the purported evolutionary gaps and attempt to classify them using this taxonomy. From there we reconsider the significance of the gaps and what they imply for the proposed need for an extended evolutionary synthesis. How should we distinguish between selectable and circumstantial traits? 03:45PM - 05:45PM
Presented by :
Ciprian Jeler, "Alexandru Ioan Cuza" University Of Iasi There is surprisingly little philosophical work on conceptually spelling out the difference between the traits on which natural selection may be said to act (e.g. “having an above average running speed”) and merely circumstantial traits (e.g. “happening to be in the path of a forest fire”). Here, I show that the two existing proposals as to how this distinction should be made are unconvincing because they rule out frequency-dependent selection. I then propose two new potential solutions, which share the idea that extrinsic properties dependent on internal relations should be accepted as traits on which natural selection can act. | ||
03:45PM - 05:45PM Duquesne | Scientific Progress and Policy Speakers
Parysa Mostajir, University Of Chicago
Helen Zhao, Columbia University
Kabir Bakshi, Student, Department Of History And Philosophy Of Science, University Of Pittsburgh
Chiara Lisciandra, Presenter, LMU Munich
Moderators
Clayton Houdeshell, University Of Kentucky Classical American Pragmatism as Anti-Scientism 03:45PM - 05:45PM
Presented by :
Parysa Mostajir, University Of Chicago Scientism has recently experienced a resurgence of interest in philosophy. One version of scientism often defended is ontological scientism—the view that any kind or property not mentioned in the theories of science has only a subordinate, secondary kind of reality. It is worth noting that a dominant tradition in the history of philosophy of science—classical American pragmatism—undertook decades of critical engagement with contemporaneous scientistic beliefs, many of which resemble those being debated at the present time. This anti-scientistic philosophy has multiple points of relevance for contemporary debates and defenses of ontological scientism. The Nature of Values in Science: What They Are and How They Guide 03:45PM - 05:45PM
Presented by :
Helen Zhao, Columbia University Philosophers of science tend to adjudicate debates about the value-free ideal by appealing to case-studies of value-laden science. Interpreting case-studies, however, faces a methodological challenge: measuring the causal impact of values where values interact with myriad causal factors. This challenge can be met, but not easily. Insofar as it is unmet, philosophers would do well to attend to other research questions. I propose we model values in science as goals as opposed to decision vectors. Rather than investigate proper reasons for scientific choices, we might focus on investigating proper goals of scientific inquiry. Scientific Progress and The Myth of the Constitution/Promotion Distinction 03:45PM - 05:45PM
Presented by :
Kabir Bakshi, Student, Department Of History And Philosophy Of Science, University Of Pittsburgh When does science progress? I argue that recently proffered accounts of scientific progress are untenable. In contemporary discussions, a distinction between a scientific episode constituting progress and promoting progress is made: An episode may promote scientific progress even though it does not constitute scientific progress. By paying attention to scientific practice, in particular to scientists’ appraisal of developments in techniques and methodologies, I show that the constitution/promotion distinction is problematic. This is bad news for the extant accounts since virtually all the accounts appeal to the constitution/promotion distinction. Are Article and Journal Metrics a Good Thing? 03:45PM - 05:45PM
Presented by :
Chiara Lisciandra, Presenter, LMU Munich How should universities evaluate scientific research? This paper critically assesses the quantitative approach to the evaluation of scientific outputs based on publication metrics. First, I provide an overview of the standard indicators, such as the Impact Factor and the h-index. Second, I show that one limitation of the metrics system is that it lacks adequate criteria to distinguish research fields that should be kept separate for evaluative purposes. Finally, I claim that this limitation negatively affects the use of such metrics. In particular, it risks hindering the development of normal science, in a Kuhnian sense, in some of these fields. (A short illustration of how the h-index is computed follows this session listing.) | ||
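As a small illustration of one indicator mentioned in the Lisciandra abstract above: the h-index is the largest number h such that an author has h publications with at least h citations each. A minimal sketch, using invented citation counts:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Invented citation records for two hypothetical researchers with equal citation totals (56 each):
print(h_index([50, 3, 2, 1, 0]))      # 2  (one highly cited paper)
print(h_index([12, 12, 11, 11, 10]))  # 5  (evenly cited papers)
```

Two researchers with identical citation totals can receive very different h-indices, one reason the choice of indicator is not evaluatively neutral across fields with different publication and citation practices.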
03:45PM - 05:45PM Fort Pitt | Inference and Concepts in Science Speakers
Alex LeBrun, University Of California, Santa Barbara
KEVIN DAVEY, University Of Chicago
Rose Novick, University Of Washington
Enno Fischer, Ruhr-University Bochum, Germany
Moderators
Jamie Shaw, University Of Toronto On Dispensability and Indispensability 03:45PM - 05:45PM
Presented by :
Alex LeBrun, University Of California, Santa Barbara Many philosophers present dispensability or indispensability arguments that presuppose a specific conception of dispensability. The present paper explores and critiques the reigning conception of dispensability. In particular, I argue that it entails that too many things are dispensable to our best scientific theories. This entailment is at odds with the purpose for which we seek a conception of dispensability. In light of my arguments, I present a positive proposal that radically shifts our understanding of how dispensability and indispensability arguments work. This new proposal demands a metaphysics of science that splits the difference between pure empiricism and pure rationalism. Bad News for Inference to the Best Explanation but Good News for the Epistemology of Science. 03:45PM - 05:45PM
Presented by :
KEVIN DAVEY, University Of Chicago I argue that thinking with good reason that a hypothesis $H$ is the best available explanation for some phenomenon does not entail that we are justified in believing $H$. Thus, inference to the best explanation does not in general give us justified belief. My argument is distinct from the so-called `bad lot' argument, revolving instead around the claim that the amount of evidence required for justifying belief in a hypothesis is typically greater than the amount of evidence required for making plausible that the hypothesis is the best of all available explanations. The Neutral Theory of Conceptual Complexity 03:45PM - 05:45PM
Presented by :
Rose Novick, University Of Washington Philosophical studies of complex scientific concepts are predominantly “adaptationist”, arguing that conceptual complexity serves important purposes. This is a historical artifact. Having had to defend their views against a monist presumption favoring simpler concepts, pluralists and patchwork theorists felt compelled to show that complexity can be beneficial. This has led to the neglect of an alternative possibility: that conceptual complexity is largely neutral, persisting simply because it does little harm. This paper defends the neutral theory of conceptual complexity in two ways: (a) as a plausible theory in its own right, and (b) as a useful foil for adaptationist arguments. Naturalness and the Forward-Looking Justification of Scientific Principles 03:45PM - 05:45PM
Presented by :
Enno Fischer, Ruhr-University Bochum, Germany It has been suggested that particle physics has reached the "dawn of the post-naturalness era." I provide an explanation of the current shift in particle physicists' attitude towards naturalness. I argue that the naturalness principle was perceived to be supported by the theories it has inspired. The potential coherence between major beyond the Standard Model (BSM) proposals and the naturalness principle led to an increasing degree of credibility of the principle among particle physicists. The absence of new physics at the Large Hadron Collider (LHC) has undermined the potential coherence and has led to the principle's loss of significance. | ||
03:45PM - 05:45PM Smithfield | Confirmation Speakers
Adrià Segarra, University Of Cambridge
Matthew Joss, University Of St Andrews
Barry Ward, University Of Arkansas, Fayetteville
Sofia Blanco Sequeiros, University Of Helsinki
Samuli Reijula, University Of Helsinki
Moderators
Jingyi Wu, University Of California, Irvine A Hybrid Understanding of Causal Inference in Comparative Group Studies 03:45PM - 05:45PM
Presented by :
Adrià Segarra, University Of Cambridge In this article I briefly introduce a Hybrid Theory of Induction (HTI) and illustrate the kind of work it can do for us. I do so by exploring a particular kind of causal inference in comparative group studies from the perspective of the HTI. I show how the HTI provides a useful common framework to understand ongoing debates, methodological guidance for practitioners and conceptual guidance in assessing the strength of our inductive inferences. A Ferocious Response to the Screening-off Thesis 03:45PM - 05:45PM
Presented by :
Matthew Joss, University Of St Andrews In this essay I examine Roche and Sober’s (R&S) thesis that explanation is evidentially irrelevant, clarify the nodal points of disagreement, and defend explanationism. To do this, I utilize William Lycan’s categories of explanationism (2002) and a distinction between per se explanatoriness and particular explanatoriness. These help show that even if there are cases where explanation identification does not raise the probability of a hypothesis, ferocious explanationism is not even in principle challenged by R&S. Further, R&S’ challenge to inference to best explanation proves too much and ultimately fails. Informational Virtues, Causal Inference, and Inference to the Best Explanation 03:45PM - 05:45PM
Presented by :
Barry Ward, University Of Arkansas, Fayetteville Frank Cabrera argues that informational explanatory virtues—specifically, mechanism, precision, and explanatory scope—cannot be confirmational virtues, since hypotheses that possess them must have a lower probability than less virtuous, entailed hypotheses. We argue against Cabrera’s characterization of confirmational virtue and for an alternative on which the informational virtues clearly are confirmational virtues. Our illustration of their confirmational virtuousness appeals to aspects of causal inference, suggesting that causal inference has a role for the explanatory virtues. We briefly explore this possibility, delineating a path from Mill’s method of agreement to Inference to the Best Explanation (IBE). Persistent evidential discordance 03:45PM - 05:45PM
Presented by :
Samuli Reijula, University Of Helsinki
Sofia Blanco Sequeiros, University Of Helsinki Replication of a finding is a sign – for some, the only sign – of scientific truth. Evidential discordance compromises truth, because discordance in scientific evidence means that a finding has not been reliably replicated. We distinguish between different types of evidential discordance, and single out persistent evidential discordance as a particularly serious problem for the epistemology of science. Building on Boyd’s (2018) notion of enriched lines of evidence, we propose a strategy for addressing persistent evidential discordance. | ||
03:45PM - 05:45PM Benedum | Philosophy of Psychology Speakers
Nina Atanasova, Lecturer, The University Of Toledo
Henry Taylor, Presenter, University Of Birmingham
Shivam Patel, Presenter , Florida State University
Devin Curry, West Virginia University
Moderators
Carrie Figdor, Session Chair, University Of Iowa The Surreality of Pain 03:45PM - 05:45PM
Presented by :
Nina Atanasova, Lecturer, The University Of Toledo I defend pain eliminativism against three recent challenges for its adequacy as a prediction of and a prescription for the fate of folk psychology in the face of mature neuroscience. While some challenges consist in showing that folk psychology is thriving in coexistence with advanced pain neuroscience, others claim that the term ‘pain’ has its utility for everyday purposes and thus should not be eliminated from commonsense vocabulary. I will show how the success of interventions for the treatment of chronic pain based on neuroscience education of chronic pain sufferers proves pain eliminativism successful both descriptively and prescriptively. What is it like to be a baby? Natural kinds and infant consciousness. 03:45PM - 05:45PM
Presented by :
Henry Taylor, Presenter, University Of Birmingham Studying consciousness in prelinguistic infants presents a challenge. We cannot ask them what they saw, and they cannot understand complex task instructions. This paper offers an optimistic methodology for studying infant consciousness, by drawing on philosophical work concerning natural kinds. I argue that this methodology is scientifically realistic. I also use it to interpret recent neuroscientific results concerning conscious perception in infants. Cognitive Explanation, Simulation, and Delusion: Through the Lens of Anti-Realism about Thought Insertion 03:45PM - 05:45PM
Presented by :
Shivam Patel, Presenter , Florida State University Conflicting accounts of thought insertion share the assumption of realism: that the subject of thought insertion has a thought that corresponds to the description of her thought insertion episode. I argue against realism on the grounds that we should adopt a fictionalist, anti-realist interpretation of first-person thought insertion discourse. I then offer an anti-realist account of thought insertion, according to which sufferers merely simulate having a thought with certain properties. This alternative forces us to reconsider whether cognitive explanations of schizophrenia symptoms must be intelligible, and provides for a novel view of delusion. Scientific Psychology for Folk Craft 03:45PM - 05:45PM
Presented by :
Devin Curry, West Virginia University A comprehensive ontology of mind includes some mental phenomena that are neither (a) explanatorily fecund posits in any branch of cognitive science that aims to unveil the mechanistic structure of cognitive systems nor (b) ideal (nor even progressively closer to ideal) posits in any given folk psychological practice. Indeed, one major function of scientific psychology has been (and will be) to introduce just such (normatively sub-optimal but real) mental phenomena into folk psychological taxonomies. The development and public dissemination of IQ testing over the course of the 20th Century is a case in point. | ||
06:00PM - 07:45PM Kings 5 | PSA Awards & Presidential Address | ||
08:00PM - 09:00PM Kings 3, 4 | PSA Closing Reception |
Day 4, Nov 13, 2022 | |||
08:30AM - 12:00 Noon Kings Terrace | Nursing Room | ||
08:30AM - 12:00 Noon Kings Plaza | Childcare Room | ||
08:30AM - 12:00 Noon Kings Garden 1, 2 | Book Exhibit | ||
09:00AM - 11:45AM Fort Pitt | The New Demarcation Problem Speakers
Ty Branch, Presenting Author, Institut Jean Nicod
Bennett Holman, Presenter, Yonsei University
Janet Kourany, Presenting Author, Univeristy Of Notre Dame
Philip Kitcher, Presenting Author, Columbia University
Moderators
Seán Muller, University Of Johannesburg This symposium considers various aspects of "The New Demarcation Problem" (i.e., distinguishing between legitimate and illegitimate uses of values in science). Ty Branch and Heather Douglas argue that a successful solution must also renegotiate the scientific social contract. Bennett Holman explores what can (and cannot) be taken from the Popperian demarcation problem into debates about values. Janet Kourany makes the case that anti-racist values point the way to a more general standard to legitimate values in science. Philip Kitcher argues that to judge scientific decisions, we must understand how different motivations might contribute to or detract from human progress. The Scientist, qua Scientist, is an Ethical Agent 09:00AM - 11:45AM
Presented by :
Philip Kitcher, Presenting Author, Columbia University The sciences make progress through inquiries that address human problems. Many of those problems are practical, although some arise from detached curiosity. I think of this progress as pragmatic (Kitcher 2017): improving problematic situations, rather than aiming towards some goal (e.g. the fundamental laws of the universe). The identification of a problem depends on value judgments: a genuine problem is one in which valuable aims are blocked. As Heather Douglas and Torsten Wilholt have argued cogently (Douglas 2009, Wilholt 2009), value judgments play roles in individual scientific decisions (and in the social practices that coordinate such decisions). Some of those judgments are, as many defenders of the value-free ideal have recognized, open to devastating objections. Scientists who publish and campaign for conclusions, on the basis of skimpy evidence, moved by a desire to advance their careers, are rightly condemned; so too are collective decisions, motivated by the wish to advance some disputed cause. Thus, there arises “the new demarcation problem” (Holman & Wilholt, 2022). What values properly play a role, in setting the research agenda, in accepting and broadcasting alleged discoveries, and in instituting and refining the social structures in which the research of a scientific community is embedded? An obvious suggestion: actions in the practice of science are subject to ethical constraints – just as other human behavior is. To judge some change in scientific practice as replacing an ethically dubious value judgment by one that is endorsed across the human population justifies that change as progressive. After all, few people have qualms about viewing the diminution of cruelty to animals as justified. Yet science can also make progress through an inquiry into values, one that exposes the rationale for an ethical stance and broadens acceptance of it. That inquiry can strengthen the justification for value judgments deployed in the sciences. Further, ethical inquiry can also amend value judgments, so that introducing progressive ethical changes into scientific practice yields another justified judgment of scientific progress. When is an ethical change progressive? Ethical changes are justified when they respond to situations justifiably counted as problematic, in ways that are justifiably viewed as solving those problems (typically partially). Justification accrues from our best efforts to follow a procedure, involving deliberation among representatives of all those affected by the problem, employing the best available information, and striving for a solution all can accept (Kitcher 2021). Well-ordered science should be seen as an ideal, not in specifying a goal, but as a diagnostic tool for identifying and addressing problematic situations. Even though it is sometimes, perhaps often, absurd to think in terms of consensus, well-ordered science can diagnose progress in ethical inquiry. That’s enough. 
References:
Holman, Bennett, and Torsten Wilholt (2022). “The New Demarcation Problem”. Studies in History and Philosophy of Science, 91, 211-220.
Douglas, Heather (2009). Science, Policy, and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.
Kitcher, Philip (2017). “Social Progress”. Social Philosophy and Policy, 34(2), 46-65.
Kitcher, Philip (2021). Moral Progress. New York: Oxford University Press.
Wilholt, Torsten (2009). “Bias and Values in Science”. Studies in History and Philosophy of Science Part A, 40, 90-101.
The New Demarcation Problem and its Relevance to Race 09:00AM - 11:45AM
Presented by :
Janet Kourany, Presenting Author, Univeristy Of Notre Dame The demarcation problem has been one of the most important problems in philosophy of science for centuries. Still, the problem has never been solved. The failures, in fact, have been so numerous and so diverse and they have gone on for so long that Larry Laudan issued a death warrant for the problem decades ago. Who would have thought, then, that the demarcation problem would arise again, but with none of its old challenges?! True, the new version is more modest than the old: it seeks only to distinguish legitimate from illegitimate value influences in science, not legitimate from illegitimate science. No matter. That modesty enables successful demarcations to be more readily provided. After all, modern science right from the start was billed as a way—indeed, the way—to improve the lot of humanity and make the world a better place (elsewhere I have called this “Bacon’s promise”). Which value influences are the legitimate ones should thus be easy enough to settle: they are simply those that actually yield this happy outcome. Of course, the devil is in the details. So, consider one of the most pressing societal issues of our time, the structural racism that continues to oppress minority groups in the U.S. and elsewhere. Focus especially on Blacks/African Americans in the U.S. In this case the sad truth is that science, far from making things better as it was supposed to do, has for centuries made things worse. The illegitimate values that helped to produce this result included, of course, the racist values that shaped so much of the social and biological research that was done. And to an alarming degree this research is still being done. But these illegitimate values also included the racist values that continue to shape so much of the important research that is not being done, the research that would be helpful to Blacks if it were done. This is the agnotological part of the story, and it includes, as well, the failure to encourage and support the Black scientists and would-be scientists who would most likely do that research. But what of the legitimate values, the values that would produce the flourishing of Blacks? This was, remember, the outcome of science that was supposed to occur. Such legitimate values clearly include anti-racist—egalitarian—values. For it is these values that motivate and shape the critiques and corrections of the past and present science that is harmful to Blacks. Still, these legitimate values have to include quite a bit more than anti-racist values. For, critiques and corrections, as crucial as they are, only help to control the damage to Blacks that racist science causes. They don’t go the extra distance to produce the flourishing of Blacks. What else is needed is a question Black scientists have been exploring. I will suggest that an analogous question arises within feminist science studies although it has not been recognized. This is just one of the interesting issues that the new demarcation problem brings to light. The Ecosystem of the VFI and its Role in the New Demarcation Problem 09:00AM - 11:45AM
Presented by :
Ty Branch, Presenting Author, Institut Jean Nicod The general consensus amongst philosophers is that values play an integral role in scientific inquiry. This requires a reorientation towards delineating legitimate from illegitimate values in science, or the new demarcation problem. Although it has been argued that alternatives to the value-free ideal (VFI) for science should at least serve the same purposes as the ideal given that some of its aims remain desirable, redrawing the boundaries between acceptable and inappropriate values does not address how the VFI is interwoven into institutional structures that support science. Viable VFI replacements will have to work within or reconstruct these pillars in order to succeed it. At the height of the VFI's influence (immediately after WWII) the VFI relied on the linear model for science funding. Public funds invested in basic (or pure) scientific research, with oversight and distribution by scientists, financed basic research in academic institutions. The findings from research would then be taken up by scientists working in privately funded labs who could produce public goods. Under this model, scientists interested in basic science could not be held accountable for the societal impact of their work. The VFI thrived because scientists were absolved from considering the social or ethical implications of their work, which separated science from society. Scientists were simultaneously called to advise policy-makers where they were able to occupy political spaces and maintain their independence by providing advice without the responsibility of making any decisions themselves. The science advisor’s role was to protect the integrity of science, respect for scientific knowledge and the institutions that housed it, rather than to consider the public impact of the decisions. Instead politicians were seen as morally responsible and accountable for the consequences of scientifically informed decisions. At the same time, an increasingly technical society was believed to impair the public’s ability to make informed decisions. Addressing this worry became the role of science communicators who helped to develop concepts like science literacy— or a familiarity with science thought to be desirable for the overall well-being of individuals and the state. Science literacy was cultivated through the deficit model which described the public as homogeneously ignorant of science and unable to engage it directly. Science understood as something central to civic life meant creating a foundation of scientific understanding. Even if science literacy could include more than scientific facts, in practice science literacy required information about science to be neatly encapsulated, evaluated and communicated. Science educators developed the consensus view, a focus on scientific information with the most agreement, and emphasised the empirical findings of science. As values remain a contentious area in science, they are not taught under the consensus view, strengthening the VFI. In sum, the linear model, independent science advisor, science communication and science education have reinforced the VFI and challenge progress towards dismantling and establishing alternative ideals for science. For any replacement to be successful, it will have to reimagine how science interfaces with these institutions or construct new institutions to support science altogether. Demarcation Problems Old and New 09:00AM - 11:45AM
Presented by :
Bennett Holman, Presenter, Yonsei University Parts of the politically conservative bloc in the United States have a long history of “science denialism”. As a means to explore the nature of the New Demarcation Problem (Holman and Wilholt, 2022) and its relation to the original Popperian demarcation problem, this paper considers an example of each. The first is the movement to undermine the status of Darwinian evolution as the scientific explanation of human origins. The second is the “sound science” movement, which has sought to challenge a significant amount of the science that undergirds environmental regulation. As a means of explaining what is wrong with creationism, multiple philosophers have framed the issues in terms of the Popperian Demarcation Problem (e.g., Kitcher 1983). Drawing on such work, Sven Ove Hansson (2017) has compared and categorized both creationism and conservative critiques of environmental science as pseudoscience. However, while some of the rhetorical tactics are shared, I will argue that a closer analysis of the “sound science” movement (e.g., Milloy 2016) reveals—at least on some occasions—a rejection of science that is infused with and/or dependent on a set of values that he (and fellow conservatives) do not share (e.g., in commenting on his shaping of the Trump EPA policy, Milloy averred: “I do have a bias. I’m all for the coal industry, the fossil fuel industry. Wealth is what makes people happy, not pristine air”, quoted in Kormann 2018). Ultimately, I argue Hansson is right only in part. Both debates share a number of features (e.g., a struggle over the cultural authority of science) and are rightfully both categorized as problems of demarcation. However, the nature of the debate is importantly different. Whereas creation science can be seen as an example of wishful thinking, in some environmental cases conservatives are rejecting science that fails to be “value-apt”, a rational response to science infused with unshared values (John 2019). This distinction illustrates an important difference between the new and old demarcation problems. Whereas the Popperian Demarcation Problem concerns what makes inquiry scientific, the New Demarcation Problem turns on an account of the proper role for science to play in a liberal democracy and what science must be like to be able to fulfill that role.
References:
Hansson, S. O. (2017). Science denial as a form of pseudoscience. Studies in History and Philosophy of Science Part A, 63, 39-47.
Holman, B., & Wilholt, T. (2022). The new demarcation problem. Studies in History and Philosophy of Science, 91, 211-220.
John, S. (2019). Science, truth and dictatorship: Wishful thinking or wishful speaking? Studies in History and Philosophy of Science Part A, 78, 64-72.
Kitcher, P. (1983). Abusing science: The case against creationism. MIT Press.
Kormann, C. (2018). Scott Pruitt's crusade against “secret science” could be disastrous for public health. The New Yorker, 26.
Milloy, S. J. (2001). Junk science judo: Self-defense against health scares & scams. Cato Institute.
Popper, K. (1963). Conjectures and refutations: The growth of scientific knowledge. Routledge. | ||
09:00AM - 11:45AM Benedum | Multiplicity, Data-Dredging, and Error Control Speakers
Deborah Mayo, Speaker, Virginia Tech
James Berger, Symposiast, Duke University
Conor Mayo-Wilson, Symposiast, University Of Washington
Clark Glymour, Symposiast, Carnegie Mellon
Suzanne Thornton, Symposiast, Swarthmore College
Moderators
Sander Beckers, Poster Presenter, Moderator, Question-asker, University Of Tübingen High-powered methods, the big data revolution, and the crisis of replication in medicine and the social sciences have prompted new reflections and debates in both statistics and philosophy about the role of traditional statistical methodology in current science. Experts do not agree on how to improve reliability, and these disagreements reflect philosophical battles--old and new--about the nature of inductive-statistical evidence and the roles of probability in statistical inference. We consider three central questions: • How should we cope with the fact that data-driven processes, multiplicity, and selection effects can invalidate a method's control of error probabilities? • Can we search non-experimental data for causal relationships and also use that same data to reliably test them? • Can a method's error probabilities both control the method's performance and give a relevant epistemological assessment of what can be learned from data? As reforms to methodology are being debated, constructed or (in some cases) abandoned, the time is ripe to bring the perspectives of philosophers of science (Glymour, Mayo, Mayo-Wilson) and statisticians (Berger, Thornton) to reflect on these questions. Error Control and Severity 09:00AM - 11:45AM
Presented by :
Deborah Mayo, Speaker, Virginia Tech I put forward a general principle for evidence: an error-prone claim C is warranted to the extent it has been subjected to, and passes, an analysis that very probably would have found evidence of flaws in C just if they are present. This probability is the severity with which C has passed the test. When a test’s error probabilities quantify the capacity of tests to probe errors in C, I argue, they can be used to assess what has been learned from the data about C. A claim can be probable or even known to be true, yet poorly probed by the data and model at hand. The severe testing account leads to a reformulation of statistical significance tests: Moving away from a binary interpretation, we test several discrepancies from any reference hypothesis and report those well or poorly warranted. A probative test will generally involve combining several subsidiary tests, deliberately designed to unearth different flaws. The approach relates to confidence interval estimation, but, like confidence distributions (CD) (Thornton), a series of different confidence levels is considered. A 95% confidence interval method, say using the mean M of a random sample to estimate the population mean μ of a Normal distribution, will cover the true, but unknown, value of μ 95% of the time in a hypothetical series of applications. However, we cannot take .95 as the probability that a particular interval estimate (a ≤ μ ≤ b) is correct—at least not without assigning a prior probability to μ. In the severity interpretation I propose, we can nevertheless give an inferential construal post-data, while still regarding μ as fixed. For example, there is good evidence μ ≥ a (the lower estimation limit) because if μ < a, then with high probability .95 (or .975 if viewed as one-sided) we would have observed a smaller value of M than we did. Likewise for inferring μ ≤ b. To understand a method’s capability to probe flaws in the case at hand, we cannot just consider the observed data, unlike in strict Bayesian accounts. We need to consider what the method would have inferred if other data had been observed. For each point μ’ in the interval, we assess how severely the claim μ > μ’ has been probed. I apply the severity account to the problems discussed by earlier speakers in our session. The problem with multiple testing (and selective reporting), when attempting to distinguish genuine effects from noise, is not merely that it would, if regularly applied, lead to inferences that were often wrong. Rather, it renders the method incapable, or practically so, of probing the relevant mistaken inference in the case at hand. In other cases, by contrast (e.g., DNA matching), the searching can increase the test’s probative capacity. In this way the severe testing account can explain competing intuitions about multiplicity and data-dredging, while blocking inferences based on problematic data-dredging. (A short numerical sketch of this severity calculation for the Normal-mean example appears after this session listing.) The Duality of Parameters and the Duality of Probability 09:00AM - 11:45AM
Presented by :
Suzanne Thornton, Symposiast, Swarthmore College Under any inferential paradigm, statistical inference is connected to the logic of probability. Well-known debates among these various paradigms emerge from conflicting views on the notion of probability. One dominant view understands the logic of probability as a representation of variability (frequentism), and another prominent view understands probability as a measurement of belief (Bayesianism). The first camp generally describes model parameters as fixed values, whereas the second camp views parameters as random. Just as calibration (Reid and Cox 2015, “On Some Principles of Statistical Inference,” International Statistical Review 83(2), 293-308)--the behavior of a procedure under hypothetical repetition--bypasses the need for different versions of probability, I propose that an inferential approach based on confidence distributions (CD), which I will explain, bypasses the analogous conflicting perspectives on parameters. Frequentist inference is connected to the logic of probability through the notion of empirical randomness. Sample estimates are useful only insofar as one has a sense of the extent to which the estimator may vary from one random sample to another. The bounds of a confidence interval are thus particular observations of a random variable, where the randomness is inherited from the random sampling of the data. For example, 95% confidence intervals for parameter θ can be calculated for any random sample from a Normal N(θ, 1) distribution. With repeated sampling, approximately 95% of these intervals are guaranteed to cover the fixed value of θ. Bayesian inference produces a probability distribution for the different values of a particular parameter. However, the quality of this distribution is difficult to assess without invoking an appeal to the notion of repeated performance. Generating a credible interval for θ from data observed from a N(θ, 1) distribution requires an assumption about the plausibility of different possible values of θ; that is, one must assume a prior. However, depending on the context - is θ the recovery time for a newly created drug? or is θ the recovery time for a new version of an older drug? - there may or may not be an informed choice for the prior. Without appealing to the long-run performance of the interval, how is one to judge a 95% credible interval [a, b] versus another 95% interval [a', b'] based on the same data but a different prior? In contrast to a posterior distribution, a CD is not a probabilistic statement about the parameter; rather, it is a data-dependent estimate for a fixed parameter for which a particular behavioral property holds. The Normal distribution itself, centered around the observed average of the data (e.g. average recovery times), can be a CD for θ. It can give any level of confidence. Such estimators can be derived through Bayesian or frequentist inductive procedures, and any CD, regardless of how it is obtained, guarantees performance of the estimator under replication for a fixed target, while simultaneously producing a random estimate for the possible values of θ. Good Data Dredging 09:00AM - 11:45AM
Presented by :
Clark Glymour, Symposiast, Carnegie Mellon "Data dredging"--searching non-experimental data for causal and other relationships and taking that same data to be evidence for those relationships--was historically common in the natural sciences--the works of Kepler, Cannizzaro and Mendeleev are examples. Nowadays, "data dredging"--using data to bring hypotheses into consideration and regarding that same data as evidence bearing on their truth or falsity--is widely denounced by both philosophical and statistical methodologists. Notwithstanding, "data dredging" is routinely practiced in the human sciences using "traditional" methods--various forms of regression, for example. The main thesis of my talk is that, in the spirit and letter of Mayo's and Spanos’ notion of severe testing, modern computational algorithms that search data for causal relations severely test their resulting models in the process of "constructing" them. My claim is that in many investigations, principled computerized search is invaluable for reliable, generalizable, informative, scientific inquiry. The possible failures of traditional search methods for causal relations, multiple regression for example, are easily demonstrated by simulation in cases where even the earliest consistent graphical model search algorithms succeed. (A toy simulation of one such failure appears after this session listing.) In real scientific cases in which the number of variables is large in comparison to the sample size, principled search algorithms can be indispensable. I illustrate the first claim with a simple linear model, and the second claim with an application of the oldest correct graphical model search, the PC algorithm, to genomic data followed by experimental tests of the search results. The latter example, due to Stekhoven et al. ("Causal Stability Ranking," Bioinformatics, 28 (21), 2819-2823), involves identification of (some of the) genes responsible for bolting in A. thaliana from among more than 19,000 coding genes, using as data the gene expressions and time to bolting from only 47 plants. I will also discuss Fast Causal Inference (FCI), which gives asymptotically correct results even in the presence of confounders. These and other examples raise a number of issues about using multiple hypothesis tests in strategies for severe testing, notably the interpretation of standard errors and confidence levels as error probabilities when the structures assumed in parameter estimation are uncertain. Commonly used regression methods, I will argue, are bad data dredging methods that do not severely, or appropriately, test their results. I argue that various traditional and proposed methodological norms, including pre-specification of experimental outcomes and error probabilities for regression estimates of causal effects, are unnecessary or illusory in application. Statistics wants a number, or at least an interval, to express a normative virtue: the value of data as evidence for a hypothesis, how well the data push us toward the true or away from the false. Such a number is good when you can get it, but there are many circumstances in which you have evidence even though there is no number or interval to express it, other than phony numbers with no logical connection to truth guidance. Kepler, Darwin, Cannizzaro, and Mendeleev had no such numbers, but they severely tested their claims by combining data dredging with severe testing. Bamboozled by Bonferroni 09:00AM - 11:45AM
Presented by :
Conor Mayo-Wilson, Symposiast, University Of Washington When many statistical hypotheses are tested simultaneously (e.g., when searching for genes associated with a disease), some statisticians recommend “correcting” classical hypothesis tests to avoid inflation of the false positive rate. I defend three theses. First, such “corrections” have no plausible evidential interpretation. Second, examples motivating the use of correction factors often encourage readers to conflate (a) conditional independence of the data given the hypotheses/parameters, with (b) unconditional independence of the hypotheses/parameters. Finally, correction factors are better construed as decision-theoretic devices that reflect the experimenter's (or the discipline's) value judgments concerning the conditions under which, after a round of testing, a hypothesis should be pursued/researched further. The standard argument that one should correct for multiple tests goes as follows. When many hypotheses are tested at a fixed significance level (e.g., 5%), there is a high chance that at least one hypothesis will be rejected, even if all hypotheses are true. (The arithmetic behind this argument is sketched after this session listing.) Thus, a single significant result is not evidence that at least one of the hypotheses is false. Nor is the rejection of a specific hypothesis H *evidence* against H; instead, we should lower the significance level to reduce the chance of false positives. That argument, I claim, requires one to abandon at least one of two axioms about evidence: Axiom 1: If one has evidence for a hypothesis H and one deduces a trivial logical consequence H' from H, then one has evidence for H'. Axiom 2 (No evidential loss on ancillary information): If one has evidence for H, then one's evidence for H cannot be weakened by observing data whose distribution would be the same, whether H is true or not. To illustrate the first axiom, suppose Philip Morris's CEO has evidence that smoking causes lung cancer and deduces that smoking causes *some* harm. Then the CEO comes to have evidence that smoking causes some harm. To illustrate the second, if one has evidence that one's oven is currently 350F, then one cannot lose that evidence by learning that corn prices dropped in 1972: past corn prices do not vary with one's current oven temperature. The standard argument requires one to abandon one of those two axioms. For the probabilistic calculations underlying the standard argument do not depend on whether (i) the many hypotheses being tested are evidentially related or (ii) the tests are conducted at the same or at distinct times. Giving up either axiom would require us to radically revise the importance we attribute to statistical evidence in scientific and legal settings. Giving up Axiom 1 would entail that Philip Morris could possess evidence that smoking causes lung cancer without having evidence that smoking causes harm; we would need separate criminal statutes for every type of malady that might be caused by drugs. Giving up Axiom 2 entails that Philip Morris could weaken its evidence for the hypothesis that smoking causes lung cancer by conducting a sufficiently large number of other, irrelevant statistical tests. Controlling for Multiplicity in Science 09:00AM - 11:45AM
Presented by :
James Berger, Symposiast, Duke University A problem that is common to many sciences is that of having to deal with a multiplicity of statistical inferences. For instance, in GWAS (Genome Wide Association Studies), an experiment might consider 20 diseases and 100,000 genes, and conduct statistical tests of the 20 x 100,000 = 2,000,000 null hypotheses of no association between a specific disease and a specific gene. The issue is that selective reporting of only the ‘highly significant’ results could lead to many claimed disease/gene associations that turn out to be false, simply because of statistical randomness. In 2007, the seriousness of this problem was recognized in GWAS and extremely stringent standards were employed to resolve it. Indeed, it was recommended that tests for association should be conducted at an error probability of 5 x 10^-7. Particle physicists similarly learned that a discovery would be reliably replicated only if the p-value of the relevant test was less than 5.7 x 10^-7. This was because they had to account for a huge number of multiplicities in their analyses. Other sciences have continuing issues with multiplicity. In the Social Sciences, p-hacking and data dredging are common, which involve multiple analyses of data. Stopping rules in social sciences are often ignored, even though it has been known since 1933 that, if one keeps collecting data and computing the p-value, one is guaranteed to obtain a p-value less than 0.05 (or, indeed, any specified value), even if the null hypothesis is true. In medical studies that occur with strong oversight (e.g., by the FDA), control for multiplicity is mandated. There is also typically a large amount of replication, resulting in meta-analysis. But there are many situations where multiplicity is not handled well, such as subgroup analysis: one first tests for an overall treatment effect in the population; failing to find that, one tests for an effect among men or among women; failing to find that, one tests for an effect among old men or young men, or among old women or young women; and so on. I will argue that there is a single method that can address any such problems of multiplicity: Bayesian analysis, with the multiplicity being addressed through the choice of prior probabilities of hypotheses. In GWAS, scientists assessed the chance of a disease/gene association at 1/100,000, meaning that each null hypothesis of no association would be assigned a prior probability of 1 - 1/100,000. Only tests yielding p-values less than 5 x 10^-7 would be able to overcome this strong initial belief in no association. (A small worked example of this prior-odds arithmetic appears after this session listing.) In subgroup analysis, the set of possible subgroups under consideration can be expressed as a tree, with probabilities being assigned to differing branches of the tree to deal with the multiplicity. There are, of course, also frequentist error approaches (such as Bonferroni and FDR) for handling multiplicity of statistical inferences; indeed, these are much more familiar than the Bayesian approach. These are, however, targeted solutions for specific classes of problems and are not easily generalizable to new problems. | ||
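A short numerical sketch of the severity assessment described in the Mayo abstract above, for the Normal-mean example it uses; the sample size, standard deviation, and observed mean are invented for illustration.

```python
from math import sqrt
from statistics import NormalDist

def severity_mu_greater_than(mu_prime, m_obs, sigma, n):
    """SEV(mu > mu'): probability of observing a sample mean smaller than m_obs
    if mu were exactly mu', under a Normal model with known sigma."""
    standard_error = sigma / sqrt(n)
    return NormalDist().cdf((m_obs - mu_prime) / standard_error)

# Invented data: n = 25 observations, known sigma = 1, observed sample mean 0.4.
m_obs, sigma, n = 0.4, 1.0, 25
for mu_prime in [0.0, 0.1, 0.2, 0.3, 0.4]:
    sev = severity_mu_greater_than(mu_prime, m_obs, sigma, n)
    print(f"SEV(mu > {mu_prime:.1f}) = {sev:.3f}")

# The lower limit of the two-sided 95% interval is m_obs - 1.96*sigma/sqrt(n) ~= 0.008,
# and SEV(mu > 0.008) ~= 0.975: the claim that mu exceeds the lower limit is well probed,
# while a claim such as mu > 0.4 is poorly probed (severity 0.5).
```

Scanning mu' across a range of values in this way also yields the series of different confidence levels that the abstract links to confidence distributions.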
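A toy simulation in the spirit of the regression failures mentioned in the Glymour abstract above; the causal structure and all numbers are invented, and the constraint-based step is only gestured at via partial correlations rather than a full PC implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented true structure: X and Y are causally unrelated, and both cause Z
# (a "collider"):  X -> Z <- Y.
X = rng.standard_normal(n)
Y = rng.standard_normal(n)
Z = X + Y + rng.standard_normal(n)

# Regression-style "data dredging": regress Y on X while "controlling for" Z.
design = np.column_stack([np.ones(n), X, Z])
coefs, *_ = np.linalg.lstsq(design, Y, rcond=None)
print("OLS coefficient on X:", round(coefs[1], 3))   # ~ -0.5, although X has no effect on Y

# The independence pattern a constraint-based search (e.g., PC) exploits instead:
def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

r_xy, r_xz, r_yz = corr(X, Y), corr(X, Z), corr(Y, Z)
partial_xy_given_z = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))
print("corr(X, Y)             :", round(r_xy, 3))                # ~ 0    (marginally independent)
print("partial corr(X, Y | Z) :", round(partial_xy_given_z, 3))  # ~ -0.5 (dependent given Z)
# Marginal independence plus dependence conditional on Z is the signature that leads a
# constraint-based search to treat Z as a common effect and to posit no X-Y edge.
```

Conditioning on the common effect Z manufactures an X-Y dependence that is not there, whereas the pattern of (conditional) independencies points a graphical search toward the correct structure.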
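The arithmetic behind the standard argument reconstructed (and then questioned) in the Mayo-Wilson abstract above: with m independent tests of true null hypotheses at level alpha, the chance of at least one rejection grows rapidly, and testing at level alpha/m (Bonferroni) holds it near alpha. A minimal sketch:

```python
# Family-wise error rate for m independent tests of true nulls at level alpha,
# with and without a Bonferroni correction (illustrative numbers only).
alpha = 0.05
for m in [1, 10, 100, 1000]:
    uncorrected = 1 - (1 - alpha) ** m
    bonferroni = 1 - (1 - alpha / m) ** m
    print(f"m = {m:4d}   P(at least one false positive): "
          f"{uncorrected:.3f} uncorrected, {bonferroni:.3f} at level alpha/m")
# m = 10   -> 0.401 uncorrected vs 0.049 corrected
# m = 1000 -> ~1.000 uncorrected vs ~0.049 corrected
```

Whether that behavioral fact carries the evidential weight usually assigned to it is precisely what the abstract disputes.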
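A small worked example of the prior-odds reasoning in the Berger abstract above, using the GWAS-style prior probability of 1/100,000 that it mentions. To turn a p-value into a bound on the posterior, the sketch borrows the -e * p * ln(p) upper bound on the Bayes factor against a point null (Sellke, Bayarri and Berger 2001), which is not itself stated in the abstract; all numbers are illustrative.

```python
from math import e, log

def max_posterior_prob_of_association(p_value, prior_prob=1e-5):
    """Upper bound on the posterior probability of a real association, combining a
    skeptical prior with the -e*p*ln(p) bound on the Bayes factor against the null
    (valid for p_value < 1/e)."""
    bf_against_null = 1 / (-e * p_value * log(p_value))    # upper bound on the Bayes factor
    prior_odds = prior_prob / (1 - prior_prob)              # ~ 1e-5 under the GWAS-style prior
    posterior_odds = bf_against_null * prior_odds
    return posterior_odds / (1 + posterior_odds)

for p in [0.05, 1e-4, 5e-7]:
    bound = max_posterior_prob_of_association(p)
    print(f"p = {p:g}:  posterior probability of association <= {bound:.3g}")
# p = 0.05  -> <= ~2.5e-05  (a nominally 'significant' result is almost certainly noise)
# p = 5e-7  -> <= ~0.34     (only very small p-values begin to overcome the skeptical prior)
```

Under these illustrative assumptions, even a p-value at the 5 x 10^-7 threshold only raises the bound on the posterior probability to roughly one third, one way of seeing why such stringent thresholds were adopted.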
09:00AM - 11:45AM Birmingham | Interpreting Theories of Modified Gravity Speakers
Helen Meskhidze, University Of California, Irvine
Mark Trodden, Presenting Author, University Of Pennsylvania
Patrick Duerr, Presenting Author, Hebrew University Of Jerusalem
Niels Martens, Presenting Author, University Of Bonn
Moderators
Jessica Gonzalez, Graduate Student, UC Irvine Alternatives to General Relativity (GR) are often superficially similar to GR itself, leading some physicists to take for granted that their shared structure has the same interpretation in all cases. However, following Brown (2005), several philosophers have argued that such superficial similarities between GR and modified theories of gravity do not necessarily indicate a commitment to shared background structures. This symposium proposes to investigate how such differences in interpretation arise in the case of various theories of modified gravity. To do so, we focus on identifying and interpreting the ontological commitments of theories of modified gravity. Some of the questions considered include: How are matter and geometry represented in each theory? (How) can theories of modified gravity be distinguished from theories with dark matter and/or dark energy? In addition to considering theories in isolation to understand their ontological commitments, we will also consider how similar terms operate across different theories of gravity and any relations that may exist between theories. Theoretical Challenges to Modifying Gravity in Cosmology 09:00AM - 11:45AM
Presented by :
Mark Trodden, Presenting Author, University Of Pennsylvania I will discuss attempts to modify Einstein’s theory of General Relativity to explain current observational puzzles in cosmology. Focusing on the late-time acceleration of the universe, I will discuss guiding principles in modifying General Relativity, the theoretical issues that arise, and the fundamental problem of distinguishing such approaches from dark energy in the context of a modern effective field theory approach. The Spacetime-Matter Distinction 09:00AM - 11:45AM
Presented by :
Niels Martens, Presenting Author, University Of Bonn The tradition of a strict conceptual dichotomy between space(time) and matter--all entities and structures in our universe are to be categorised and conceptualized as either spacetime or matter, never both, never neither--originates with Democritus’ atomism--everything in our universe is ultimately reducible to either atoms (matter) or void (space)--and has reigned supreme ever since Newton. The framework of Newtonian mechanics typically includes a collection of point particles (representing e.g. the planets) that obey an action-reaction principle, carry energy and have mass, as well as a static, immutable space, which was often thought of as the arena or theatre in which the play performed by the planets unfolds. This picturesque way of thinking about Newtonian space is echoed by the famous container metaphor according to which space is conceived of as a container for matter, i.e. the contained (Sklar, 1974). Although this strict conceptual dichotomy did make a lot of sense in the context of our pre-20th-century worldview, this paper contends that it is no longer tenable, and even a hindrance to further progress. More precisely, each of the main ingredients--General Relativity, inflation, dark matter and dark energy--of our highly-successful and well-established standard model of cosmology that was developed over the course of the 20th century puts pressure on the outdated Newtonian idea that the space(time) and matter concepts can and should be strictly distinguished. This paper focuses on 1) comparing dark matter to its modified gravity alternatives, as well as 2) comparing various models of dark energy, including a cosmological constant and modified gravity alternatives. Dark energy is typically referred to as the intrinsic energy of spacetime, but ‘carrying energy’ is also a paradigmatic property of matter--some models even associate a mass with dark energy. Then again, the simplest interpretation of dark energy as a cosmological constant suggests that it has to do with the nomological structure rather than the gravitational/spacetime structure or the matter content of the universe. This paper analyses the various senses in which dark matter and dark energy and the respective modified gravity alternatives suggest a breakdown of the traditional spacetime-matter dichotomy. It furthermore investigates the consequences of these breakdowns for the philosophical debate between substantivalism and relationalism about spacetime. To the extent that dark matter and dark energy are not pure spacetime or pure matter but a mixture of both, the container metaphor clearly makes no sense anymore--what would it even mean for these entities to be both the container and the contained at once--and hence the traditional substantivalist and relationalist positions do not straightforwardly apply to theories including these entities (contra Baker, 2005). Revisiting the Foundations of Teleparallel Gravity--Geometrisation, Gauge Structure, Conventionalism 09:00AM - 11:45AM
Presented by :
Patrick Duerr, Presenting Author, Hebrew University Of Jerusalem My talk will revisit the foundations of Teleparallel Gravity (TPG), an alternative theory of gravity, observationally indistinguishable from General Relativity (GR). In contrast to the latter, gravity in TPG isn’t conceptualised as a manifestation of spacetime curvature. Instead, TPG’s gravitational degrees of freedom appear to be encoded in a suitable--the so-called Weitzenbock--connection’s torsion (a salient feature of non-Riemannian geometries in virtue of which parallelograms formed by parallel-transported vectors fail to close). In the first part of my talk I shall try to carefully reconstruct TPG’s conceptual structure and interpretation. For this, it will prove useful to articulate the sense in which TPG--but arguably not GR--can be said to be a gauge theory, akin to (yet not exactly the same as) classical Yang-Mills theories. This will clarify the status of TPG’s spacetime structure in particular. On one common view, TPG’s spacetime structure is that of a Weitzenbock spacetime: the spacetime’s structure is supposed to be that of a manifold, endowed with a Weitzenbock connection. Does this position do justice to TPG? A related, and often simultaneously endorsed, claim purports that whereas GR geometrises gravity, TPG does not: the latter is a force theory; that is, in TPG gravity remains a force. How do these two views go together? Does the failure to geometrise gravity in the manner exemplified by GR indeed imply, as seems to be typically assumed in the literature, that in TPG gravity is a force? More fine-grained taxonomies of degrees of geometrisation render this questionable. Or should we--yet another position one finds in the literature--regard TPG merely as a notational variant of GR, an alternative representation of the same theory, with all differences solely pertaining to matters of mathematical form, not to physical content? In the second part of my talk, I shall critically examine conceptual advantages (both inherent ones and advantages over GR) with which TPG tends to be touted, such as its separation of gravity and inertia, or the fact that it admits of a gravitational energy-stress tensor with a well-defined status. While I urge that TPG be taken seriously, my analysis regarding its alleged superiority over GR will be largely deflationary. An exception is the coherence of principles that TPG achieves via its gauge theoretical structure. The third and final part of my talk will draw some broader philosophical lessons from my results. In particular, I shall draw attention to the relevance of TPG to the standing of conventionalism about (spacetime) geometry--a philosophical stance that, to my mind, deserves a place at the table of the Modified Gravity/Dark Energy debate. A Classical Spacetime Model with Torsion 09:00AM - 11:45AM
Presented by :
Helen Meskhidze, University Of California, Irvine Comparisons of gravitational theories and the structures they posit have a long and fruitful history in the philosophy of physics literature. Studying the relation between General Relativity (GR) and Newton-Cartan theory (NCT), for example, has been a valuable means to deepen our understanding of the ontology and structures each theory posits. Similarly, investigation of the relation between GR and its modified gravity counterparts has been of recent interest (see, e.g., Knox (2011) for a comparison of GR and Teleparallel Gravity, a relativistic theory of gravity that allows for non-vanishing torsion, as well as Duerr (2021) for a comparison of GR and f(R) gravity, a theory that is arguably the most natural extension of GR). Here, I investigate 1) the relationship between NCT and a classical theory of gravity with possibly non-vanishing torsion and 2) the relation between such a classical theory of gravity and Teleparallel Gravity. I first develop a theory of Newtonian Gravity with possibly non-vanishing torsion. This is done by following the procedure of the Trautman Recovery Theorem---a theorem that, in the torsion-free context, establishes the relation between NCT and Newtonian Gravity. Here, by relaxing the conditions on the possible derivative operators to allow those with torsion, I recover a theory of gravity with possibly non-vanishing torsion from NCT. For the second part of the project, I consider the classical limit of Teleparallel Gravity, i.e., as the speed of light becomes unbounded. Knowing that GR reduces to NCT as the light cones are "opened up," I consider what the result of a similar procedure is on Teleparallel Gravity. The spacetime I recover is a classical spacetime with possibly non-vanishing torsion. Overall, this project is in the spirit of Read and Teh (2018). However, while they adopt the tetrad formulations of the theories and employ the teleparallel Bargmann–Eisenhart solution to show the reduction relation, I remain closer to typical formulations of GR and NCT. This methodology, I argue, allows for a more straightforward comparison of the theories. This project not only helps us better understand the concepts in gravitational theories, it also presses us to consider how theoretical terms operate within/amongst theories of gravity. In each theory, terms like "curvature" and "force" are mathematically and conceptually redefined. However, the limiting relations amongst the theories (established through, e.g., recovery or by taking the classical limit) trouble the idea that the terms operate only within the limited context of each theory. | ||
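Both abstracts above turn on connections with non-vanishing torsion. As a reference point only (standard textbook material, not taken from either talk), the torsion of an affine connection can be written as:

```latex
% Torsion of an affine connection \nabla (standard definition; sign conventions vary).
T(X, Y) = \nabla_X Y - \nabla_Y X - [X, Y],
\qquad
T^{\lambda}{}_{\mu\nu} = \Gamma^{\lambda}{}_{\mu\nu} - \Gamma^{\lambda}{}_{\nu\mu}.
```

The Levi-Civita connection of GR is torsion-free but generically curved, whereas the Weitzenbock connection of Teleparallel Gravity is flat but generically has non-vanishing torsion; this is the sense in which infinitesimal parallelograms built from parallel-transported vectors fail to close.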
09:00AM - 11:45AM Sterlings 3 | Scientific Medicine Speakers
Jonathan Fuller, University Of Pittsburgh HPS
Somogy Varga, Speaker, Participant, Aarhus University
Sabina Leonelli, University Of Exeter
Sara Green, University Of Copenhagen
Moderators
Adrian Erasmus, University Of Alabama In this symposium, we philosophically investigate the nature of contemporary and historical versions of scientific medicine as well as visions for its future, drawing on historical, empirical and scientific perspectives. Scientific medicine is the main focus of research in philosophy of science and medicine, but the ways in which it is 'scientific' and the question of what sciences it derives its scientific character from (and how) are seldom investigated by philosophers. In this symposium, we explore what makes scientific medicine a distinct (but disunified) historical tradition, the content of the unique understanding sought in scientific medicine, how data science is transforming scientific medicine's hierarchy of evidence, how research on organoids in precision medicine challenges scientific medicine's concepts of disease and evidence, and how philosophical work on scientific medicine squares with the reality experienced by practicing scientists and doctors. The vision of precision medicine in organoid and organ-on-chip research 09:00AM - 11:45AM
Presented by :
Sara Green, University Of Copenhagen Precision medicine is motivated by the insight that patients and their problems show great variability and ideally should be treated in a way that accounts for the individual’s biology and context. Realizing this vision rests on the development of new model systems that can recapitulate the physiological context beyond genomics and account for relevant characteristics of individual patients. Organoids are 3D cell cultures derived from stem cells or dissociated primary tissue (e.g., a tumor), sometimes combined via microfluidics into a so-called organ-on-chip model. Organoids and organ-on-chip models are hoped to present new opportunities for direct translation from bench to bedside, by bridging the gap between in vitro models and specific in vivo targets. These models are described in the scientific literature as “miniature organs”, “diseases in a dish”, “patients-on-chips”, and are envisioned to lead to a new “one-patient paradigm” in medicine. The aim of this paper is to provide an empirically informed philosophical analysis of what is expressed in such concepts and visions for the future. Through a qualitative content analysis and ethnographic field work, we unpack what the “vision of precision” entails in organoid and organ-on-chip research and analyze the ontological and epistemic implications of different versions of this vision. We then examine an application that has already been implemented in some clinical contexts, namely the use of tumor organoids for patient-specific drug screening. By allowing for “real-time” testing of targeted treatments on organoids developed from a specific patient’s cancer cells, these personalized models challenge traditional understandings of preclinical models and clinical trials. In exploring the potentials and challenges of the new model systems, we uncover underlying assumptions about what characterizes disease and constitutes evidence when the scope of preclinical models narrows down to specific patients. We show how epistemic uncertainties about translational inferences from bench to bedside relate to ontological uncertainties about how fine-grained disease categories should be understood. Moreover, we show how epistemic and ethical implications intersect when cancer patients become urgently dependent on ongoing laboratory research. Is Data Science Transforming Medicine? The Case of COVID-19 09:00AM - 11:45AM
Presented by :
Sabina Leonelli, University Of Exeter Data science, and related data infrastructures and analytic tools, are frequently invoked as a major factor underpinning contemporary transformations in medical research, diagnosis and treatment. This paper discusses whether and how this is happening, and what the implications may be for philosophical understandings of the production, assessment and use of medical evidence. To this aim I consider the role of data science in tackling COVID-related illness and hospitalization, focusing on four areas that have proved critical to the medical response to the pandemic: 1. The development of data technologies and infrastructures to monitor COVID patients, for instance by checking levels of oxygen saturation in the blood, and related efforts to determine the extent to which frequent patient assessments help prevent hospitalization and death; 2. The collection, linkage and analysis of patient data by doctors and other health professionals (both in and outside the hospitals) to ensure effective and prompt insights into the emerging symptoms and long-term effects of infections caused by different COVID variants; 3. The use of data extracted from social services and other non-medical sources to support predictive models of COVID transmission, thus informing public health and treatment guidelines; and 4. The significance of data availability for the development and testing of COVID vaccines, and particularly the ways in which existing data sharing mechanisms (such as genomic databases) were redeployed and greatly expanded to inform small scale, non-clinical studies in several locations around the world, while at the same time underpinning the set-up of large-scale clinical trials. From consideration of these areas, I argue that data science had a transformative effect on medical research on COVID-19, leading to an acceleration of knowledge production and significant changes in the evaluation of what counts as reliable evidence. Such transformation originated not solely from the deployment of novel methods and instruments for computational data mining and modelling, but also – and perhaps most fundamentally – from the diversity and scope of the data sources considered as potential evidence for medical knowledge and interventions, and the related challenges to existing standards for how evidence is produced, circulated and validated. The evidential power accrued by data produced by medical doctors and frontline hospital staff became incontrovertible, providing ammunition to already existing critiques of the hierarchy of evidence entrenched within the evidence-based medicine (EBM) movement. The need to recognise and value data coming from patients and doctors, compounded by the imperative to act swiftly to tackle the pandemic emergency, provided a strong incentive to review the structure and temporalities of randomised controlled trials, the relation between RCT results and other data, and the ways in which data circulation and exchange is regulated and fostered. These resulting shifts in evidential standards are ongoing. What remains unchallenged – and if anything has been strengthened by reliance on data analytics – is the dependence of publicly funded medical research and services on pharmaceutical companies and other private enterprises focused on the health sector. Scientific understanding in medical research and clinical medicine 09:00AM - 11:45AM
Presented by :
Somogy Varga, Speaker, Participant, Aarhus University According to a view that is gaining traction in current philosophy of science, what best describes the aim of scientific inquiries is not truth or knowledge about some target phenomenon, but understanding, which is taken to be a distinct cognitive accomplishment. At the same time, scientific inquiry is thought to be characterized by a certain systematicity that sets it apart from everyday inquiries. This permits a gradual progression from prescientific (or nonscientific) to scientific inquiries and grants scientific inquiries a higher degree of systematicity without rendering the commonsense counterpart unsystematic. In light of these two views, we may conceptualize scientific medicine as a systematic inquiry that aims at a particular type of (medical) understanding. Due to the significant diversity that characterizes scientific endeavors, we may expect that what qualifies as constituting proper understanding is to a certain degree context-sensitive and can take on different forms depending on the nature of the scientific field and the features of its subject matter. If so, then we have at least some initial reasons for thinking that understanding within the context of medicine might differ in various ways from understanding in physics or chemistry. A better comprehension of the nature of understanding in medicine merits sustained philosophical attention, and this talk is dedicated to clarifying this matter. The talk falls into three parts. The first part describes in more detail what it means to understand something, distinguishes types of understanding, and links a type of understanding (i.e., objectual understanding) to explanations. The second part proceeds to investigate what objectual understanding of a disease (i.e., biomedical understanding) requires by considering the case of scurvy from the history of medicine. The main hypothesis here is that grasping a correct mechanistic explanation of a condition is a necessary condition for biomedical understanding of that condition. The third part of the talk argues that biomedical understanding is necessary, but not sufficient for understanding in a clinical context (i.e., clinical understanding). The hypothesis is that clinical understanding combines biomedical understanding of a disease or pathological condition with a personal understanding of the patient with an illness. It will be shown that in many cases, clinical understanding necessitates adopting a particular second-personal stance and using cognitive resources in addition to those involved in biomedical understanding. The attempt to support this hypothesis will include revisiting the distinction between “understanding” and “explanation” familiar from debates concerning methodological principles in the humanities and social science. The New Modern Medicine: Demarcating Scientific Medicine 09:00AM - 11:45AM
Presented by :
Jonathan Fuller, University Of Pittsburgh HPS Few would deny that contemporary western medicine is scientific, but what exactly is implied by this claim? Recent work in philosophy of science has brought research on the demarcation problem to bear on this question and has argued that medicine is a science. Authors disagree on what demarcates scientific medicine as a science from pseudosciences like homeopathy, whether it is scientific medicine’s systematicity, its reliance on clinical trials, or something else. However, this framing misses out on the historical dimension of scientific medicine, which was emerging as the dominant medical tradition in the West around the turn of the twentieth century. Not only do several proposed demarcation criteria fail to capture this timing (systematic diagnostic classification came earlier, the boom in clinical trials came later), but they also fail to recognize that scientific medicine in the nineteenth century involved new and more intimate relationships between medicine and independent sciences. Rather than asking what makes contemporary western medicine a science (assuming it is indeed one), in this talk I ask what makes contemporary western medicine ‘scientific medicine’, what demarcates scientific medicine from non-scientific medicine. Probing the latter question reveals that scientific medicine is a shifting model of medicine in history and today. In brief, scientific medicine results from the integration of medical practice with particular medical sciences and it models itself after these sciences in various respects. Different medical sciences have vied for this role. In the late 1800s, laboratory sciences, especially physiology, biochemistry, and bacteriology, characterized scientific medicine, which was taken by experimental physiologist Claude Bernard to mean ‘experimental medicine’. In the late 1900s, epidemiology played a large role in reshaping scientific medicine, which was rebranded as ‘evidence-based medicine’, with ‘evidence’ standing in for epidemiological evidence. Today, molecular genetics and computer/data science promise to remake scientific medicine in their image under the labels of ‘precision medicine’ and ‘deep medicine’, respectively. Whether they succeed will depend on whether precision medicine or deep medicine involve merely using new knowledge and technologies towards pre-existing medical ends, or rather imply a more radical reimagining of core medical concepts and reasoning (e.g. personalized diagnosis and treatment that do away with diagnostic categories and population data, medical AI that replaces much of the cognitive work of the clinician). These historical and potential future shifts in scientific medicine are the source of important new philosophical and practical problems. For instance, the remaking of scientific medicine in the image of epidemiology by the late 1900s brought with it the problems of multifactorial etiology and medical risk through the application of epidemiological methods and concepts to medicine (namely, multivariate statistics and risk-based outcome measures, respectively). These developments are not captured by asking whether medicine is a science. They only come into focus when we recognize that scientific medicine derives its scientific character from other sciences and that over time different sciences – from experimental physiology to epidemiology – have competed to claim the mantle of ‘the science of medicine’. | ||
09:00AM - 11:45AM Sterlings 1 | New Perspectives on Biological Teleology: Conceptual Distinctions, Scientific Implications Speakers
Leonardo Bich, Ramón Y Cajal Senior Researcher, University Of The Basque Country
Alan Love, University Of Minnesota
Max Dresow, Postdoctoral Associate, University Of Minnesota
Daniel McShea, Duke University
Gunnar Babcock, Duke University
Sophia Connell, Speaker, Birkbeck College, University Of London
Laura Nuño De La Rosa, Complutense University Of Madrid
Gillian Barker, Speaker, University Of Pittsburgh
Moderators
Max Dresow, Postdoctoral Associate, University Of Minnesota Teleological reasoning is probably as old as any activity recognizable as biology, and from the beginning has been subject to diverse and contradictory interpretations. These interpretations have left a complicated legacy that continues to influence discussions of biological teleology to this day. They have also arguably impeded the study of purposive phenomena in fields ranging from genomics to developmental biology to global change ecology. Researchers across these and other fields routinely use language that imputes goal-oriented behavior or directionality to biological systems. However, while some regard this as a conceptual mistake, there is a growing recognition among biologists and philosophers that ostensibly teleological phenomena require new conceptual frameworks that translate into rigorous models and discriminating empirical tests. This symposium assembles contributions from a range of theoretical perspectives that are poised to advance this conversation in fruitful directions. A key aim is to highlight conceptual resources and distinctions with scientific payoffs across diverse areas of biological inquiry. Field goals: three points about how teleology is structured 09:00AM - 11:45AM
Presented by :
Gunnar Babcock, Duke University
Daniel McShea, Duke University Field theory offers a new account of how teleology and goal-directed systems work. Under field theory, goal-directedness arises from fields that are external to, and envelop, the entities they direct (McShea 2012, 2016; Babcock and McShea 2021). A teleological entity immersed in a field behaves persistently and plastically, following a trajectory directed by the field. When the head of a heliotropic sunflower follows the sun from east to west throughout the day, it is immersed in a field composed of the sun’s rays. Without the sun’s rays, the head would cease to move since there would be no direction. Or consider an autonomous car that is guided to a waypoint by GPS satellite signals. The satellite infrastructure forms a field, and it is the field that guides the car to its destination regardless of where the car starts its trip or what obstacles it may encounter. One virtue of the theory is that it collapses the distinction between natural and artifactual goal-directed systems. In an earlier paper, we established that fields are external and physically describable. Here we explain more precisely what fields are, in a way that operationalizes them so they can be deployed in the sciences and elsewhere. First, we detail some of the features we take to be the hallmarks of fields. And we argue that fields are multiply realizable, not reducible to a single physical description. For example, there are no special physical properties to be found in the sun’s rays that are common to all fields. At the same time, solar radiation is a purely physical phenomenon. What makes the sun’s rays a field is their place in the goal-directed system that consists of the combination of the sunflowers and the sun. Second, field theory helps make sense of the controversial role that mechanisms play in biology in general, and in goal-directed systems in particular. Outside a goal-directed system, a field and a mechanism might be interchangeable. However, within the context of a goal-directed system, fields and mechanisms are quite different. Fields guide, while mechanisms respond to guidance, and non-teleological objects do neither. Third, we develop a kind of test for the existence of fields, based on a hypothetical elimination process. For any entity showing teleological behavior, we consider if such behavior could be accounted for without positing the existence of a field. This is to say, we consider whether there could be any teleology absent the existence of spatially larger, physical structures which direct a contained object. We argue that while it is possible to imagine such systems, the teleological systems we find in the world always seem to employ fields. From an engineering perspective, fields seem to be all but essential for teleology. Purposiveness, organization and self-determination 09:00AM - 11:45AM
Presented by :
Leonardo Bich, Ramón Y Cajal Senior Researcher, University Of The Basque Country In this talk we will provide a philosophical account of purposiveness grounded in the organization of biological organisms. The core of the argument consists in establishing a connection between purposiveness and organization through the concept of self-determination. Our account relies on and elaborates the organicist tradition in philosophy and biology, which traces back to the work of Kant on self-organizing entities (1790), and crosses the 19th and 20th centuries with the contributions of authors such as Bernard (1865), Canguilhem (1965), Varela (Weber and Varela 2002), Rosen (1991) and Kauffman (2000). On this account, biological organisms are capable of actively responding to perturbations and maintaining themselves by exchanging matter and energy with their environment without being completely driven by external factors. This autonomy is achieved by realizing what is referred to as “organizational closure,” a network of mutually dependent constraints that (1) are continuously constructed by an organism and (2) functionally control the flow of matter and energy in far-from-equilibrium conditions. Accordingly, a biological organization that realizes closure determines itself in the sense that the effects of its own activity contribute to establishing and maintaining its conditions of existence. Several examples of how organisms actively exert control over their own conditions of existence through the coordinated activity of their functional constraints will be provided at different degrees of complexity. We will mention examples in which organisms select between different available courses of action on the basis of their needs and environmental conditions: from chemotaxis and envelope stress response in E. coli, to endocrine control in mammals. Inherited dispositions: an Aristotelian framework for relinking development and reproduction in evolution 09:00AM - 11:45AM
Presented by :
Laura Nuño De La Rosa, Complutense University Of Madrid
Sophia Connell, Speaker, Birkbeck College, University Of London The philosophical foundations of the Modern Evolutionary Synthesis were built in opposition to an allegedly essentialist and teleological view of nature going back to Aristotle (Sober 1980). Because essentialism and teleology were regarded as core hindrances for a science of evolution, neo-Darwinian approaches endorsed a view in which evolutionary directionality arose solely from the differential reproduction of individuals in populations (Mayr 1959). This view was in turn based on a strict separation between development and reproduction, and thus between developmental causality and evolutionary causality (Griesemer 2005). However, this separation has recently broken down across various research fields, including epigenetic theories of inheritance, niche construction theory, and evolutionary developmental biology. These critical developments in evolutionary theory have led to a revival of Aristotelianism among some philosophers of biology attempting to forge an alternative conceptual framework for developmental evolution (Austin 2016, Nuño de la Rosa 2010, Walsh 2006). In this talk, we argue that the concept of “inherited dispositions,” derived from our interpretation of Aristotle’s Generation of Animals (Connell 2016), can play a core role in this enterprise. First, we claim that the Aristotelian focus on organisms as bearers of inherited dispositions aligns with current claims about organisms as directing causes of development and evolution (Laland et al 2015). Second, we discuss the active role of the female body in the work of Aristotle (Connell 2020). On the one hand, we argue that the generative power attributed to female matter provides interesting resources to conceptualize the formative capacities of tissues and the importance of “material overlap” between generations (Griesemer 2000). On the other, we discuss the role of the female body in sexual reproduction, and argue that, in contrast with container views of pregnancy, Aristotle’s view fits with contemporary perspectives on developmental niches as directing, and not merely enabling, factors in reproduction (Nuño de la Rosa et al 2021). Finally, we survey Aristotle’s views on environmental variation in connection with teleological constraints. Although Aristotle has been accused of having a rigid idea of species that excludes many inherited features as “accidental,” we show that his teleological explanation of environmental variations is indeed amenable to current conceptualisations of developmental plasticity. To that end, we discuss Aristotle’s explanation of monsters (Connell 2018) as evidence of functional and developmental tendencies that resonates with recent work on teratologies in evo-devo (Alberch 1989). We conclude that an Aristotelian notion of inherited dispositions provides a bridge for integrating teleology in an understanding of evolution where development and reproduction are meaningfully relinked. Geofunctions: understanding purposes, norms, and agency at the global scale, and why it matters 09:00AM - 11:45AM
Presented by :
Gillian Barker, Speaker, University Of Pittsburgh In his landmark “State of the Planet” speech in 2020, UN Secretary-General Guterres said what many scientists believe: “The state of the planet is broken” (Guterres 2020). This language reflects a view of global processes as functional in the sense that components have roles to play in the working of the planet as a whole. For example, scientists describe the thermohaline circulation as a “conveyor belt” that moderates global temperatures (Broecker 2010); polar ice as a planetary “air conditioner” (Urban 2020); and rainforests as “biotic pumps” driving water cycles and atmospheric circulation (Pearce 2020). This perspective has significant normative and teleological elements, suggesting that the components should operate to contribute to the planet’s ability to sustain some goal-state. If they cannot, the planet is “broken.” This view raises obvious philosophical problems. Widely accepted assumptions about the place of teleological and normative concepts in science seem to bar these functional ascriptions from use in large-scale systems, restricting scientific work on global processes to a mechanistic idiom. Yet normative and teleological ideas are widespread in global environmental sciences. They appear mainly in metaphorical forms, including functional metaphors of artifactual design, such as the examples above or organicist metaphors likening the earth to an organism, agent, or community. The prominence of these metaphors in scientists’ research publications and public statements reflects a growing sense among some scientists that recent discoveries of ubiquitous interdependencies and feedback between geologic, thermodynamic, ecological and biological processes reveal the empirical inadequacy of the relatively simple mechanistic picture of the planet and its atmosphere that has guided modelers over the past few decades. Significant predictive failures—including the persistent underestimation of the pace of global-scale change—underscore the limits of this picture and the urgent need for conceptual models that fully capture the import of these discoveries. Some scientists have responded to this need by developing approaches that take a functional perspective (looking at Earth as an integrated functional system of some kind) or an agent perspective (examining the roles of responsive living systems in global processes). These approaches offer a multiplicity of key concepts, from “global ecosystem services” (Costanza et al., 2017) to “planetary boundaries” (Rockström et al., 2009); from “planetary health” (Whitmee et al., 2014) to “Nature-based solutions” (Seddon et al., 2020). Yet we lack a clear and consistent framework that clarifies and assesses how these various ways of thinking about global functions, norms, and goals respond to fundamental philosophical challenges, and shows how they can be integrated with one another—a “geofunctional” framework. Choices between perspectives on global change and stability have potentially consequential implications for scientific knowledge-making. Different perspectives offer different heuristics, which affect the models researchers develop and the evidence they deem relevant. The mechanistic, functional and agential perspectives focus attention on different aspects of the systems they are applied to. These perspectives are not always exclusive and can be complementary. The question is where each can be most illuminating, and how to combine them. 
To address these questions, a geofunctional framework is required. Mapping the teleological landscape: epistemic precision with scientific payoff 09:00AM - 11:45AM
Presented by :
Max Dresow, Postdoctoral Associate, University Of Minnesota
Alan Love, University Of Minnesota Over the past several decades, philosophical analyses have shown that reasoning about purposes in nature is epistemically respectable. However, less attention has been paid to the heterogeneity of aims and commitments that motivate inquiry into apparent purposiveness. Our aim in this paper is to map major contours of the landscape of biological teleology and show how the resulting epistemic precision yields payoffs for different lines of scientific inquiry. We begin with the distinction between intrinsic and extrinsic forms of teleology (Lennox 1992). Roughly, teleology is intrinsic if purposes arise from some set of properties or relations internal to a system (typically an organism). By contrast, teleology is extrinsic if purposes manifest as a consequence of properties or relations external to a system. The intrinsic/extrinsic distinction has traditionally marked different answers to an ontological question (“what is teleology?”). Yet we treat it as a springboard for making several epistemological observations. First, attributions of intrinsicality or extrinsicality presuppose a system-environment circumscription, but the criteria on which this is based are typically left unanalyzed. Second, they assume that systems are composed of parts that contribute to a characteristic activity or organizational pattern. However, these parts can themselves be treated as teleologically organized wholes in a nested fashion, complicating the analysis. Third, the timescale on which systems manifest purposiveness can be highly variable and is often implicitly keyed to what counts as a whole system, its parts, and relevant features of the environment. These distinctions impact how teleology is modeled and explained in living systems. For example, the boundaries between system and environment can be drawn differently depending on what question is in view (e.g., “what is the source of directionality underlying goal-directedness?”; “what accounts for the properties of self-maintenance and autonomy in living systems?”). This, in turn, yields different perspectives on what counts as “intrinsic” versus “extrinsic.” Likewise, since research on teleology has multiple aims—prediction, characterization, explanation, and control—which part-whole relations are salient may vary (e.g., a part useful for prediction may be unhelpful in characterizing purposive behavior). More fine-grained distinctions reveal additional contours of the epistemic landscape, such as whether purposiveness is explanans or explanandum. Different criteria of adequacy are associated with these aims, such as accounting for what makes living systems distinctive versus offering a unified account of goal-directedness in biology and culture. These differences lead researchers to assign different meanings to shared concepts (e.g., organization) and metaphors (e.g., design), and to adopt divergent modeling strategies, like abstracting away from the environment to model intrinsic dynamics (or vice versa). This complex possibility space implies that traditional controversies may reflect research communities with different priorities talking past one another. Teleology is a multi-faceted phenomenon that involves questions of adaptation, functionality, goal-directedness, agency, and organization. The epistemic precision derived from this mapping exercise thus has immediate payoff. 
We illustrate the final point by showing how our analysis illuminates modeling and explanation choices for two divergent accounts of biological purposiveness (McShea 2012; Mossio and Bich 2017). | ||
09:00AM - 11:45AM Sterlings 2 | Theory Construction Methodology in Psychology Speakers
Riet Van Bork, University Of Amsterdam
Freek Oude Maatman, Radboud University Nijmegen
Noah Van Dongen, Presenter, University Of Amsterdam
Denny Borsboom, University Of Amsterdam
Maximilian Maier, PhD Student, University College London
Moderators
Jan-Willem Romeijn, University Of Groningen When, in 2015, the replication crisis was identified in the field of psychology, many researchers took up the task of working on methodology and suggesting practices that would help improve the replicability of findings in psychology. More recently, it has been noted that many of the identified problems in psychology not only concern the collection of effects that are on shaky grounds, but also the theories that supposedly explain these effects. Many theories in psychology are narrative accounts of hypotheses that do not give clear predictions for empirical data. Because of the omnipresence of such weak theories and the problems that have been linked to it, psychology is said to be in a 'theory crisis'. In response to these problems, systematic methodologies for constructing and evaluating theories are currently being developed in several research groups in psychology. The literature on the theory crisis is one to which both psychological scientists and philosophers of science contribute. Our symposium furthers this collaboration by bringing together four people who work in a psychology department and three people who work in a philosophy department to talk about theory construction in order to help psychology move past the theory crisis. Comparing Theories with the Ising Model of Explanatory Coherence: Methodological Advances and Theoretical Considerations 09:00AM - 11:45AM
Presented by :
Maximilian Maier, PhD Student, University College London As Lewin (1943) already noted, “there is nothing as practical as a good theory”. However, how do we determine which theories are good and which are bad? It is hard to improve theory quality without a tool to assess it in practice. In psychology, most subfields are characterized by weak theories or a complete lack of theories. Even though problems of bad theory have been discussed with clockwork regularity, little progress has been made so far (e.g., Borsboom et al., 2021; Gigerenzer, 1991; Meehl, 1978). A potential reason is that the discipline lacks the tools to assess the quality of theories systematically. Therefore, we (Maier et al., 2021) proposed a computational model for theory evaluation. Specifically, we implement Thagard’s (1989) theory of explanatory coherence (TEC) in the Ising model. The Ising model, originally developed in statistical mechanics to describe the magnetization of ferromagnetic materials (Ising, 1925), is a network model that has found broad application in psychological research. We showed that a) hypotheses provided by a scientific theory and phenomena explained by theories can be expressed by the nodes of the Ising model; b) empirical evidence for (against) the phenomena can be expressed by positive (negative) thresholds on the phenomena; and c) explanatory and contradictory relations between hypotheses and phenomena can be expressed by positive and negative edges. The Ising Model of Explanatory Coherence (IMEC) incorporates the TEC principles of symmetry, explanation, data, priority, contradiction, and acceptability. Unlike previous implementations of TEC, IMEC allows researchers to evaluate individual theories and is available in an R package. Maier et al. (2021) showed that this simple computational meta-theory could successfully reproduce a variety of examples from the history of science. However, there is room for extension. In this talk, I will briefly introduce IMEC and demonstrate how it integrates considerations of explanatory breadth, refutation, simplicity, and downplaying potentially irrelevant evidence with respect to the hypotheses of the theory and other phenomena. In addition, I will demonstrate how to think through hypothetical scenarios and identify critical experiments using examples of theories in psychological science. Further, I will extend the methodology employed in Maier et al. (2021) by adding sensitivity analyses to IMEC. For instance, by examining the sensitivity of theory evaluations to variations of the edge weights between hypotheses and phenomena, it is possible to improve the robustness of applied theory comparison. However, considerations about the range of possible values under which sensitivity needs to be assessed are fundamentally intertwined with fundamental questions in the philosophy of science, such as the following: How can we quantify (the strength of) evidence (for a phenomenon)? To what extent is a theory supported (refuted) by making a correct (wrong) prediction or explanation? How can we determine the number of elemental propositions that a theory consists of? I hope this talk will spark a debate around these considerations and later allow me to incorporate them in the proposed sensitivity analysis. (A toy illustration of such a coherence network appears after this session’s abstracts.) Productive Explanation 09:00AM - 11:45AM
Presented by :
Noah Van Dongen, Presenter, University Of Amsterdam In current practice, psychological explanations typically present a narrative in which a theory renders a putative empirical phenomenon intuitively likely. However, whether the theory actually implies the phenomenon in question is also left to this intuition. To design a test for such a theory, different experts iterate through possible experimental setups until they agree that a particular manipulation should show the effect. The fact that this crucial link has to be fleshed out by polling experts reveals an Achilles’ heel in current psychological theories. Nobody ever had to ask Einstein what would happen to light in the famous eclipse that Eddington observed (Dyson et al., 1920), because Einstein’s opinion was irrelevant. The reason for this is that Einstein’s theory can be and is implemented in a formal model, which means that every competent researcher can check whether the theory does or does not imply a given phenomenon. That such independent verification of theoretical implications is not possible in many cases in psychology has direct consequences for the evaluation of the evidence for and against theories. For example, Vohs et al. (2021) suggest that the empirical phenomena associated with the theory of ego-depletion are not robust, as the experimental tasks used did not produce these phenomena. However, it is difficult to gauge whether or not this constitutes evidence against ego-depletion, because in the absence of an unambiguous formalization we cannot even be sure that the theory implies the anticipated phenomena. This points to an important desideratum for explanatory systems, namely that they should be (specific enough to be) encoded in a formal system (e.g., a set of mathematical equations, logical formalisms, or model simulations). We contribute to this task by proposing an account of productive explanation, in which the theory specifies a formal model that produces statistical patterns that reflect empirical phenomena that are purportedly explained by the theory. Expressing the theory in a formal model, and showing how that formal model produces patterns in data, brings transparency to the relation between the theory and the empirical phenomenon. To achieve this aim, we combine insights taken from recent discussions on theory construction (Borsboom et al., 2021; van Rooij and Baggio, 2021) and philosophical considerations (e.g., Cummins, 2000; Haig, 2005) with existing approaches to indirect inference in system dynamics (Haslbeck et al., 2021; Hosseinichimeh et al., 2016) to arrive at a workable methodology for establishing empirical implications. This productive explanation methodology involves a) translating a verbal theory into a set of model equations, b) representing empirical phenomena as statistical patterns in putative data, and c) assessing whether the formal model actually produces the targeted phenomenon. In addition, we explicate a number of important criteria for evaluating the goodness of this explanatory relation between theory and empirical phenomenon. Theory Construction Methodology as a Third Way Between Exploratory and Confirmatory Data Analysis in Psychological Science 09:00AM - 11:45AM
Presented by :
Denny Borsboom, University Of Amsterdam Standard methodological and statistical texts divide research methodology into two strictly separated categories: confirmatory and exploratory research. In some fields, like scientific psychology, almost all research reports are written up as if they are confirmatory, i.e., involve rigorous tests of an antecedent theory. In reality, however, typical research in psychology involves an iterative procedure in which theory is adapted in view of the data, and new data are gathered to further investigate the adequacy of these theory changes. This has led several critics to lament the exploratory aspects of psychological research, as it leads to the possibility of hypothesizing after the facts are known: HARKing. HARKing is a problem because it involves generating hypotheses post hoc, while presenting these as prior to the research project. Thus, it presents research that is exploratory as if it is confirmatory. This practice has been argued to generate an excess of false positive findings, and as such is suspected to lie at the basis of the replication crisis in psychology. Accordingly, in response to that crisis, there has been a rapid surge in the development of methodological tools designed to make the theory testing process more rigorous: from preregistration to blinded data analysis and from many-labs paradigms to reproducibility projects. I argue that this response puts the horse behind the cart, because most psychological research should not be characterized as confirmatory or exploratory, but as aimed at theory construction. This diagnosis has direct implications for the organization of psychological science and the methodological education of psychologists. First, I will argue that even though theory construction has a creative dimension, there is also much logic to the process; as such, the process can be systematized and structured in the same way that we systematize other research processes. This invites the development of techniques and tools that can be used to support theory formation; an example is our recently introduced theory construction methodology, which is a structured series of steps that can be followed to develop theory. Second, theory construction is not covered by standard research methodology and is not taught in psychology curricula. Instead, it is treated as an almost mystical process by which a researcher is supposed to conjure theories out of thin air. However, I argue that theory construction is a skill like any other, and it should be practiced and taught. Third, reports of theory construction research do not fit current reporting standards in scientific publishing, which are almost entirely structured to present either empirical discoveries or tests of scientific theories. Thus, we need new reporting formats to allow such research to be reported truthfully. I will argue that, together, these elements of theory construction define a methodological agenda that has the potential to significantly advance psychological science. Why Theory Construction Must Include Ontological Commitment 09:00AM - 11:45AM
Presented by :
Freek Oude Maatman, Radboud University Nijmegen Current approaches to resolving psychology’s theoretical problems converge in their call for the further formalization of psychological theory (e.g., Fried, 2020; Van Rooij & Baggio, 2021; Borsboom et al., 2021; Robinaugh et al., 2021; Guest & Martin, 2021). In contrast, we have argued that psychology’s theoretical problems are in large part caused by issues independent from whether theories are represented verbally, formally or mathematically (e.g., Eronen & Bringmann, 2021; Oude Maatman, 2021). In this talk, we focus on the most fundamental of these issues, which has received little attention in the debate so far: that psychological theory generally is ontologically unspecific. More specifically, it often remains unclear how the constructs and processes described in psychological theories could be realized in the world, or even what the referents of key theoretical concepts are – despite their being treated as real causes. Concepts and constructs are often defined either operationally (e.g., intelligence; ego depletion; Lurquin & Miyake, 2017), functionally (e.g., creativity; Runco & Jaeger, 2012; implicit attitudes; Greenwald & Banaji, 1995; Greenwald & Lai, 2020) or by simply adopting their lexical, folk psychological definition (e.g., in emotion research; Fiske, 2020). Furthermore, psychological theorizing is often completely independent from any foundational theory of the nature of human cognition or approach to the mind-body problem, instead consisting of positing folk-psychologically intuitive causal relationships and mechanisms with few further constraints (see also Danziger, 1997). It thus often remains unclear what the exact ontological commitments of psychological theories are in terms of what particular entities or processes they posit to exist, how hypothesized causal relationships among them are assumed to be realized, or how they fit into a scientific picture of human cognition as a whole. In our talk, we show that this lack of ontological commitment is highly problematic for scientific practice in psychology (cf. Hochstein, 2019). Without a clear ontology, it becomes impossible to delimit the set of causally relevant variables for any to be explained process or phenomenon. Yet, without such delimitation one cannot determine under which conditions an effect or phenomenon should occur or not, which heavily complicates the design of experiments, the interpretation of (non-)replications, and any claims about the generalizability of experimentally identified effects (e.g., Cartwright, 2009). Such delimitation is also necessary to create theory-derived models for prediction or causal inference; if relevant causes are not included, these after all would fail. Without a clear ontology and well-delineated referents for concepts, one also cannot argue or determine whether psychological interventions only affect the intended concept (i.e., the problem of fat-handedness; Eronen, 2020) or that conceptually similar experiments or measurement techniques indeed tap into the same phenomenon or construct. Despite its potential benefits, formalization cannot resolve these issues; the only solution lies in conceptual work, in the form of creating or adopting an ontology. Given the broad scope of the aforementioned problems, we conclude that theory construction (method) in psychology needs to engage with ontology and ontological commitments if psychological science is to advance. Practical Philosophy for Psychology 09:00AM - 11:45AM
Presented by :
Riet Van Bork, University Of Amsterdam In this paper, we consult the philosophical literature to improve suboptimal practices in psychology. We discuss several practices in psychology that in our view hamper its scientific development. We then argue that these practices are rooted in certain methodological and philosophical commitments. We suggest that psychological science updates some of these commitments with more recent debates in the philosophy of science, in particular those on theory formation, epistemic iteration and the use of models. Our primary concern is with three research practices in psychology, which we term epistemic freezing, empirical myopia, and data fixation. We discuss these practices and identify a common thread of logical empiricism and hypothetico-deductivism among them. First, many concepts in psychology are operationalized by standardized instruments and then get stuck in their operationalization, resisting changes in theories about these concepts. For example, if one compares current intelligence tests to the original setup of e.g., Wechsler in 1955, or one compares the current version of the Beck Depression Inventory to the original from 1961, changes are only marginal and rarely informed by theoretical advances. We call this “epistemic freezing”, as opposed to “epistemic iteration” which refers to the idea that measurement and theories about the measured attribute iteratively improve each other (Chang, 2004). A possible explanation for this tendency to “freeze” concepts is that this would give psychological science a shared empirical basis. Second, it is common practice to stipulate hypotheses before collecting data, and then proceed with confirmatory testing of these hypotheses. Most of the research methodology concerns the testing of a given hypothesis and ignores the research part in which ideas are generated and theories are built, so that testable hypotheses can be formulated. We call this singular focus on hypothesis testing “empirical myopia”, because by focusing only on the testing part, psychology loses sight of the more speculative and exploratory process of theory construction. This practice clearly reflects the confirmatory practice of science along hypothetico-deductivist lines. Third, in psychology, observation has become almost synonymous to ‘data’. For example, in methodology textbooks, theories are said to predict and explain data, where explanation is more or less synonymous to accounting for variance of a dependent variable. In addition, for each new hypothesis to test, one should collect new data. We call this practice “data fixation”, as the focus is on explaining data as opposed to phenomena. Again, we recognize an empiricist streak: the basis for our claim to knowledge is to be found in observed data, and anything that moves us away from direct contact with these observations presumably weakens this basis. Summing up, our diagnosis is that many current practices in psychology are still committed to logical empiricism and hypothetico-deductivism. To help psychological science move away from its somewhat outdated philosophical and methodological commitments, we suggest that it re-evaluates the role of psychological theory. Next to the methodological norms that govern data handling and hypotheses testing, psychological science is in need of norms for the construction and use of theory. | ||
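As flagged in Maier's abstract above, the following toy sketch illustrates the general idea of an Ising-style coherence network: hypotheses and phenomena are nodes, explanatory and contradictory relations are positive and negative edges, empirical evidence enters as thresholds, and the most coherent pattern of acceptance and rejection is the one that minimizes the network's energy. The example network, its weights, and the function names are invented for illustration; this is not the authors' R implementation.

```python
# A toy sketch of an Ising-style coherence network (illustration only).
from itertools import product

def ising_energy(state, weights, thresholds):
    """Energy of an acceptance/rejection assignment (+1 = accept, -1 = reject).
    Lower energy = more coherent under the encoded relations and evidence."""
    e = -sum(thresholds[i] * s for i, s in enumerate(state))
    for (i, j), w in weights.items():
        e -= w * state[i] * state[j]
    return e

def most_coherent(n, weights, thresholds):
    """Exhaustively search all 2^n assignments for the minimum-energy one."""
    return min(product((-1, 1), repeat=n),
               key=lambda s: ising_energy(s, weights, thresholds))

# Hypothetical example: H0 and H1 are rival hypotheses; P0 and P1 are phenomena.
# H0 explains both phenomena, H1 explains only P0, and H0 contradicts H1.
labels = ["H0", "H1", "P0", "P1"]
weights = {(0, 2): 1.0, (0, 3): 1.0,   # H0 explains P0 and P1
           (1, 2): 1.0,                # H1 explains P0
           (0, 1): -2.0}               # H0 and H1 contradict each other
thresholds = [0.0, 0.0, 0.5, 0.5]      # evidence supports P0 and P1

best = most_coherent(4, weights, thresholds)
print({lab: ("accept" if s == 1 else "reject") for lab, s in zip(labels, best)})
# Expected: H0 accepted, H1 rejected, both phenomena accepted.
```

For larger networks exhaustive enumeration becomes infeasible and one would sample or anneal over states instead; the small example is only meant to show how a rival hypothesis with narrower explanatory breadth (H1) loses out to one (H0) that explains more of the evidenced phenomena.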
09:00AM - 11:45AM Forbes | Bringing Philosophy into the Nutrition Sciences Speakers
Paul Griffiths, Professor Of Philosophy, University Of Sydney
David Raubenheimer, Professor, University Of Sydney
Jonathan Sholl, Associate Professor, CNRS - University Of Bordeaux
Saana Jukola, Ruhr-University Bochum, Germany
Moderators
Nuhu Osman Attah, University Of Pittsburgh While philosophers have raised many interesting questions concerning the ethics, aesthetics, and politics of food, philosophy of science has paid little attention to the nutrition sciences. In this symposium we bring together philosophers and a scientist to explore conceptual and empirical challenges facing this largely unexplored domain. Our contributions will involve: 1) analyzing philosophical misunderstandings of evolutionary nutrition science concerning the role of adaptationist explanations; 2) offering a scientific perspective on conceptual problems facing nutrition research and one promising framework to address them; 3) providing a qualified defense of nutrient reductionism and the utility of nutrient-level explanations; and 4) exploring the epistemic and sociopolitical issues in personalized nutrition. Together, these unique contributions will help pave the way for a philosophy of the nutrition sciences. From population-level guidelines to individualized nutrition advice? – Epistemic and Sociopolitical implications of Personalized Nutrition 09:00AM - 11:45AM
Presented by :
Saana Jukola, Ruhr-University Bochum, Germany Nutrition science has traditionally relied on population-level evidence, especially evidence from observational studies. However, it is facing what could be called a ‘credibility crisis’ (Penders et al. 2017; Jukola 2021). Critics have questioned the reliability of the evidence originating from observational studies and demanded randomized controlled trials to back up claims about the link between food and health. Further, the aim of implementing effective public health interventions and providing individual guidance based on population-level evidence has been called into question (e.g., Ordovas et al. 2018). Recently, Personalized Nutrition (PN) has arisen as a challenger to the traditional population-based approach to nutritional evidence and advice. It aims to provide more effective interventions by utilizing genetic, nutritional, medical, and other information. This talk addresses the question: What are the epistemic and sociopolitical implications of the trend towards PN? In order to provide answers, I start by drawing on Longino’s (2013) account of local epistemologies to outline the epistemic landscape of PN and to show how it differs from that of so-called traditional nutrition science. Despite its recent proliferation, PN lacks a commonly agreed-upon definition (e.g., Bush et al. 2020). I suggest that there are multiple ways of conceptualizing PN and, consequently, of delineating what its central research questions and methods are. For example, there are differences in which physiological, genetic, or clinical parameters researchers focus on (e.g., Drabsch & Holzapfel 2019). Second, I assess the potential ethical and political implications of PN. By applying the so-called Coupled Ethical-Epistemic Analysis (Katikireddi & Valles 2015) as a tool for analysing the entanglement of epistemic and non-epistemic aspects of research, I hypothesize that at least some dominant conceptualizations of PN undermine effective public health interventions, with undesirable consequences. This concerns views of PN that explain differences in health outcomes by focusing on genetic variation or epigenetic marks while overlooking the effects of social and environmental factors. PN may overemphasize the responsibility of at-risk individuals for their own health to the detriment of interventions targeting social determinants of health. What Philosophy and Nutritional Ecology can teach one another 09:00AM - 11:45AM
Presented by :
Paul Griffiths, Professor Of Philosophy, University Of Sydney
David Raubenheimer, Professor, University Of Sydney Philosophers have the impression that evolutionary medicine is plagued by naive adaptationism (e.g., Murphy 2005, Valles 2011, Méthot 2012), leading to poor science through the proliferation of untestable ‘just-so stories’ and to poor medicine through not considering alternative explanations with different medical implications. It is therefore predictable that, given the centrality of optimality modelling to contemporary nutritional ecology, these criticisms will be applied, mutatis mutandis. It is also predictable that because it utilises the concept of ‘mismatch’ between mechanisms governing dietary choice and nutritional environments, nutritional ecology will be criticised for invoking an ‘environment of evolutionary adaptedness’ (Buller 2005). Such criticisms are understandable, because naive adaptationist reasoning occurs both in ‘popular nutrition’ – self-help books and lifestyle gurus – and in traditional nutrition science. However, the solution to this is not less evolutionary thinking, but more, as nutritional ecology demonstrates. Valles (2011) argues that evolutionary medicine is committed to ‘empirical adaptationism’ (Godfrey-Smith 2001), the view that forces other than natural selection can be neglected in the explanation of organismic form. One of Valles’ examples is the now exploded ‘promiscuous primate’ hypothesis about menstruation (Profet 1993). But the work which refuted that hypothesis, and which used adaptationist reasoning to establish constraints on the evolution of endometrial reabsorption, is equally part of evolutionary medicine (Strassman 1996). In a similar vein, we show that the prominence of optimality analysis and related methods in nutritional ecology does not reflect a commitment to strong empirical adaptationism. It reflects both ‘methodological adaptationism’, a powerful tool for revealing constraints on natural selection, and ‘explanatory adaptationism’ – an explanatory focus on the observed degree of adaptation. Nutritional ecology examines the interactions of animals with nutritional environments (Raubenheimer and Simpson 2012). It makes extensive use of the idea that nutritional regulatory phenotypes and their teleonomic goals (‘intake targets’) are 1) an important determinant of evolutionary fitness, 2) therefore reflect a history of natural selection, and 3) thus contain information about the interplay of optimization and constraint in evolution. It is explicitly and importantly a multi-scale theory, which studies the adjustment of organisms to their nutritional environments on scales ranging from minutes (e.g., homeostasis) to lifetimes (e.g., phenotypic plasticity), across generations (epigenetic inheritance) and in evolutionary time (gene selection). Since the causal interaction of species with their environments is bidirectional, nutritional ecology also makes use of gene-culture coevolution and niche construction when explaining nutritional phenotypes. Nutritional ecologists have examined these issues extensively in laboratory and field studies. Understanding the actual ways in which nutritional ecology is ‘adaptationist’ will provide a safeguard against what we might call ‘naive anti-adaptationism’, the failure to appreciate the methodological sophistication with which researchers use optimality analysis and related methods.
Our discussion is therefore an example of the idea at the heart of this symposium – that nutrition science is a rich and productive field for interaction between philosophy and science. Who’s afraid of nutritionism? 09:00AM - 11:45AM
Presented by :
Jonathan Sholl, Associate Professor, CNRS - University Of Bordeaux The central aim of the nutrition sciences is to understand how nutrition impacts health. One problem supposedly plaguing this endeavor is nutritionism—a ‘reductive’ focus on the role of nutrient composition or isolated nutrients (e.g., macronutrients or vitamins) for explaining a food’s effects on health (Scrinis 2008; 2013). Methodologically ‘reducing’ foods to nutrients can foster adversarial debates about the purported health effects of isolated nutrients, obscure the complexity of food-organism interactions, and distort how nutrients produce different outcomes in the context of foods, processing techniques, or dietary patterns. Anti-reductionist critiques, most of which claim that foods or dietary patterns should be the fundamental explanatory levels, permeate nutrition research (Messina et al. 2001; Zeisel et al. 2001; Hoffmann 2003; Jacobs and Tapsell 2008; Fardet and Rock 2014; 2018; 2020; Mayne, Playdon, and Rock 2016; Mozaffarian, Rosenberg, and Uauy 2018; Rees 2019; Moughan 2020; Campbell 2021). Moreover, the claim that nutritionism is the “dominant ideology” is entering philosophy (Siipi 2013; Borghini, Piras, and Serini 2021). Amidst calls to reform nutrition research (Ioannidis 2013; 2018; Mozaffarian and Forouhi 2018; Hall 2020), this presentation contributes by clarifying whether the problem raised by nutritionism is less about which level is fundamental (nutrients vs. foods), and more about which level(s) provide adequate explanations of what in nutrition impacts health. 1) For instance, are explanations that aim to link the nutrients in foods or dietary patterns to specific health outcomes inherently flawed? 2) Relatedly, can nutrient-level research elucidate causal mechanisms that are often obscured in highly complex food- or diet-based investigations? 3) Moreover, can nutrients explain variations in organismal feeding behaviors, explaining why organisms select their foods? While the complexity of food-health interactions requires more than ‘one nutrient at a time’ approaches (Simpson, Le Couteur, and Raubenheimer 2015), answering the above questions entails evaluating whether/how nutrient-level research can generate integrative explanatory frameworks (Brigandt 2010). First, I analyze claims that nutrient reductionism could be useful if it offers mechanistic explanations that combine explanatory levels (Machamer, Darden, and Craver 2000; Ströhle and Döring 2010). For instance, nutritional ecologists propose that nutrient ratio variations, e.g., protein to carbohydrates, are the common threads among foods, meals, and diets that provide robust explanations of distinct outcomes—from metabolic regulation and biological fitness to obesity (Raubenheimer and Simpson 2016; 2019; 2020)—and of organismal feeding behaviors: organisms select foods largely based on nutrient content (Simpson and Raubenheimer 2012). Can this form of nutrient-level research offer ‘appropriate’ levels of complexity to clarify how dietary constituents interact, which properties of foods or diets reliably affect health, and the mechanisms involved (Solon-Biet et al. 2019)? I further analyze this proposal by looking at how nutrients modulate cancer risk and development (Theodoratou et al. 2017; Zitvogel, Pietrocola, and Kroemer 2017; Altea‐Manzano et al. 2020; Kanarek, Petrova, and Sabatini 2020; Salvadori and Longo 2021). 
Overall, I offer an epistemological evaluation of nutrient-level research and its potentially integrative explanations of what in foods and dietary patterns affects health. | ||
09:00AM - 11:45AM Duquesne | Science Without Levels? Speakers
Joyce C. Havstad, University Of Utah
Carl Craver, Washington University
Angela Potochnik, University Of Cincinnati
Petri Ylikoski, University Of Helsinki
Moderators
Yifan Li, University Of North Carolina At Chapel Hill The concept of levels has been used broadly across the history of science and across diverse areas of contemporary science as a principle for organizing investigation and knowledge of the natural world. Invoking levels has also played key roles in philosophy of science work about scientific explanation, metaphysical (anti-)reductionism, intertheoretical reduction, the relations among fields of science, discovery and description in science, and physicalism. Recently, some philosophers have questioned this "leveling" of science and nature, alleging that this imposed, artificial structure distorts our understanding of both. In this symposium, we explore some arguments for eliminating the concept of levels from our thinking in different areas of science, and we consider whether and in what ways science and philosophy of science should do without the invocation of levels. This session includes philosophers of biology, chemistry, neuroscience, and social science so that we are well positioned to explore diverse notions of, and contexts for, 'levels.' We ask: (a) what purposes do invocations of levels serve in these different areas, (b) what are the problems with those invocations of levels, and (c) are there different ways of accomplishing these purposes that do not suffer from the same problems? What’s to Gain by Letting Go of Levels 09:00AM - 11:45AM
Presented by :
Angela Potochnik, University Of Cincinnati Levels-eliminativists, including this symposium’s participants, have raised a variety of criticisms about specific conceptions of levels as well as the broad use of levels as a metaphor or heuristic in science. The general question inspired by such criticisms and explored in this symposium is: then what? Craver (2014) has suggested it’s impractical to legislate metaphor use in science, including invocations of levels. And, in their talks, Craver and Havstad each outline visions for what should be preserved of levels concepts. In this talk, I explore the idea that scientists and philosophers should simply jettison appeals to levels in all but the most specific, narrow uses where it is appropriate, such as levels of buildings. An interesting fact about our world is that opportunities for such a use of the levels concept are strikingly infrequent. I begin by outlining some of the roles the levels concept plays in scientific discourse, focusing especially on biology. I then briefly survey criticisms of levels and outline how these criticisms are relevant to these uses of levels. This discussion is in part informed by Ylikoski’s preceding talk. I then motivate alternative concepts in place of levels in each of the uses I have surveyed. The approach I take in this talk is to use the criticisms raised against levels to help inspire better replacements. ‘Levels’ is used variously to describe the world and to describe our investigations and representations of the world; any replacement must apply to phenomena or to scientific methods (not both). ‘Levels’ assumes regularity and well-orderedness of our world; any replacement should take seriously interconnection and variability. One primary conclusion of my talk is that no single concept is apt for all the distinct roles ‘levels’ plays; this is one source of the difficulties plaguing the concept. Thus, I cannot hope to motivate a unitary replacement concept in this talk. Instead, I conclude by surveying some alternative ways to structure biology textbooks, instead of the now-ubiquitous framing of levels of organization, to illustrate the opportunities for theoretical improvements created by letting go of levels. Surely the levels concept will always be an option for various theoretical uses in science and philosophy. My point here is that turning to levels eclipses other possibilities, some of which may well lead to theoretical or methodological improvements. Getting lost with levels: the sociological micro-macro problem 09:00AM - 11:45AM
Presented by :
Petri Ylikoski, University Of Helsinki The intuitive notion of level is often employed by social scientists and philosophers of social science to conceptualize tricky theoretical challenges. While it serves as an organizing metaphor for thinking, its assumptions and implications are never fully articulated. Consequently, unacknowledged and unhelpful conceptual commitments may be introduced to the debate. In this paper, I will show how this happens in the case of the sociological micro-macro problem. Sociology deals with phenomena at a wide variety of temporal and spatial scales, from an individual's cognitive and emotional processes to long-term changes in territorial societies. Connecting the data and theories of phenomena at these various scales is an important but thorny theoretical challenge. When combined with the competing explanatory ambitions of different research traditions, fears (or fantasies) of reductionism, and general conceptual ambiguity, it is easy to understand why thinking in terms of levels has felt tempting to many. However, we should avoid the temptation for three sorts of reasons. First, conceptualizing the micro-macro problem in terms of levels misses crucial features of the problem. The levels mindset often abstracts away from the heterogeneity of micro and macro properties, making the discussion sterile and difficult to challenge. Similarly, it misses the contrastive nature of the micro-macro distinction: social scientists use the distinction flexibly, and the same phenomenon can be either micro or macro depending on its contrasts. Finally, the levels perspective makes it difficult to see that the relevant sociological questions are more about causation, dynamics, and history than about constitution or realization. Second, the levels conceptualization introduces assumptions that are both unnecessary and unhelpful. For example, philosophers often automatically assume that the levels in the social sciences are both comprehensive and unique. However, neither of these assumptions has ever been demonstrated. On the contrary, there exist substantial challenges to such presumptions. Even the more modest question "how many levels are there in the social sciences?" might not have a meaningful answer. Furthermore, the levels mindset invites poorly justified causal assumptions. For example, the beliefs that there is some explanatorily privileged level or that causes and effects must be at the "same level" or of the same granularity lack independent justification. Third, thinking about the micro-macro problem in terms of levels suggests solutions that are distractions from the point of view of the development of substantial social scientific theories. The conceptualization of the micro-macro problem has invited philosophers of social science to import conceptual tools from philosophy of mind. There has been a hope that concepts like supervenience, realization, and downward causation could help make sense of the micro-macro problem or the issues related to methodological individualism. However, this has been entirely unhelpful. The relation between micro and macro is not analogous to the relation between mind and brain. Furthermore, the debate has turned a substantial theoretical challenge into a philosophical puzzle that does not need any concrete social scientific concepts. Defending Levels 09:00AM - 11:45AM
Presented by :
Carl Craver, Washington University Eliminativism about levels is an over-reaction to a real problem, one that instead demands principled pluralism. Levels eliminativism is motivated in part by the recognition of systematic failures of entailment between seemingly related but distinct ways of talking about levels. Levels pluralism, in contrast to eliminativism and to a facile, “anything goes” attitude, recognizes the distinct roles that “leveling,” as Havstad calls it, plays in the context of different intellectual practices and pursuits. After clarifying what I mean by saying that level talk is metaphorical, I suggest some basic questions (the relata, relations, and placement questions) that can help to diagnose the sense of level at play in the context of distinct scientific and philosophical pursuits. As incremental progress toward that end, I distinguish contexts of practice involved in explaining natural phenomena (levels of organization), in describing a system from different vantage points (e.g., Marr’s levels; the personal-sub-personal distinction), and in describing clusters of scientific activity directed at items in a given size scale (call these Feynman levels). I show that the sense of level in each of these distinct contexts a) plays a useful scientific role (with the possible exception of Feynman levels, as I’ll explain), and b) answers the relata, relations, and placement questions differently. This not only stands as a display of the sort of pluralism in the levels metaphor we find operating in science and philosophy (itself a curious social/psychological phenomenon) but also recommends significant caution against running these constructs together. Unified Leveling, Disparate Levels 09:00AM - 11:45AM
Presented by :
Joyce C. Havstad, University Of Utah Philosophical critique of scientific levels or candidate level-systems often blends into critique of the activity of leveling. Here I defend leveling, the activity, with no commitment to any given scientific system of levels or the notion that within a given scientific system there is necessarily one best way to level that system. After looking at instances of leveling across the life sciences, I conclude that to level something is to either conceive of that thing in a more inclusive context than that of just itself, or conceive of that thing as providing, for certain other things, a more inclusive context than just what is provided by each of those things by themselves. I also notice there are different kinds of context in which one can level: when leveling, the varying context might be spatial, temporal, compositional, conceptual, detail- or information-oriented, functional, or more. We like to level in lots of different ways, and to level lots of different things. Yet I still think it possible to defend a concept of leveling which respects the different and diverse outcomes of this conceptually coherent leveling activity. My primary aim is to make sense of different kinds of level talk—all of which rely on a concept of leveling, and seem worth saving (to me): for instance, talk of wholes being on a different compositional level than their parts; talk of first-order and second-order sentences being on different referential levels from one another; talk of cellular organelles, cells, and organs being on different spatial levels; talk of ages, epochs, periods, and eras being at different temporal levels; talk of tokens and types, instantiations and abstractions, thick and thin descriptions all being on different conceptual levels; and more. Secondarily, I hope to successfully relate leveling in the context of abstraction to leveling in these other contexts. When we conceive of a thing in a more inclusive context than just itself, we generally talk of having gone up a level. When we conceive of a thing as providing, for certain other things, a more inclusive context, we generally talk of having gone down a level. But “up” and “down” are just relational signs that indicate movement in opposing directions from one another. Moving up to a higher level in the sense of abstraction is often more inclusive in the sense of number of things included, but less inclusive in the sense of amount known about or ascribable to each thing included. Moving down to a lower level in the sense of abstraction is often less inclusive in terms of the number of things rightfully included in that new context but more inclusive in the sense of details attributable to each member of the set. This is a case where the relevant signs might flip, depending on what context or dimension is being emphasized, because there is often a negative correlation between amount of detail and extent of abstraction. | ||
09:00AM - 11:45AM Smithfield | Expanding the Frontier Between Philosophy of Science and Bioethics Speakers
Rachel Ankeny, University Of Adelaide
Michael Deem, University Of Pittsburgh
Lucas Matthews, Columbia University
Lauren Wilson, University Of Minnesota
Moderators
Andrew Evans, Graduate Student, University Of Cincinnati Philosophers of science have a critical role to play in analyzing technical scientific concepts underlying pressing ethical debates, including informed consent, scientific racism, and human genome editing. Growing awareness of a connection between philosophy of science and bioethics raises an important question: How would scholarship progress if philosophers of science engaged more deliberately with issues that inform bioethical debates? This symposium will demonstrate a variety of ways in which collaborations and cross-disciplinary conversations between bioethicists and philosophers of science can be mutually beneficial in the areas of genomics and genetics. | ||
09:00AM - 11:45AM Board Room | Mind the Gap: Concepts in Quantum Mechanics, Quantum Chemistry, and Standard Chemistry Speakers
Robin Hendry, Speaker, Durham University, UK
Andrea Woody, University Of Washington
Vanessa A. Seifert, University Of Athens
William Goodwin, University Of South Florida
Moderators
Juan Camilo Martínez González, National Council Of Scientific And Technological Research This symposium investigates the relationships between the concepts of standard chemistry (e.g. bonds or molecular structure) and quantum mechanical reconstructions of these ideas. Though in many cases there is no straightforward reduction of these standard concepts into quantum mechanical terms, quantum mechanics, through the interfield theory of quantum chemistry, has nevertheless had a profound impact on the development of standard chemistry. Therefore, understanding the mechanisms by which, and the constraints under which, quantum mechanics can be recast and assimilated with standard chemical concepts is crucial to appreciating how these conceptually distinct practices have managed to interact so fruitfully. This fruitful interaction and its philosophical implications are thus the proposed focus of this symposium. Strategic Constraints on Interacting Practices: Conceptual Accommodation in Quantum Chemistry 09:00AM - 11:45AM
Presented by :
William Goodwin, University Of South Florida Broadly speaking, this paper shows how the interaction between scientific practices may be subject to strategic constraints; that is, fruitful interaction between practices may require finding ways to adapt one practice to the strategies informing the other. One way, therefore, that the concepts within an interfield theory develop is by recrafting concepts originating in one practice in terms compatible with the strategies guiding the other. This sort of conceptual accommodation is often a prerequisite to fruitful interactions between practices initially directed toward different sorts of aims. I explore these general themes by considering both the strategic constraints operative in the interfield theory of quantum chemistry and the conceptual accommodation directed to meet them. To accomplish this, the paper supplies an account of the constraints put on the explanatory concepts of organic chemistry by the aims that have shaped the field. Because of the guiding interest in synthesis, I argue, the explanatory concepts of organic chemistry support a compositional strategy. This is how they manage to be outward-looking and thereby facilitate the reasoning by structural analogy crucial to crafting plausible syntheses of novel compounds. What makes a concept or classification “adequate or suitable” for use in organic chemistry is, therefore, its compatibility with the compositional strategy adopted by the discipline. Because quantum mechanics is responsive to different interests, quantum chemists have had to find ways to recreate this compositionality within their own framework in order for quantum chemistry to become broadly useful in support of organic chemistry. Orbital diagrams, for example, have been a useful tool for organic chemists at least in part because they are compatible with this compositional strategy. Several other ways of ensuring compatibility with compositionality that do not rely on diagrams or spatial representations have also been employed by quantum chemists. For instance, by introducing ‘localizing assumptions’ or using an ‘independent particle model’, quantum chemists have found ways to build their representation of a molecule out of pieces that can potentially recur in descriptions of distinct chemical structures. This allows insights generated in the account of one molecule to be potentially applicable to a class of other structures that contain some of the same structural components. In this way, quantum chemistry can potentially contribute to the project of understanding novel molecules in terms of structural analogies between their components and known compounds. I hope that identifying compatibility with compositionality as the key feature of the explanatory concepts in organic chemistry brings to light the strategic constraints operative within quantum chemistry. This in turn makes it possible to appreciate one significant form of conceptual accommodation that takes place in this interfield theory. Quantum mechanics and the chemical bond 09:00AM - 11:45AM
Presented by :
Robin Hendry, Speaker, Durham University, UK Does quantum mechanics accommodate chemical intuitions about bonds, or sweep them away in favour of something new? In this paper I look at foundational issues arising for two revisionary efforts to embed the chemical bond into quantum mechanics. Energy and structure: Since the 1930s, a standard way to explain the stability of a molecule within quantum chemistry has been to use correlation diagrams based on Molecular Orbital (MO) theory. Such diagrams allow estimates of the total electronic energy of a molecule and comparison with the separated atoms: H2 exists because its energy is lower than that of two isolated hydrogen atoms, while He2 does not exist because its energy would be higher than that of two isolated helium atoms. Hendry (2008) has distinguished the structural and the energetic conceptions of the chemical bond. The structural conception tries to identify the explanatory role of the covalent bond in chemistry, then work out what realises that role. The energetic view draws on physical rather than chemical explanation: how the stability of molecules is explained within quantum mechanics. Taken as a guide, MO theory suggests that facts about bonds are determined by facts about bonding at the level of whole molecules. This would be a radical revision of a longstanding chemical concept, but perhaps quantum mechanics forces that on us: Weisberg (2008) has challenged the quantum-mechanical adequacy of the more retentionist structural view. The Quantum Theory of Atoms in Molecules (QTAIM): The late Richard Bader (1990) analysed the distribution of electron density within molecules, arguing that it is possible to construct well-motivated quantum-mechanical correlates of some classical chemical concepts, including bonds between atoms. However, Bader thought that QTAIM provides not bonds but ‘bond paths’ and that chemical intuitions which cannot be justified by quantum mechanics should be discarded. Esser (2019) has argued further that QTAIM provides a third ‘interactive’ conception of bonding. In this paper I will respond to Weisberg’s challenge to the structural conception, but also issue my own challenge to the energetic view: the identification of a bond with a change in energy breaks down for anything more complex than diatomic species. I also argue that Bader’s dismissal of classical ideas about bonds as mere ‘chemical intuition’ is too quick: this is a well-developed body of theory that has made important contributions to the development of chemistry. It should be given a robust defence, not a dismissal. Descriptive and Prescriptive: Reflecting on Molecular Concepts 09:00AM - 11:45AM
Presented by :
Andrea Woody, University Of Washington When philosophers investigate molecular concepts to determine whether particular accounts of molecules are satisfactory, they face a methodological challenge: they must make assumptions regarding the role(s) such concepts are intended to play. I suggest recognizing a distinction between explanatory and ontological concepts. While scientific theories intimately intertwine the explanatory and the ontological, they should not be equated. While all panelists in this symposium are discussing molecular concepts, I will argue we’re not all talking about the same thing. Recognizing this may dispel some apparent tensions that have seemed inherent in attempts to locate, or recreate, chemists’ molecules in quantum theory. Molecules as Quantum Objects 09:00AM - 11:45AM
Presented by :
Vanessa A. Seifert, University Of Athens In quantum mechanics no specific molecular structure is assigned from first principles. Franklin and Seifert (2020) argue that this is due to the measurement problem. I explore the implications of this for the metaphysical understanding of structure. Specifically, I propose two metaphysical views: the dispositional and the relational view. According to the first, isolated molecules maintain their structure only as dispositions. On the second, structure comes about only in relation to some environment. I evaluate how these two views fit with the metaphysical implications of realist interpretations of quantum mechanics and conclude that both views radically revise our understanding of structure. | ||
10:00AM - 10:15AM Virtual Room | Coffee Break |